From f4193c3b5c013df00b6d05805bf1cc99bebe02bf Mon Sep 17 00:00:00 2001
From: Josh Soref
Date: Mon, 18 Jan 2021 07:43:43 -0500
Subject: Spelling md (#10508)

* spelling: activity
* spelling: adding
* spelling: addresses
* spelling: administrators
* spelling: alarm
* spelling: alignment
* spelling: analyzing
* spelling: apcupsd
* spelling: apply
* spelling: around
* spelling: associated
* spelling: automatically
* spelling: availability
* spelling: background
* spelling: bandwidth
* spelling: berkeley
* spelling: between
* spelling: celsius
* spelling: centos
* spelling: certificate
* spelling: cockroach
* spelling: collectors
* spelling: concatenation
* spelling: configuration
* spelling: configured
* spelling: continuous
* spelling: correctly
* spelling: corresponding
* spelling: cyberpower
* spelling: daemon
* spelling: dashboard
* spelling: database
* spelling: deactivating
* spelling: dependencies
* spelling: deployment
* spelling: determine
* spelling: downloading
* spelling: either
* spelling: electric
* spelling: entity
* spelling: entrant
* spelling: enumerating
* spelling: environment
* spelling: equivalent
* spelling: etsy
* spelling: everything
* spelling: examining
* spelling: expectations
* spelling: explicit
* spelling: explicitly
* spelling: finally
* spelling: flexible
* spelling: further
* spelling: hddtemp
* spelling: humidity
* spelling: identify
* spelling: importance
* spelling: incoming
* spelling: individual
* spelling: initiate
* spelling: installation
* spelling: integration
* spelling: integrity
* spelling: involuntary
* spelling: issues
* spelling: kernel
* spelling: language
* spelling: libwebsockets
* spelling: lighttpd
* spelling: maintained
* spelling: meaningful
* spelling: memory
* spelling: metrics
* spelling: miscellaneous
* spelling: monitoring
* spelling: monitors
* spelling: monolithic
* spelling: multi
* spelling: multiplier
* spelling: navigation
* spelling: noisy
* spelling: number
* spelling: observing
* spelling: omitted
* spelling: orchestrator
* spelling: overall
* spelling: overridden
* spelling: package
* spelling: packages
* spelling: packet
* spelling: pages
* spelling: parameter
* spelling: parsable
* spelling: percentage
* spelling: perfect
* spelling: phpfpm
* spelling: platform
* spelling: preferred
* spelling: prioritize
* spelling: probabilities
* spelling: process
* spelling: processes
* spelling: program
* spelling: qos
* spelling: quick
* spelling: raspberry
* spelling: received
* spelling: recvfile
* spelling: red hat
* spelling: relatively
* spelling: reliability
* spelling: repository
* spelling: requested
* spelling: requests
* spelling: retrieved
* spelling: scenarios
* spelling: see all
* spelling: supported
* spelling: supports
* spelling: temporary
* spelling: tsdb
* spelling: tutorial
* spelling: updates
* spelling: utilization
* spelling: value
* spelling: variables
* spelling: visualize
* spelling: voluntary
* spelling: your

Signed-off-by: Josh Soref
---
 .travis/README.md                                |  4 ++--
 BUILD.md                                         |  6 +++---
 CHANGELOG.md                                     |  8 ++++----
 HISTORICAL_CHANGELOG.md                          | 14 +++++++-------
 backends/TIMESCALE.md                            |  2 +-
 backends/opentsdb/README.md                      |  2 +-
 build_external/README.md                         |  2 +-
 claim/README.md                                  |  2 +-
 collectors/COLLECTORS.md                         | 10 +++++-----
 collectors/README.md                             |  4 ++--
 collectors/REFERENCE.md                          |  4 ++--
 collectors/apps.plugin/README.md                 |  2 +-
 collectors/cgroups.plugin/README.md              |  2 +-
 collectors/ebpf.plugin/README.md                 |  4 ++--
 collectors/freeipmi.plugin/README.md             |  2 +-
 collectors/node.d.plugin/stiebeleltron/README.md |  4 ++--
 collectors/perf.plugin/README.md                 |  2 +-
 collectors/plugins.d/README.md                   |  2 +-
 collectors/python.d.plugin/am2320/README.md      |  6 +++---
 collectors/python.d.plugin/anomalies/README.md   | 14 +++++++-------
 collectors/python.d.plugin/dovecot/README.md     |  4 ++--
 collectors/python.d.plugin/go_expvar/README.md   |  2 +-
 collectors/python.d.plugin/mongodb/README.md     |  2 +-
 collectors/python.d.plugin/mysql/README.md       |  6 +++---
 collectors/python.d.plugin/postgres/README.md    |  2 +-
 collectors/python.d.plugin/proxysql/README.md    |  4 ++--
 collectors/python.d.plugin/samba/README.md       |  2 +-
 collectors/python.d.plugin/springboot/README.md  |  2 +-
 collectors/statsd.plugin/README.md               |  4 ++--
 collectors/tc.plugin/README.md                   |  2 +-
 daemon/README.md                                 |  2 +-
 daemon/config/README.md                          |  4 ++--
 docs/Running-behind-apache.md                    |  2 +-
 docs/collect/container-metrics.md                |  2 +-
 docs/collect/enable-configure.md                 |  6 +++---
 docs/collect/how-collectors-work.md              |  2 +-
 docs/export/external-databases.md                |  2 +-
 docs/guides/collect-apache-nginx-web-logs.md     |  2 +-
 docs/guides/longer-metrics-storage.md            |  2 +-
 docs/guides/monitor/kubernetes-k8s-netdata.md    |  4 ++--
 docs/guides/monitor/pi-hole-raspberry-pi.md      |  4 ++--
 docs/guides/monitor/process.md                   | 10 +++++-----
 docs/guides/step-by-step/step-05.md              |  2 +-
 docs/guides/step-by-step/step-06.md              |  2 +-
 docs/guides/step-by-step/step-08.md              |  2 +-
 docs/guides/step-by-step/step-09.md              |  2 +-
 docs/guides/step-by-step/step-10.md              |  2 +-
 .../monitor-debug-applications-ebpf.md           |  4 ++--
 docs/netdata-security.md                         |  4 ++--
 exporting/README.md                              |  6 +++---
 exporting/TIMESCALE.md                           |  2 +-
 health/REFERENCE.md                              |  6 +++---
 health/notifications/alerta/README.md            |  4 ++--
 health/notifications/awssns/README.md            |  2 +-
 health/notifications/email/README.md             |  2 +-
 health/notifications/irc/README.md               |  2 +-
 health/notifications/prowl/README.md             |  4 ++--
 health/notifications/rocketchat/README.md        |  2 +-
 health/notifications/stackpulse/README.md        |  4 ++--
 packaging/DISTRIBUTIONS.md                       |  2 +-
 packaging/docker/README.md                       |  4 ++--
 packaging/installer/REINSTALL.md                 |  2 +-
 packaging/installer/UPDATE.md                    |  4 ++--
 packaging/installer/methods/cloud-providers.md   |  2 +-
 packaging/installer/methods/freebsd.md           |  4 ++--
 packaging/installer/methods/kickstart-64.md      |  6 +++---
 packaging/installer/methods/kickstart.md         |  2 +-
 packaging/installer/methods/kubernetes.md        |  6 +++---
 packaging/installer/methods/manual.md            |  4 ++--
 packaging/installer/methods/packages.md          |  4 ++--
 packaging/installer/methods/pfsense.md           |  2 +-
 packaging/installer/methods/source.md            | 22 +++++++++++-----------
 packaging/maintainers/README.md                  |  2 +-
 packaging/makeself/README.md                     |  2 +-
 parser/README.md                                 |  6 +++---
 streaming/README.md                              |  2 +-
 web/api/badges/README.md                         | 10 +++++-----
 web/api/health/README.md                         |  2 +-
 web/gui/custom/README.md                         |  2 +-
 web/server/static/README.md                      |  2 +-
 80 files changed, 156 insertions(+), 156 deletions(-)

diff --git a/.travis/README.md b/.travis/README.md
index 4ffdbe6ae9..8927dd4c54 100644
--- a/.travis/README.md
+++ b/.travis/README.md
@@ -66,7 +66,7 @@ Briefly our activities include:
 ## Artifacts validation
 
 At this point we know our software is building, we need to go through the a set of checks, to guarantee
-that our product meets certain epxectations. At the current stage, we are focusing on basic capabilities
+that our product meets certain expectations. At the current stage, we are focusing on basic capabilities
 like installing in different distributions, running the full lifecycle of install-run-update-install and so on. We are
 still working on enriching this with more and more use cases, to get us closer to achieving full stability of our
 software. Briefly we currently evaluate the following activities:
@@ -121,7 +121,7 @@ The following distributions are supported
 - Bionic
 - artful
 
-- Enterprise Linux versions (Covers Redhat, CentOS, and Amazon Linux with version 6)
+- Enterprise Linux versions (Covers Red Hat, CentOS, and Amazon Linux with version 6)
   - Version 8 (TBD)
   - Version 7
   - Version 6
diff --git a/BUILD.md b/BUILD.md
index deb30b37f4..049c86d3f5 100644
--- a/BUILD.md
+++ b/BUILD.md
@@ -57,7 +57,7 @@ cmake -DENABLE_DBENGINE
 ### Dependency detection
 
-We have a mixture of soft- and hard-depedencies on libraries. For most of these we expect
+We have a mixture of soft- and hard-dependencies on libraries. For most of these we expect
 `pkg-config` information, for some we manually probe for libraries and include files. We
 should treat all of the external dependencies consistently:
@@ -346,10 +346,10 @@ We should follow these steps:
 9. Deprecate / remove the autotools build-system completely (so that we can support a
    single build-system).
 
-Some smaller miscellaeneous suggestions:
+Some smaller miscellaneous suggestions:
 
 1. Remove the `_Generic` / `strerror_r` config to make the system simpler (use the technique
-   on the blog post to make the standard version re-enterant so that it is thread-safe).
+   on the blog post to make the standard version re-entrant so that it is thread-safe).
 2. Pull in jemalloc by source into the repo if it is our preferred malloc implementation.
 # Background
 
diff --git a/CHANGELOG.md b/CHANGELOG.md
index 0a183059dc..b8271a092d 100644
--- a/CHANGELOG.md
+++ b/CHANGELOG.md
@@ -33,9 +33,9 @@
 - Exclude autofs by default in diskspace plugin [\#10441](https://github.com/netdata/netdata/pull/10441) ([nabijaczleweli](https://github.com/nabijaczleweli))
 - New eBPF kernel [\#10434](https://github.com/netdata/netdata/pull/10434) ([thiagoftsm](https://github.com/thiagoftsm))
 - Update and improve the Netdata style guide [\#10433](https://github.com/netdata/netdata/pull/10433) ([joelhans](https://github.com/joelhans))
-- Change HDDtemp to report None instead of 0 [\#10429](https://github.com/netdata/netdata/pull/10429) ([slavox](https://github.com/slavox))
+- Change hddtemp to report None instead of 0 [\#10429](https://github.com/netdata/netdata/pull/10429) ([slavox](https://github.com/slavox))
 - Use bash shell as user netdata for debug [\#10425](https://github.com/netdata/netdata/pull/10425) ([Steve8291](https://github.com/Steve8291))
-- Qick and dirty fix for \#10420 [\#10424](https://github.com/netdata/netdata/pull/10424) ([skibbipl](https://github.com/skibbipl))
+- Quick and dirty fix for \#10420 [\#10424](https://github.com/netdata/netdata/pull/10424) ([skibbipl](https://github.com/skibbipl))
 - Add instructions on enabling explicitly disabled collectors [\#10418](https://github.com/netdata/netdata/pull/10418) ([joelhans](https://github.com/joelhans))
 - Change links at bottom of all install docs [\#10416](https://github.com/netdata/netdata/pull/10416) ([joelhans](https://github.com/joelhans))
 - Improve configuration docs with common changes and start/stop/restart directions [\#10415](https://github.com/netdata/netdata/pull/10415) ([joelhans](https://github.com/joelhans))
@@ -139,7 +139,7 @@
 - add `nvidia\_smi` collector data to the dashboard\_info.js [\#10230](https://github.com/netdata/netdata/pull/10230) ([ilyam8](https://github.com/ilyam8))
 - health: convert `elasticsearch\_last\_collected` alarm to template [\#10226](https://github.com/netdata/netdata/pull/10226) ([ilyam8](https://github.com/ilyam8))
 - streaming: fix a typo in the README.md [\#10225](https://github.com/netdata/netdata/pull/10225) ([ilyam8](https://github.com/ilyam8))
-- collectors/xenstat.plugin: recieved =\> received [\#10224](https://github.com/netdata/netdata/pull/10224) ([ilyam8](https://github.com/ilyam8))
+- collectors/xenstat.plugin: received =\> received [\#10224](https://github.com/netdata/netdata/pull/10224) ([ilyam8](https://github.com/ilyam8))
 - dashboard\_info.js: fix a typo \(vernemq\) [\#10223](https://github.com/netdata/netdata/pull/10223) ([ilyam8](https://github.com/ilyam8))
 - Fix chart filtering [\#10218](https://github.com/netdata/netdata/pull/10218) ([vlvkobal](https://github.com/vlvkobal))
 - Don't stop Prometheus remote write collector when data is not available for dimension formatting [\#10217](https://github.com/netdata/netdata/pull/10217) ([vlvkobal](https://github.com/vlvkobal))
@@ -226,7 +226,7 @@
 - Fix memory mode none not dropping stale dimension data [\#9917](https://github.com/netdata/netdata/pull/9917) ([mfundul](https://github.com/mfundul))
 - Fix memory mode none not marking dimensions as obsolete. [\#9912](https://github.com/netdata/netdata/pull/9912) ([mfundul](https://github.com/mfundul))
 - Fix buffer overflow in rrdr structure [\#9903](https://github.com/netdata/netdata/pull/9903) ([mfundul](https://github.com/mfundul))
-- Fix missing newline concatentation slash causing rpm build to fail [\#9900](https://github.com/netdata/netdata/pull/9900) ([prologic](https://github.com/prologic))
+- Fix missing newline concatenation slash causing rpm build to fail [\#9900](https://github.com/netdata/netdata/pull/9900) ([prologic](https://github.com/prologic))
 - installer: update go.d.plugin version to v0.22.0 [\#9898](https://github.com/netdata/netdata/pull/9898) ([ilyam8](https://github.com/ilyam8))
 - Add v2 HTTP message with compression to ACLK [\#9895](https://github.com/netdata/netdata/pull/9895) ([underhood](https://github.com/underhood))
 - Fix lock order reversal \(Coverity defect CID 361629\) [\#9888](https://github.com/netdata/netdata/pull/9888) ([mfundul](https://github.com/mfundul))
diff --git a/HISTORICAL_CHANGELOG.md b/HISTORICAL_CHANGELOG.md
index 9698a20d81..16ef78616b 100644
--- a/HISTORICAL_CHANGELOG.md
+++ b/HISTORICAL_CHANGELOG.md
@@ -164,7 +164,7 @@ netdata (1.6.0) - 2017-03-20
    1. number of sensors by state
    2. number of events in SEL
-   3. Temperatures CELCIUS
+   3. Temperatures CELSIUS
    4. Temperatures FAHRENHEIT
    5. Voltages
    6. Currents
@@ -239,7 +239,7 @@ netdata (1.5.0) - 2017-01-22
    Vladimir Kobal (@vlvkobal) has done a magnificent work porting netdata to FreeBSD and MacOS.
 
-   Everyhing works: cpu, memory, disks performance, disks space,
+   Everything works: cpu, memory, disks performance, disks space,
    network interfaces, interrupts, IPv4 metrics, IPv6 metrics
    processes, context switches, softnet, IPC queues,
    IPC semaphores, IPC shared memory, uptime, etc. Wow!
@@ -382,7 +382,7 @@ netdata (1.4.0) - 2016-10-04
       cgroups,
       hddtemp,
       sensors,
-      phpfm,
+      phpfpm,
       tc (QoS)
 
    In detail:
@@ -483,7 +483,7 @@ netdata (1.3.0) - 2016-08-28
    - hddtemp
    - mysql
    - nginx
-   - phpfm
+   - phpfpm
    - postfix
    - sensors
    - squid
@@ -518,7 +518,7 @@ netdata (1.3.0) - 2016-08-28
  - apps.plugin improvements:
 
      - can now run with command line argument 'without-files'
-       to prevent it from enumating all the open files/sockets/pipes
+       to prevent it from enumerating all the open files/sockets/pipes
        of all running processes.
 
      - apps.plugin now scales the collected values to match the
@@ -575,7 +575,7 @@ netdata (1.2.0) - 2016-05-16
    20% better performance for the core of netdata.
 
    - More efficient threads locking in key components
-     contributed to the overal efficiency.
+     contributed to the overall efficiency.
 
    - netdata now has a CENTRAL REGISTRY !
@@ -625,7 +625,7 @@ netdata (1.1.0) - 2016-04-20
 - Data collection: apps.plugin: grouping of processes now support patterns
 - Data collection: apps.plugin: now it is faster, after the new features added
 - Data collection: better auto-detection of partitions for disk monitoring
-- Data collection: better fireqos intergation for QoS monitoring
+- Data collection: better fireqos integration for QoS monitoring
 - Data collection: squid monitoring now uses squidclient
 - Data collection: SNMP monitoring now supports 64bit counters
 - API: fixed issues in CSV output generation
diff --git a/backends/TIMESCALE.md b/backends/TIMESCALE.md
index 854c4112e8..05a3c3b470 100644
--- a/backends/TIMESCALE.md
+++ b/backends/TIMESCALE.md
@@ -27,7 +27,7 @@ TimescaleDB.
 Finally, another member of Netdata's community has built a project that quickly launches Netdata, TimescaleDB, and
 Grafana in easy-to-manage Docker containers. Rune Juhl Jacobsen's
 [project](https://github.com/runejuhl/grafana-timescaledb) uses a `Makefile` to create everything, which makes it
-perferct for testing and experimentation.
+perfect for testing and experimentation.
 ## Netdata↔TimescaleDB in action
 
diff --git a/backends/opentsdb/README.md b/backends/opentsdb/README.md
index b9d0b9873b..5ba7b12c58 100644
--- a/backends/opentsdb/README.md
+++ b/backends/opentsdb/README.md
@@ -21,7 +21,7 @@ change the `destination = localhost:4242` line accordingly.
 As of [v1.16.0](https://github.com/netdata/netdata/releases/tag/v1.16.0), Netdata can send metrics to OpenTSDB using
 TLS/SSL. Unfortunately, OpenTDSB does not support encrypted connections, so you will have to configure a reverse proxy
-to enable HTTPS communication between Netdata and OpenTSBD. You can set up a reverse proxy with
+to enable HTTPS communication between Netdata and OpenTSDB. You can set up a reverse proxy with
 [Nginx](/docs/Running-behind-nginx.md).
 
 After your proxy is configured, make the following changes to `netdata.conf`:
diff --git a/build_external/README.md b/build_external/README.md
index f52f55734a..6a1e30a574 100644
--- a/build_external/README.md
+++ b/build_external/README.md
@@ -12,7 +12,7 @@ decoupled. This allows:
 - Cross-compilation (e.g. linux development from macOS)
 - Cross-distro (e.g. using CentOS user-land while developing on Debian)
 - Multi-host scenarios (e.g. parent-child configurations)
-- Bleeding-edge sceneraios (e.g. using the ACLK (**currently for internal-use only**))
+- Bleeding-edge scenarios (e.g. using the ACLK (**currently for internal-use only**))
 
 The advantage of these scenarios is that they allow **reproducible** builds and testing for
 developers. This is the first iteration of the build-system to allow the team to use
diff --git a/claim/README.md b/claim/README.md
index a2e5116c13..ade6a221f2 100644
--- a/claim/README.md
+++ b/claim/README.md
@@ -304,7 +304,7 @@ This node no longer has access to the credentials it was claimed with and cannot
 You will still be able to see this node in your War Rooms in an **unreachable** state.
 If you want to reclaim this node into a different Space, you need to create a new identity by adding `-id=$(uuidgen)` to
-the claiming script parameters. Make sure that you have the `uuidgen-runtime` packagen installed, as it is used to run the command `uuidgen`. For example, using the default claiming script:
+the claiming script parameters. Make sure that you have the `uuidgen-runtime` package installed, as it is used to run the command `uuidgen`. For example, using the default claiming script:
 
 ```bash
 sudo netdata-claim.sh -token=TOKEN -rooms=ROOM1,ROOM2 -url=https://app.netdata.cloud -id=$(uuidgen)
diff --git a/collectors/COLLECTORS.md b/collectors/COLLECTORS.md
index 939594e0ce..e718fd239a 100644
--- a/collectors/COLLECTORS.md
+++ b/collectors/COLLECTORS.md
@@ -222,7 +222,7 @@ configure any of these collectors according to your setup and infrastructure.
 - [ISC DHCP (Go)](https://learn.netdata.cloud/docs/agent/collectors/go.d.plugin/modules/isc_dhcpd): Reads a
   `dhcpd.leases` file and collects metrics on total active leases, pool active leases, and pool utilization.
 - [ISC DHCP (Python)](/collectors/python.d.plugin/isc_dhcpd/README.md): Reads `dhcpd.leases` file and reports DHCP
-  pools utiliation and leases statistics (total number, leases per pool).
+  pools utilization and leases statistics (total number, leases per pool).
 - [OpenLDAP](/collectors/python.d.plugin/openldap/README.md): Provides statistics information from the OpenLDAP
   (`slapd`) server.
 - [NSD](/collectors/python.d.plugin/nsd/README.md): Monitor nameserver performance metrics using the `nsd-control`
@@ -357,7 +357,7 @@ The Netdata Agent can collect these system- and hardware-level metrics using a v
 - [BCACHE](/collectors/proc.plugin/README.md): Monitor BCACHE statistics with the the `proc.plugin` collector.
 - [Block devices](/collectors/proc.plugin/README.md): Gather metrics about the health and performance of block
   devices using the the `proc.plugin` collector.
-- [Btrfs](/collectors/proc.plugin/README.md): Montiors Btrfs filesystems with the the `proc.plugin` collector.
+- [Btrfs](/collectors/proc.plugin/README.md): Monitors Btrfs filesystems with the the `proc.plugin` collector.
 - [Device mapper](/collectors/proc.plugin/README.md): Gather metrics about the Linux device mapper with the proc
   collector.
 - [Disk space](/collectors/diskspace.plugin/README.md): Collect disk space usage metrics on Linux mount points.
@@ -445,7 +445,7 @@ The Netdata Agent can collect these system- and hardware-level metrics using a v
 - [systemd](/collectors/cgroups.plugin/README.md): Monitor the CPU and memory usage of systemd services using the
   `cgroups.plugin` collector.
 - [systemd unit states](https://learn.netdata.cloud/docs/agent/collectors/go.d.plugin/modules/systemdunits): See the
-  state (active, inactive, activating, deactiviating, failed) of various systemd unit types.
+  state (active, inactive, activating, deactivating, failed) of various systemd unit types.
 - [System processes](/collectors/proc.plugin/README.md): Collect metrics on system load and total processes running
   using `/proc/loadavg` and the `proc.plugin` collector.
 - [Uptime](/collectors/proc.plugin/README.md): Monitor the uptime of a system using the `proc.plugin` collector.
@@ -511,10 +511,10 @@ the `go.d.plugin`.
 ## Third-party collectors
 
-These collectors are developed and maintined by third parties and, unlike the other collectors, are not installed by
+These collectors are developed and maintained by third parties and, unlike the other collectors, are not installed by
 default. To use a third-party collector, visit their GitHub/documentation page and follow their installation
 procedures.
 
-- [CyberPower UPS](https://github.com/HawtDogFlvrWtr/netdata_cyberpwrups_plugin): Polls Cyberpower UPS data using
+- [CyberPower UPS](https://github.com/HawtDogFlvrWtr/netdata_cyberpwrups_plugin): Polls CyberPower UPS data using
   PowerPanel® Personal Linux.
 - [Logged-in users](https://github.com/veksh/netdata-numsessions): Collect the number of currently logged-on users.
 - [nim-netdata-plugin](https://github.com/FedericoCeratto/nim-netdata-plugin): A helper to create native Netdata
diff --git a/collectors/README.md b/collectors/README.md
index ef1f9610c1..a37a7e890a 100644
--- a/collectors/README.md
+++ b/collectors/README.md
@@ -32,7 +32,7 @@ guide](/collectors/QUICKSTART.md).
 [Monitor Nginx or Apache web server log files with Netdata](/docs/guides/collect-apache-nginx-web-logs.md)
 
-[Monitor CockroadchDB metrics with Netdata](/docs/guides/monitor-cockroachdb.md)
+[Monitor CockroachDB metrics with Netdata](/docs/guides/monitor-cockroachdb.md)
 
 [Monitor Unbound DNS servers with Netdata](/docs/guides/collect-unbound-metrics.md)
 
@@ -40,7 +40,7 @@ guide](/collectors/QUICKSTART.md).
 ## Related features
 
-**[Dashboards](/web/README.md)**: Vizualize your newly-collect metrics in real-time using Netdata's [built-in
+**[Dashboards](/web/README.md)**: Visualize your newly-collect metrics in real-time using Netdata's [built-in
 dashboard](/web/gui/README.md).
 
 **[Backends](/backends/README.md)**: Extend our built-in [database engine](/database/engine/README.md), which supports
diff --git a/collectors/REFERENCE.md b/collectors/REFERENCE.md
index 08a405dc7b..9c6f0a61ed 100644
--- a/collectors/REFERENCE.md
+++ b/collectors/REFERENCE.md
@@ -46,7 +46,7 @@ However, there are cases that auto-detection fails. Usually, the reason is that
 allow Netdata to connect. In most of the cases, allowing the user `netdata` from `localhost` to connect and collect
 metrics, will automatically enable data collection for the application in question (it will require a Netdata restart).
 
-View our [collectors quickstart](/collectors/QUICKSTART.md) for explict details on enabling and configuring collector modules.
+View our [collectors quickstart](/collectors/QUICKSTART.md) for explicit details on enabling and configuring collector modules.
 ## Troubleshoot a collector
 
@@ -112,7 +112,7 @@ This section features a list of Netdata's plugins, with a boolean setting to ena
 	# charts.d = yes
 ```
 
-By default, most plugins are enabled, so you don't need to enable them explicity to use their collectors. To enable or
+By default, most plugins are enabled, so you don't need to enable them explicitly to use their collectors. To enable or
 disable any specific plugin, remove the comment (`#`) and change the boolean setting to `yes` or `no`.
 
 All **external plugins** are managed by [plugins.d](plugins.d/), which provides additional management options.
diff --git a/collectors/apps.plugin/README.md b/collectors/apps.plugin/README.md
index 5529226961..d10af1cdd3 100644
--- a/collectors/apps.plugin/README.md
+++ b/collectors/apps.plugin/README.md
@@ -59,7 +59,7 @@ Each of these sections provides the same number of charts:
   - Pipes open (`apps.pipes`)
 - Swap memory
   - Swap memory used (`apps.swap`)
-  - Major page faults (i.e. swap activiy, `apps.major_faults`)
+  - Major page faults (i.e. swap activity, `apps.major_faults`)
 - Network
   - Sockets open (`apps.sockets`)
diff --git a/collectors/cgroups.plugin/README.md b/collectors/cgroups.plugin/README.md
index 9b26deb2ce..21dbcae83f 100644
--- a/collectors/cgroups.plugin/README.md
+++ b/collectors/cgroups.plugin/README.md
@@ -145,7 +145,7 @@ Support per distribution:
 |Fedora 25|YES|[here](http://pastebin.com/ax0373wF)||
 |Debian 8|NO||can be enabled, see below|
 |AMI|NO|[here](http://pastebin.com/FrxmptjL)|not a systemd system|
-|Centos 7.3.1611|NO|[here](http://pastebin.com/SpzgezAg)|can be enabled, see below|
+|CentOS 7.3.1611|NO|[here](http://pastebin.com/SpzgezAg)|can be enabled, see below|
 
 ### how to enable cgroup accounting on systemd systems that is by default disabled
 
diff --git a/collectors/ebpf.plugin/README.md b/collectors/ebpf.plugin/README.md
index 44e238e36d..5ea3b49514 100644
--- a/collectors/ebpf.plugin/README.md
+++ b/collectors/ebpf.plugin/README.md
@@ -221,7 +221,7 @@ The following options are available:
 - `ports`: Define the destination ports for Netdata to monitor.
 - `hostnames`: The list of hostnames that can be resolved to an IP address.
 - `ips`: The IP or range of IPs that you want to monitor. You can use IPv4 or IPv6 addresses, use dashes to define a
-  range of IPs, or use CIDR values. The default behavior is to only collect data for private IP addresess, but this
+  range of IPs, or use CIDR values. The default behavior is to only collect data for private IP addresses, but this
   can be changed with the `ips` setting.
 
 By default, Netdata displays up to 500 dimensions on network connection charts. If there are more possible dimensions,
@@ -275,7 +275,7 @@ curl -sSL https://raw.githubusercontent.com/netdata/kernel-collector/master/tool
 If this script returns no output, your system is ready to compile and run the eBPF collector.
-If you see a warning about a missing kerkel configuration (`KPROBES KPROBES_ON_FTRACE HAVE_KPROBES BPF BPF_SYSCALL
+If you see a warning about a missing kernel configuration (`KPROBES KPROBES_ON_FTRACE HAVE_KPROBES BPF BPF_SYSCALL
 BPF_JIT`), you will need to recompile your kernel to support this configuration. The process of recompiling Linux
 kernels varies based on your distribution and version. Read the documentation for your system's distribution to
 learn more about the specific workflow for recompiling the kernel, ensuring that you set all the necessary
diff --git a/collectors/freeipmi.plugin/README.md b/collectors/freeipmi.plugin/README.md
index 64328fc9e7..52945e3c62 100644
--- a/collectors/freeipmi.plugin/README.md
+++ b/collectors/freeipmi.plugin/README.md
@@ -25,7 +25,7 @@ The plugin creates (up to) 8 charts, based on the information collected from IPM
 1. number of sensors by state
 2. number of events in SEL
-3. Temperatures CELCIUS
+3. Temperatures CELSIUS
 4. Temperatures FAHRENHEIT
 5. Voltages
 6. Currents
diff --git a/collectors/node.d.plugin/stiebeleltron/README.md b/collectors/node.d.plugin/stiebeleltron/README.md
index 30f51169b3..59bbf703c4 100644
--- a/collectors/node.d.plugin/stiebeleltron/README.md
+++ b/collectors/node.d.plugin/stiebeleltron/README.md
@@ -40,7 +40,7 @@ The charts are configurable, however, the provided default configuration collect
    - Heat circuit 1 room temperature in C (set/actual)
    - Heat circuit 2 room temperature in C (set/actual)
 
-5. **Eletric Reheating**
+5. **Electric Reheating**
 
    - Dual Mode Reheating temperature in C (hot water/heating)
 
@@ -68,7 +68,7 @@ If no configuration is given, the module will be disabled. Each `update_every` i
 Original author: BrainDoctor (github)
 
-The module supports any metrics that are parseable with RegEx. There is no API that gives direct access to the values (AFAIK), so the "workaround" is to parse the HTML output of the ISG.
+The module supports any metrics that are parsable with RegEx. There is no API that gives direct access to the values (AFAIK), so the "workaround" is to parse the HTML output of the ISG.
 
 ### Testing
 
diff --git a/collectors/perf.plugin/README.md b/collectors/perf.plugin/README.md
index d4bb41cb60..ccd185cedb 100644
--- a/collectors/perf.plugin/README.md
+++ b/collectors/perf.plugin/README.md
@@ -64,7 +64,7 @@ enable the perf plugin, edit /etc/netdata/netdata.conf and set:
 You can use the `command options` parameter to pick what data should be collected and which charts should be
 displayed. If `all` is used, all general performance monitoring counters are probed and corresponding charts are
 enabled for the available counters. You can also define a particular set of enabled charts using the
-following keywords: `cycles`, `instructions`, `branch`, `cache`, `bus`, `stalled`, `migrations`, `alighnment`,
+following keywords: `cycles`, `instructions`, `branch`, `cache`, `bus`, `stalled`, `migrations`, `alignment`,
 `emulation`, `L1D`, `L1D-prefetch`, `L1I`, `LL`, `DTLB`, `ITLB`, `PBU`.
 
 ## Debugging
 
diff --git a/collectors/plugins.d/README.md b/collectors/plugins.d/README.md
index 913ad9177c..c166e11e36 100644
--- a/collectors/plugins.d/README.md
+++ b/collectors/plugins.d/README.md
@@ -79,7 +79,7 @@ Example:
 ```
 
 The setting `enable running new plugins` sets the default behavior for all external plugins. It can be
-overriden for distinct plugins by modifying the appropriate plugin value configuration to either `yes` or `no`.
+overridden for distinct plugins by modifying the appropriate plugin value configuration to either `yes` or `no`.
 
 The setting `check for new plugins every` sets the interval between scans of the directory
 `/usr/libexec/netdata/plugins.d`. New plugins can be added any time, and Netdata will detect them in a timely manner.
diff --git a/collectors/python.d.plugin/am2320/README.md b/collectors/python.d.plugin/am2320/README.md index c17b33dfa1..14ddaa735d 100644 --- a/collectors/python.d.plugin/am2320/README.md +++ b/collectors/python.d.plugin/am2320/README.md @@ -6,7 +6,7 @@ sidebar_label: "AM2320" # AM2320 sensor monitoring with netdata -Displays a graph of the temperature and humity from a AM2320 sensor. +Displays a graph of the temperature and humidity from an AM2320 sensor. ## Requirements - Adafruit Circuit Python AM2320 library @@ -28,10 +28,10 @@ cd /etc/netdata # Replace this path with your Netdata config directory, if dif sudo ./edit-config python.d/am2320.conf ``` -Raspbery Pi Instructions: +Raspberry Pi Instructions: Hardware install: -Connect the am2320 to the Raspbery Pi I2C pins +Connect the am2320 to the Raspberry Pi I2C pins Raspberry Pi 3B/4 Pins: diff --git a/collectors/python.d.plugin/anomalies/README.md b/collectors/python.d.plugin/anomalies/README.md index 8346aa6693..1e27f3b5be 100644 --- a/collectors/python.d.plugin/anomalies/README.md +++ b/collectors/python.d.plugin/anomalies/README.md @@ -134,7 +134,7 @@ local: diffs_n: 1 # What is the typical proportion of anomalies in your data on average? - # This paramater can control the sensitivity of your models to anomalies. + # This parameter can control the sensitivity of your models to anomalies. # Some discussion here: https://github.com/yzhao062/pyod/issues/144 contamination: 0.001 @@ -142,7 +142,7 @@ local: # just the average of all anomaly probabilities at each time step include_average_prob: true - # Define any custom models you would like to create anomaly probabilties for, some examples below to show how. + # Define any custom models you would like to create anomaly probabilities for, some examples below to show how. # For example, the example below creates two custom models, one to run anomaly detection on user and system cpu for our demo servers # and one on the cpu and mem apps metrics for the python.d.plugin.

# custom_models: @@ -161,7 +161,7 @@ local: In the `anomalies.conf` file you can also define some "custom models" which you can use to group one or more metrics into a single model much as is done by default for the charts you specify. This is useful if you have a handful of metrics that exist in different charts but perhaps are related to the same underlying thing you would like to perform anomaly detection on, for example a specific app or user. -To define a custom model you would include configuation like below in `anomalies.conf`. By default there should already be some commented out examples in there. +To define a custom model you would include configuration like below in `anomalies.conf`. By default there should already be some commented out examples in there. `name` is a name you give your custom model; this is what will appear alongside any other specified charts in the `anomalies.probability` and `anomalies.anomaly` charts. `dimensions` is a string of metrics you want to include in your custom model. By default the [netdata-pandas](https://github.com/netdata/netdata-pandas) library used to pull the data from Netdata uses a "chart.a|dim.1" type of naming convention in the pandas columns it returns, hence the `dimensions` string should look like "chart.name|dimension.name,chart.name|dimension.name". The examples below hopefully make this clear. @@ -194,7 +194,7 @@ sudo su -s /bin/bash netdata /usr/libexec/netdata/plugins.d/python.d.plugin anomalies debug trace nolock ``` -## Deepdive turorial +## Deepdive tutorial If you would like to go deeper on what exactly the anomalies collector is doing under the hood then check out this [deepdive tutorial](https://github.com/netdata/community/blob/main/netdata-agent-api/netdata-pandas/anomalies_collector_deepdive.ipynb) in our community repo where you can play around with some data from our demo servers (or your own if it's accessible to you) and work through the calculations step by step.
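As a rough sketch of the "chart.name|dimension.name" convention described above, a custom model entry in `anomalies.conf` could look like the following (the model names, and the chart and dimension names, are illustrative assumptions, not the shipped examples):

```yaml
custom_models:
  # one model scoring user and system cpu together on one chart
  - name: cpu_custom
    dimensions: 'system.cpu|user,system.cpu|system'
  # one model over the python.d.plugin app group's cpu and memory
  - name: pythond_custom
    dimensions: 'apps.cpu|python.d.plugin,apps.mem|python.d.plugin'
```

Each entry becomes one model, so its dimensions are scored together as a single anomaly probability.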
@@ -206,7 +206,7 @@ If you would like to go deeper on what exactly the anomalies collector is doing - Python 3 is also required for the underlying ML libraries of [numba](https://pypi.org/project/numba/), [scikit-learn](https://pypi.org/project/scikit-learn/), and [PyOD](https://pypi.org/project/pyod/). - It may take a few hours or so (depending on your choice of `train_secs_n`) for the collector to 'settle' into its typical behaviour in terms of the trained models and probabilities you will see in the normal running of your node. - As this collector does most of the work in Python itself, with [PyOD](https://pyod.readthedocs.io/en/latest/) leveraging [numba](https://numba.pydata.org/) under the hood, you may want to try it out first on a test or development system to get a sense of its performance characteristics on a node similar to where you would like to use it. -- `lags_n`, `smooth_n`, and `diffs_n` together define the preprocessing done to the raw data before models are trained and before each prediction. This essentially creates a [feature vector](https://en.wikipedia.org/wiki/Feature_(machine_learning)#:~:text=In%20pattern%20recognition%20and%20machine,features%20that%20represent%20some%20object.&text=Feature%20vectors%20are%20often%20combined,score%20for%20making%20a%20prediction.) for each chart model (or each custom model). The default settings for these parameters aim to create a rolling matrix of recent smoothed [differenced](https://en.wikipedia.org/wiki/Autoregressive_integrated_moving_average#Differencing) values for each chart. The aim of the model then is to score how unusual this 'matrix' of features is for each chart based on what it has learned as 'normal' from the training data.
So as opposed to just looking at the single most recent value of a dimension and considering how strange it is, this approach looks at a recent smoothed window of all dimensions for a chart (or dimensions in a custom model) and asks how unusual the data as a whole looks. This should be more flexibile in capturing a wider range of [anomaly types](https://andrewm4894.com/2020/10/19/different-types-of-time-series-anomalies/) and be somewhat more robust to temporary 'spikes' in the data that tend to always be happening somewhere in your metrics but often are not the most important type of anomaly (this is all covered in a lot more detail in the [deepdive tutorial](https://nbviewer.jupyter.org/github/netdata/community/blob/main/netdata-agent-api/netdata-pandas/anomalies_collector_deepdive.ipynb)). +- `lags_n`, `smooth_n`, and `diffs_n` together define the preprocessing done to the raw data before models are trained and before each prediction. This essentially creates a [feature vector](https://en.wikipedia.org/wiki/Feature_(machine_learning)#:~:text=In%20pattern%20recognition%20and%20machine,features%20that%20represent%20some%20object.&text=Feature%20vectors%20are%20often%20combined,score%20for%20making%20a%20prediction.) for each chart model (or each custom model). The default settings for these parameters aim to create a rolling matrix of recent smoothed [differenced](https://en.wikipedia.org/wiki/Autoregressive_integrated_moving_average#Differencing) values for each chart. The aim of the model then is to score how unusual this 'matrix' of features is for each chart based on what it has learned as 'normal' from the training data. So as opposed to just looking at the single most recent value of a dimension and considering how strange it is, this approach looks at a recent smoothed window of all dimensions for a chart (or dimensions in a custom model) and asks how unusual the data as a whole looks. 
This should be more flexible in capturing a wider range of [anomaly types](https://andrewm4894.com/2020/10/19/different-types-of-time-series-anomalies/) and be somewhat more robust to temporary 'spikes' in the data that tend to always be happening somewhere in your metrics but often are not the most important type of anomaly (this is all covered in a lot more detail in the [deepdive tutorial](https://nbviewer.jupyter.org/github/netdata/community/blob/main/netdata-agent-api/netdata-pandas/anomalies_collector_deepdive.ipynb)). - You can see how long model training is taking by looking in the logs for the collector `grep 'anomalies' /var/log/netdata/error.log | grep 'training'` and you should see lines like `2020-12-01 22:02:14: python.d INFO: anomalies[local] : training complete in 2.81 seconds (runs_counter=2700, model=pca, train_n_secs=14400, models=26, n_fit_success=26, n_fit_fails=0, after=1606845731, before=1606860131).`. - This also gives counts of the number of models, if any, that failed to fit and so had to default back to the DefaultModel (which is currently [HBOS](https://pyod.readthedocs.io/en/latest/_modules/pyod/models/hbos.html)). - `after` and `before` here refer to the start and end of the training data used to train the models. @@ -215,8 +215,8 @@ If you would like to go deeper on what exactly the anomalies collector is doing - Typically ~3%-3.5% additional cpu usage from scoring, jumping to ~60% for a couple of seconds during model training. - About ~150mb of ram (`apps.mem`) being continually used by the `python.d.plugin`. - If you activate this collector on a fresh node, it might take a little while to build up enough data to calculate a realistic and useful model. 
-- Some models like `iforest` can be comparatively expensive (on same n1-standard-2 system above ~2s runtime during predict, ~40s training time, ~50% cpu on both train and predict) so if you would like to use it you might be advised to set a relativley high `update_every` maybe 10, 15 or 30 in `anomalies.conf`. -- Setting a higher `train_every_n` and `update_every` is an easy way to devote less resources on the node to anomaly detection. Specifying less charts and a lower `train_n_secs` will also help reduce resources at the expense of covering less charts and maybe a more noisey model if you set `train_n_secs` to be too small for how your node tends to behave. +- Some models like `iforest` can be comparatively expensive (on the same n1-standard-2 system above, ~2s runtime during predict, ~40s training time, ~50% cpu on both train and predict), so if you would like to use it you might be advised to set a relatively high `update_every`, maybe 10, 15 or 30, in `anomalies.conf`. +- Setting a higher `train_every_n` and `update_every` is an easy way to devote fewer resources on the node to anomaly detection. Specifying fewer charts and a lower `train_n_secs` will also help reduce resources, at the expense of covering fewer charts and maybe a noisier model if you set `train_n_secs` to be too small for how your node tends to behave. ## Useful links and further reading diff --git a/collectors/python.d.plugin/dovecot/README.md b/collectors/python.d.plugin/dovecot/README.md index 55aeed3eb5..730b64257b 100644 --- a/collectors/python.d.plugin/dovecot/README.md +++ b/collectors/python.d.plugin/dovecot/README.md @@ -38,8 +38,8 @@ Module gives information with following charts: 5. **Context Switches** - - volountary - - involountary + - voluntary + - involuntary 6.
**disk** in bytes/s diff --git a/collectors/python.d.plugin/go_expvar/README.md b/collectors/python.d.plugin/go_expvar/README.md index 66ebc0b67b..a73610e7a1 100644 --- a/collectors/python.d.plugin/go_expvar/README.md +++ b/collectors/python.d.plugin/go_expvar/README.md @@ -69,7 +69,7 @@ Sample output: ```json { "cmdline": ["./expvar-demo-binary"], -"memstats": {"Alloc":630856,"TotalAlloc":630856,"Sys":3346432,"Lookups":27, } +"memstats": {"Alloc":630856,"TotalAlloc":630856,"Sys":3346432,"Lookups":27, } } ``` diff --git a/collectors/python.d.plugin/mongodb/README.md b/collectors/python.d.plugin/mongodb/README.md index 5d5295aa46..c0df123d7a 100644 --- a/collectors/python.d.plugin/mongodb/README.md +++ b/collectors/python.d.plugin/mongodb/README.md @@ -80,7 +80,7 @@ Number of charts depends on mongodb version, storage engine and other features ( 13. **Cache metrics** (WiredTiger): - percentage of bytes currently in the cache (amount of space taken by cached data) - - percantage of tracked dirty bytes in the cache (amount of space taken by dirty data) + - percentage of tracked dirty bytes in the cache (amount of space taken by dirty data) 14. **Pages evicted from cache** (WiredTiger): diff --git a/collectors/python.d.plugin/mysql/README.md b/collectors/python.d.plugin/mysql/README.md index 5b9feadd54..d8d3c1d0b1 100644 --- a/collectors/python.d.plugin/mysql/README.md +++ b/collectors/python.d.plugin/mysql/README.md @@ -67,7 +67,7 @@ This module will produce following charts (if data is available): - immediate - waited -6. **Table Select Join Issuess** in joins/s +6. **Table Select Join Issues** in joins/s - full join - full range join @@ -75,7 +75,7 @@ This module will produce following charts (if data is available): - range check - scan -7. **Table Sort Issuess** in joins/s +7. **Table Sort Issues** in joins/s - merge passes - range @@ -164,7 +164,7 @@ This module will produce following charts (if data is available): - updated - deleted -24. 
**InnoDB Buffer Pool Pagess** in pages +24. **InnoDB Buffer Pool Pages** in pages - data - dirty diff --git a/collectors/python.d.plugin/postgres/README.md b/collectors/python.d.plugin/postgres/README.md index 67cc8fe323..3d573d6dcc 100644 --- a/collectors/python.d.plugin/postgres/README.md +++ b/collectors/python.d.plugin/postgres/README.md @@ -22,7 +22,7 @@ Following charts are drawn: - active -3. **Current Backend Processe Usage** percentage +3. **Current Backend Process Usage** percentage - used - available diff --git a/collectors/python.d.plugin/proxysql/README.md b/collectors/python.d.plugin/proxysql/README.md index 6f4ca69131..f1b369a446 100644 --- a/collectors/python.d.plugin/proxysql/README.md +++ b/collectors/python.d.plugin/proxysql/README.md @@ -31,7 +31,7 @@ It produces: - questions: total number of queries sent from frontends - slow_queries: number of queries that ran for longer than the threshold in milliseconds defined in global variable `mysql-long_query_time` -3. **Overall Bandwith (backends)** +3. **Overall Bandwidth (backends)** - in - out @@ -45,7 +45,7 @@ It produces: - `4=OFFLINE_HARD`: when a server is put into OFFLINE_HARD mode, the existing connections are dropped, while new incoming connections aren't accepted either. This is equivalent to deleting the server from a hostgroup, or temporarily taking it out of the hostgroup for maintenance work - `-1`: Unknown status -5. **Bandwith (backends)** +5. **Bandwidth (backends)** - Backends - in diff --git a/collectors/python.d.plugin/samba/README.md b/collectors/python.d.plugin/samba/README.md index 2c86e7b609..ed26d28718 100644 --- a/collectors/python.d.plugin/samba/README.md +++ b/collectors/python.d.plugin/samba/README.md @@ -21,7 +21,7 @@ It produces the following charts: 1. **Syscall R/Ws** in kilobytes/s - sendfile - - recvfle + - recvfile 2. 
**Smb2 R/Ws** in kilobytes/s diff --git a/collectors/python.d.plugin/springboot/README.md b/collectors/python.d.plugin/springboot/README.md index 46bc2d3568..f38e8bf05a 100644 --- a/collectors/python.d.plugin/springboot/README.md +++ b/collectors/python.d.plugin/springboot/README.md @@ -93,7 +93,7 @@ Please refer [Spring Boot Actuator: Production-ready Features](https://docs.spri - MarkSweep - ... -4. **Heap Mmeory Usage** in KB +4. **Heap Memory Usage** in KB - used - committed diff --git a/collectors/statsd.plugin/README.md b/collectors/statsd.plugin/README.md index d5bc0d1ad5..332b60e735 100644 --- a/collectors/statsd.plugin/README.md +++ b/collectors/statsd.plugin/README.md @@ -38,7 +38,7 @@ Netdata fully supports the statsd protocol. All statsd client libraries can be u `:value` can be omitted and statsd will assume it is `1`. `|c`, `|C` and `|m` can be omitted and statsd will assume it is `|m`. So, the application may send just `name` and statsd will parse it as `name:1|m`. - For counters use `|c` (esty/statsd compatible) or `|C` (brubeck compatible), for meters use `|m`. + For counters use `|c` (etsy/statsd compatible) or `|C` (brubeck compatible), for meters use `|m`. Sampling rate is supported (check below). @@ -290,7 +290,7 @@ dimension = [pattern] METRIC NAME TYPE MULTIPLIER DIVIDER OPTIONS `pattern` is a keyword. When set, `METRIC` is expected to be a Netdata simple pattern that will be used to match all the statsd metrics to be added to the chart. So, `pattern` automatically matches any number of statsd metrics, all of which will be added as separate chart dimensions. -`TYPE`, `MUTLIPLIER`, `DIVIDER` and `OPTIONS` are optional. +`TYPE`, `MULTIPLIER`, `DIVIDER` and `OPTIONS` are optional.
`TYPE` can be: diff --git a/collectors/tc.plugin/README.md b/collectors/tc.plugin/README.md index 70e31c236b..480076087e 100644 --- a/collectors/tc.plugin/README.md +++ b/collectors/tc.plugin/README.md @@ -172,7 +172,7 @@ And this is what you are going to get: ## QoS Configuration with tc -First, setup the tc rules in rc.local using commands to assign different DSCP markings to different classids. You can see one such example in [github issue #4563](https://github.com/netdata/netdata/issues/4563#issuecomment-455711973). +First, set up the tc rules in rc.local using commands to assign different QoS markings to different classids. You can see one such example in [github issue #4563](https://github.com/netdata/netdata/issues/4563#issuecomment-455711973). Then, map the classids to names by creating `/etc/iproute2/tc_cls`. For example: diff --git a/daemon/README.md b/daemon/README.md index 9aa483b711..ec1f1c7ccc 100644 --- a/daemon/README.md +++ b/daemon/README.md @@ -514,7 +514,7 @@ section(s) you need to trace. We have done our best to make Netdata crash free. If however, Netdata crashes on your system, it would be very helpful to provide stack traces of the crash. Without them, it will be almost impossible to find the issue (the code base is -quite large to find such an issue by just objerving it). +quite large to find such an issue by just observing it). To provide stack traces, **you need to have Netdata compiled with debugging**. There is no need to enable any tracing (`debug flags`).
diff --git a/daemon/config/README.md b/daemon/config/README.md index 71c0c0e841..a1e2b04b5c 100644 --- a/daemon/config/README.md +++ b/daemon/config/README.md @@ -100,7 +100,7 @@ Additionally, there will be the following options: |:-----:|:-----:|:---| | PATH environment variable|`auto-detected`|| | PYTHONPATH environment variable||Used to set a custom python path| -| enable running new plugins|`yes`|When set to `yes`, Netdata will enable detected plugins, even if they are not configured explicitly. Setting this to `no` will only enable plugins explicitly configirued in this file with a `yes`| +| enable running new plugins|`yes`|When set to `yes`, Netdata will enable detected plugins, even if they are not configured explicitly. Setting this to `no` will only enable plugins explicitly configured in this file with a `yes`| | check for new plugins every|60|The time in seconds to check for new plugins in the plugins directory. This allows having other applications dynamically creating plugins for Netdata.| | checks|`no`|This is a debugging plugin for the internal latency| @@ -190,7 +190,7 @@ that is information about lines that begin with `dim`, which affect a chart's di You may notice some settings that begin with `dim` beneath the ones defined in the table above. These settings determine which dimensions appear on the given chart and how Netdata calculates them. -Each dimension setting has the following structure: `dim [DIMENSION ID] [OPTION] = [VALUE]`. The available options are `name`, `algorithm`, `multipler`, and `divisor`. +Each dimension setting has the following structure: `dim [DIMENSION ID] [OPTION] = [VALUE]`. The available options are `name`, `algorithm`, `multiplier`, and `divisor`. 
| Setting | Function | | :----------- | :------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------ | diff --git a/docs/Running-behind-apache.md b/docs/Running-behind-apache.md index 8a547e7b4f..8810dc8fc5 100644 --- a/docs/Running-behind-apache.md +++ b/docs/Running-behind-apache.md @@ -365,7 +365,7 @@ apache logs accesses and Netdata logs them too. You can prevent Netdata from gen ## Troubleshooting mod_proxy -Make sure the requests reach Netdata, by examing `/var/log/netdata/access.log`. +Make sure the requests reach Netdata, by examining `/var/log/netdata/access.log`. 1. if the requests do not reach Netdata, your apache does not forward them. 2. if the requests reach Netdata but the URLs are wrong, you have not re-written them properly. diff --git a/docs/collect/container-metrics.md b/docs/collect/container-metrics.md index e8d7516440..b5bb9da01c 100644 --- a/docs/collect/container-metrics.md +++ b/docs/collect/container-metrics.md @@ -65,7 +65,7 @@ collection capabilities. ## Collect Kubernetes metrics We already have a few complementary tools and collectors for monitoring the many layers of a Kubernetes cluster, -_entirely for free_. These methods work together to help you troubleshoot performance or availablility issues across +_entirely for free_. These methods work together to help you troubleshoot performance or availability issues across your k8s infrastructure. 
- A [Helm chart](https://github.com/netdata/helmchart), which bootstraps a Netdata Agent pod on every node in your diff --git a/docs/collect/enable-configure.md b/docs/collect/enable-configure.md index 16e3d8f942..33d7a7bb4f 100644 --- a/docs/collect/enable-configure.md +++ b/docs/collect/enable-configure.md @@ -18,10 +18,10 @@ enable or configure a collector to gather all available metrics from your system ## Enable a collector or its orchestrator You can enable/disable collectors individually, or enable/disable entire orchestrators, using their configuration files. -For example, you can change the behavior of the Go orchestator, or any of its collectors, by editing `go.d.conf`. +For example, you can change the behavior of the Go orchestrator, or any of its collectors, by editing `go.d.conf`. Use `edit-config` from your [Netdata config directory](/docs/configure/nodes.md#the-netdata-config-directory) to open -the orchestator's primary configuration file: +the orchestrator's primary configuration file: ```bash cd /etc/netdata @@ -29,7 +29,7 @@ sudo ./edit-config go.d.conf ``` Within this file, you can either disable the orchestrator entirely (`enabled: no`), or find a specific collector and -enable/disable it with `yes` and `no` settings. Uncomment any line you change to ensure the Netdata deamon reads it on +enable/disable it with `yes` and `no` settings. Uncomment any line you change to ensure the Netdata daemon reads it on start. After you make your changes, restart the Agent with `service netdata restart`.
- The Go orchestator is in active development. + The Go orchestrator is in active development. - [go.d.plugin](https://learn.netdata.cloud/docs/agent/collectors/go.d.plugin/): An orchestrator for data collection modules written in `go`. - [python.d.plugin](/collectors/python.d.plugin/README.md): An orchestrator for data collection modules written in diff --git a/docs/export/external-databases.md b/docs/export/external-databases.md index 3b7753903b..309b03a878 100644 -