path: root/collectors
author     Josh Soref <jsoref@users.noreply.github.com>   2021-01-18 07:43:43 -0500
committer  GitHub <noreply@github.com>                    2021-01-18 07:43:43 -0500
commit     f4193c3b5c013df00b6d05805bf1cc99bebe02bf (patch)
tree       867ff1394b76be49b7a4b31f47bcf43f5ad3aea4 /collectors
parent     586945c2b78f5dc4680972d92c2d5e9d0bb9a617 (diff)
Spelling md (#10508)
spelling: activity, adding, addresses, administrators, alarm, alignment, analyzing, apcupsd, apply, around, associated, automatically, availability, background, bandwidth, berkeley, between, celsius, centos, certificate, cockroach, collectors, concatenation, configuration, configured, continuous, correctly, corresponding, cyberpower, daemon, dashboard, database, deactivating, dependencies, deployment, determine, downloading, either, electric, entity, entrant, enumerating, environment, equivalent, etsy, everything, examining, expectations, explicit, explicitly, finally, flexible, further, hddtemp, humidity, identify, importance, incoming, individual, initiate, installation, integration, integrity, involuntary, issues, kernel, language, libwebsockets, lighttpd, maintained, meaningful, memory, metrics, miscellaneous, monitoring, monitors, monolithic, multi, multiplier, navigation, noisy, number, observing, omitted, orchestrator, overall, overridden, package, packages, packet, pages, parameter, parsable, percentage, perfect, phpfpm, platform, preferred, prioritize, probabilities, process, processes, program, qos, quick, raspberry, received, recvfile, red hat, relatively, reliability, repository, requested, requests, retrieved, scenarios, see all, supported, supports, temporary, tsdb, tutorial, updates, utilization, value, variables, visualize, voluntary, your

Signed-off-by: Josh Soref <jsoref@users.noreply.github.com>
Diffstat (limited to 'collectors')
-rw-r--r-- collectors/COLLECTORS.md | 10
-rw-r--r-- collectors/README.md | 4
-rw-r--r-- collectors/REFERENCE.md | 4
-rw-r--r-- collectors/apps.plugin/README.md | 2
-rw-r--r-- collectors/cgroups.plugin/README.md | 2
-rw-r--r-- collectors/ebpf.plugin/README.md | 4
-rw-r--r-- collectors/freeipmi.plugin/README.md | 2
-rw-r--r-- collectors/node.d.plugin/stiebeleltron/README.md | 4
-rw-r--r-- collectors/perf.plugin/README.md | 2
-rw-r--r-- collectors/plugins.d/README.md | 2
-rw-r--r-- collectors/python.d.plugin/am2320/README.md | 6
-rw-r--r-- collectors/python.d.plugin/anomalies/README.md | 14
-rw-r--r-- collectors/python.d.plugin/dovecot/README.md | 4
-rw-r--r-- collectors/python.d.plugin/go_expvar/README.md | 2
-rw-r--r-- collectors/python.d.plugin/mongodb/README.md | 2
-rw-r--r-- collectors/python.d.plugin/mysql/README.md | 6
-rw-r--r-- collectors/python.d.plugin/postgres/README.md | 2
-rw-r--r-- collectors/python.d.plugin/proxysql/README.md | 4
-rw-r--r-- collectors/python.d.plugin/samba/README.md | 2
-rw-r--r-- collectors/python.d.plugin/springboot/README.md | 2
-rw-r--r-- collectors/statsd.plugin/README.md | 4
-rw-r--r-- collectors/tc.plugin/README.md | 2
22 files changed, 43 insertions(+), 43 deletions(-)
diff --git a/collectors/COLLECTORS.md b/collectors/COLLECTORS.md
index 939594e0ce..e718fd239a 100644
--- a/collectors/COLLECTORS.md
+++ b/collectors/COLLECTORS.md
@@ -222,7 +222,7 @@ configure any of these collectors according to your setup and infrastructure.
- [ISC DHCP (Go)](https://learn.netdata.cloud/docs/agent/collectors/go.d.plugin/modules/isc_dhcpd): Reads a
`dhcpd.leases` file and collects metrics on total active leases, pool active leases, and pool utilization.
- [ISC DHCP (Python)](/collectors/python.d.plugin/isc_dhcpd/README.md): Reads `dhcpd.leases` file and reports DHCP
- pools utiliation and leases statistics (total number, leases per pool).
+ pools utilization and leases statistics (total number, leases per pool).
- [OpenLDAP](/collectors/python.d.plugin/openldap/README.md): Provides statistics information from the OpenLDAP
(`slapd`) server.
- [NSD](/collectors/python.d.plugin/nsd/README.md): Monitor nameserver performance metrics using the `nsd-control`
@@ -357,7 +357,7 @@ The Netdata Agent can collect these system- and hardware-level metrics using a v
- [BCACHE](/collectors/proc.plugin/README.md): Monitor BCACHE statistics with the the `proc.plugin` collector.
- [Block devices](/collectors/proc.plugin/README.md): Gather metrics about the health and performance of block
devices using the the `proc.plugin` collector.
-- [Btrfs](/collectors/proc.plugin/README.md): Montiors Btrfs filesystems with the the `proc.plugin` collector.
+- [Btrfs](/collectors/proc.plugin/README.md): Monitors Btrfs filesystems with the the `proc.plugin` collector.
- [Device mapper](/collectors/proc.plugin/README.md): Gather metrics about the Linux device mapper with the proc
collector.
- [Disk space](/collectors/diskspace.plugin/README.md): Collect disk space usage metrics on Linux mount points.
@@ -445,7 +445,7 @@ The Netdata Agent can collect these system- and hardware-level metrics using a v
- [systemd](/collectors/cgroups.plugin/README.md): Monitor the CPU and memory usage of systemd services using the
`cgroups.plugin` collector.
- [systemd unit states](https://learn.netdata.cloud/docs/agent/collectors/go.d.plugin/modules/systemdunits): See the
- state (active, inactive, activating, deactiviating, failed) of various systemd unit types.
+ state (active, inactive, activating, deactivating, failed) of various systemd unit types.
- [System processes](/collectors/proc.plugin/README.md): Collect metrics on system load and total processes running
using `/proc/loadavg` and the `proc.plugin` collector.
- [Uptime](/collectors/proc.plugin/README.md): Monitor the uptime of a system using the `proc.plugin` collector.
@@ -511,10 +511,10 @@ the `go.d.plugin`.
## Third-party collectors
-These collectors are developed and maintined by third parties and, unlike the other collectors, are not installed by
+These collectors are developed and maintained by third parties and, unlike the other collectors, are not installed by
default. To use a third-party collector, visit their GitHub/documentation page and follow their installation procedures.
-- [CyberPower UPS](https://github.com/HawtDogFlvrWtr/netdata_cyberpwrups_plugin): Polls Cyberpower UPS data using
+- [CyberPower UPS](https://github.com/HawtDogFlvrWtr/netdata_cyberpwrups_plugin): Polls CyberPower UPS data using
PowerPanel® Personal Linux.
- [Logged-in users](https://github.com/veksh/netdata-numsessions): Collect the number of currently logged-on users.
- [nim-netdata-plugin](https://github.com/FedericoCeratto/nim-netdata-plugin): A helper to create native Netdata
diff --git a/collectors/README.md b/collectors/README.md
index ef1f9610c1..a37a7e890a 100644
--- a/collectors/README.md
+++ b/collectors/README.md
@@ -32,7 +32,7 @@ guide](/collectors/QUICKSTART.md).
[Monitor Nginx or Apache web server log files with Netdata](/docs/guides/collect-apache-nginx-web-logs.md)
-[Monitor CockroadchDB metrics with Netdata](/docs/guides/monitor-cockroachdb.md)
+[Monitor CockroachDB metrics with Netdata](/docs/guides/monitor-cockroachdb.md)
[Monitor Unbound DNS servers with Netdata](/docs/guides/collect-unbound-metrics.md)
@@ -40,7 +40,7 @@ guide](/collectors/QUICKSTART.md).
## Related features
-**[Dashboards](/web/README.md)**: Vizualize your newly-collect metrics in real-time using Netdata's [built-in
+**[Dashboards](/web/README.md)**: Visualize your newly-collect metrics in real-time using Netdata's [built-in
dashboard](/web/gui/README.md).
**[Backends](/backends/README.md)**: Extend our built-in [database engine](/database/engine/README.md), which supports
diff --git a/collectors/REFERENCE.md b/collectors/REFERENCE.md
index 08a405dc7b..9c6f0a61ed 100644
--- a/collectors/REFERENCE.md
+++ b/collectors/REFERENCE.md
@@ -46,7 +46,7 @@ However, there are cases that auto-detection fails. Usually, the reason is that
allow Netdata to connect. In most of the cases, allowing the user `netdata` from `localhost` to connect and collect
metrics, will automatically enable data collection for the application in question (it will require a Netdata restart).
-View our [collectors quickstart](/collectors/QUICKSTART.md) for explict details on enabling and configuring collector modules.
+View our [collectors quickstart](/collectors/QUICKSTART.md) for explicit details on enabling and configuring collector modules.
## Troubleshoot a collector
@@ -112,7 +112,7 @@ This section features a list of Netdata's plugins, with a boolean setting to ena
# charts.d = yes
```
-By default, most plugins are enabled, so you don't need to enable them explicity to use their collectors. To enable or
+By default, most plugins are enabled, so you don't need to enable them explicitly to use their collectors. To enable or
disable any specific plugin, remove the comment (`#`) and change the boolean setting to `yes` or `no`.
All **external plugins** are managed by [plugins.d](plugins.d/), which provides additional management options.
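
A hedged illustration of the boolean settings described in the hunk above, assuming they live in the `[plugins]` section of `netdata.conf`; the plugin names shown are examples only, not taken from this diff:

```
[plugins]
    # proc = yes       # commented entries keep their default; most plugins are enabled by default
    python.d = yes     # comment removed and set to yes: explicitly enabled
    charts.d = no      # comment removed and set to no: explicitly disabled
```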
diff --git a/collectors/apps.plugin/README.md b/collectors/apps.plugin/README.md
index 5529226961..d10af1cdd3 100644
--- a/collectors/apps.plugin/README.md
+++ b/collectors/apps.plugin/README.md
@@ -59,7 +59,7 @@ Each of these sections provides the same number of charts:
- Pipes open (`apps.pipes`)
- Swap memory
- Swap memory used (`apps.swap`)
- - Major page faults (i.e. swap activiy, `apps.major_faults`)
+ - Major page faults (i.e. swap activity, `apps.major_faults`)
- Network
- Sockets open (`apps.sockets`)
diff --git a/collectors/cgroups.plugin/README.md b/collectors/cgroups.plugin/README.md
index 9b26deb2ce..21dbcae83f 100644
--- a/collectors/cgroups.plugin/README.md
+++ b/collectors/cgroups.plugin/README.md
@@ -145,7 +145,7 @@ Support per distribution:
|Fedora 25|YES|[here](http://pastebin.com/ax0373wF)||
|Debian 8|NO||can be enabled, see below|
|AMI|NO|[here](http://pastebin.com/FrxmptjL)|not a systemd system|
-|Centos 7.3.1611|NO|[here](http://pastebin.com/SpzgezAg)|can be enabled, see below|
+|CentOS 7.3.1611|NO|[here](http://pastebin.com/SpzgezAg)|can be enabled, see below|
### how to enable cgroup accounting on systemd systems that is by default disabled
diff --git a/collectors/ebpf.plugin/README.md b/collectors/ebpf.plugin/README.md
index 44e238e36d..5ea3b49514 100644
--- a/collectors/ebpf.plugin/README.md
+++ b/collectors/ebpf.plugin/README.md
@@ -221,7 +221,7 @@ The following options are available:
- `ports`: Define the destination ports for Netdata to monitor.
- `hostnames`: The list of hostnames that can be resolved to an IP address.
- `ips`: The IP or range of IPs that you want to monitor. You can use IPv4 or IPv6 addresses, use dashes to define a
- range of IPs, or use CIDR values. The default behavior is to only collect data for private IP addresess, but this
+ range of IPs, or use CIDR values. The default behavior is to only collect data for private IP addresses, but this
can be changed with the `ips` setting.
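
The `ports`, `hostnames`, and `ips` options above might be combined as in the hedged sketch below; only the option names come from the text in this hunk, and the section header is an assumption added for illustration.

```
[network connections]
    ports = 80 443 8080-8090                              # single ports or dash-separated ranges
    hostnames = db-* cache01                              # hostnames resolvable to IP addresses
    ips = 10.0.0.0/8 192.168.1.1-192.168.1.100 fc00::/7   # CIDR values, ranges, IPv4 or IPv6
```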
By default, Netdata displays up to 500 dimensions on network connection charts. If there are more possible dimensions,
@@ -275,7 +275,7 @@ curl -sSL https://raw.githubusercontent.com/netdata/kernel-collector/master/tool
If this script returns no output, your system is ready to compile and run the eBPF collector.
-If you see a warning about a missing kerkel configuration (`KPROBES KPROBES_ON_FTRACE HAVE_KPROBES BPF BPF_SYSCALL
+If you see a warning about a missing kernel configuration (`KPROBES KPROBES_ON_FTRACE HAVE_KPROBES BPF BPF_SYSCALL
BPF_JIT`), you will need to recompile your kernel to support this configuration. The process of recompiling Linux
kernels varies based on your distribution and version. Read the documentation for your system's distribution to learn
more about the specific workflow for recompiling the kernel, ensuring that you set all the necessary
diff --git a/collectors/freeipmi.plugin/README.md b/collectors/freeipmi.plugin/README.md
index 64328fc9e7..52945e3c62 100644
--- a/collectors/freeipmi.plugin/README.md
+++ b/collectors/freeipmi.plugin/README.md
@@ -25,7 +25,7 @@ The plugin creates (up to) 8 charts, based on the information collected from IPM
1. number of sensors by state
2. number of events in SEL
-3. Temperatures CELCIUS
+3. Temperatures CELSIUS
4. Temperatures FAHRENHEIT
5. Voltages
6. Currents
diff --git a/collectors/node.d.plugin/stiebeleltron/README.md b/collectors/node.d.plugin/stiebeleltron/README.md
index 30f51169b3..59bbf703c4 100644
--- a/collectors/node.d.plugin/stiebeleltron/README.md
+++ b/collectors/node.d.plugin/stiebeleltron/README.md
@@ -40,7 +40,7 @@ The charts are configurable, however, the provided default configuration collect
- Heat circuit 1 room temperature in C (set/actual)
- Heat circuit 2 room temperature in C (set/actual)
-5. **Eletric Reheating**
+5. **Electric Reheating**
- Dual Mode Reheating temperature in C (hot water/heating)
@@ -68,7 +68,7 @@ If no configuration is given, the module will be disabled. Each `update_every` i
Original author: BrainDoctor (github)
-The module supports any metrics that are parseable with RegEx. There is no API that gives direct access to the values (AFAIK), so the "workaround" is to parse the HTML output of the ISG.
+The module supports any metrics that are parsable with RegEx. There is no API that gives direct access to the values (AFAIK), so the "workaround" is to parse the HTML output of the ISG.
### Testing
diff --git a/collectors/perf.plugin/README.md b/collectors/perf.plugin/README.md
index d4bb41cb60..ccd185cedb 100644
--- a/collectors/perf.plugin/README.md
+++ b/collectors/perf.plugin/README.md
@@ -64,7 +64,7 @@ enable the perf plugin, edit /etc/netdata/netdata.conf and set:
You can use the `command options` parameter to pick what data should be collected and which charts should be
displayed. If `all` is used, all general performance monitoring counters are probed and corresponding charts
are enabled for the available counters. You can also define a particular set of enabled charts using the
-following keywords: `cycles`, `instructions`, `branch`, `cache`, `bus`, `stalled`, `migrations`, `alighnment`,
+following keywords: `cycles`, `instructions`, `branch`, `cache`, `bus`, `stalled`, `migrations`, `alignment`,
`emulation`, `L1D`, `L1D-prefetch`, `L1I`, `LL`, `DTLB`, `ITLB`, `PBU`.
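
As a hedged example of the `command options` parameter discussed above, and assuming the plugin is configured in a `[plugin:perf]` section of `netdata.conf` as the earlier part of this README suggests, a particular set of charts could be selected like this (the keyword choice is illustrative only):

```
[plugin:perf]
    update every = 1
    command options = cycles instructions cache migrations alignment
```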
## Debugging
diff --git a/collectors/plugins.d/README.md b/collectors/plugins.d/README.md
index 913ad9177c..c166e11e36 100644
--- a/collectors/plugins.d/README.md
+++ b/collectors/plugins.d/README.md
@@ -79,7 +79,7 @@ Example:
```
The setting `enable running new plugins` sets the default behavior for all external plugins. It can be
-overriden for distinct plugins by modifying the appropriate plugin value configuration to either `yes` or `no`.
+overridden for distinct plugins by modifying the appropriate plugin value configuration to either `yes` or `no`.
The setting `check for new plugins every` sets the interval between scans of the directory
`/usr/libexec/netdata/plugins.d`. New plugins can be added any time, and Netdata will detect them in a timely manner.
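
A hedged sketch of the two settings discussed in this hunk, together with a per-plugin override of the default; the plugin name and interval value are illustrative, not taken from the diff:

```
[plugins]
    enable running new plugins = no    # default behavior for all external plugins
    go.d = yes                         # per-plugin override of the default above
    check for new plugins every = 60   # seconds between scans of /usr/libexec/netdata/plugins.d
```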
diff --git a/collectors/python.d.plugin/am2320/README.md b/collectors/python.d.plugin/am2320/README.md
index c17b33dfa1..14ddaa735d 100644
--- a/collectors/python.d.plugin/am2320/README.md
+++ b/collectors/python.d.plugin/am2320/README.md
@@ -6,7 +6,7 @@ sidebar_label: "AM2320"
# AM2320 sensor monitoring with netdata
-Displays a graph of the temperature and humity from a AM2320 sensor.
+Displays a graph of the temperature and humidity from a AM2320 sensor.
## Requirements
- Adafruit Circuit Python AM2320 library
@@ -28,10 +28,10 @@ cd /etc/netdata # Replace this path with your Netdata config directory, if dif
sudo ./edit-config python.d/am2320.conf
```
-Raspbery Pi Instructions:
+Raspberry Pi Instructions:
Hardware install:
-Connect the am2320 to the Raspbery Pi I2C pins
+Connect the am2320 to the Raspberry Pi I2C pins
Raspberry Pi 3B/4 Pins:
diff --git a/collectors/python.d.plugin/anomalies/README.md b/collectors/python.d.plugin/anomalies/README.md
index 8346aa6693..1e27f3b5be 100644
--- a/collectors/python.d.plugin/anomalies/README.md
+++ b/collectors/python.d.plugin/anomalies/README.md
@@ -134,7 +134,7 @@ local:
diffs_n: 1
# What is the typical proportion of anomalies in your data on average?
- # This paramater can control the sensitivity of your models to anomalies.
+ # This parameter can control the sensitivity of your models to anomalies.
# Some discussion here: https://github.com/yzhao062/pyod/issues/144
contamination: 0.001
@@ -142,7 +142,7 @@ local:
# just the average of all anomaly probabilities at each time step
include_average_prob: true
- # Define any custom models you would like to create anomaly probabilties for, some examples below to show how.
+ # Define any custom models you would like to create anomaly probabilities for, some examples below to show how.
# For example below example creates two custom models, one to run anomaly detection user and system cpu for our demo servers
# and one on the cpu and mem apps metrics for the python.d.plugin.
# custom_models:
@@ -161,7 +161,7 @@ local:
In the `anomalies.conf` file you can also define some "custom models" which you can use to group one or more metrics into a single model much like is done by default for the charts you specify. This is useful if you have a handful of metrics that exist in different charts but perhaps are related to the same underlying thing you would like to perform anomaly detection on, for example a specific app or user.
-To define a custom model you would include configuation like below in `anomalies.conf`. By default there should already be some commented out examples in there.
+To define a custom model you would include configuration like below in `anomalies.conf`. By default there should already be some commented out examples in there.
`name` is a name you give your custom model, this is what will appear alongside any other specified charts in the `anomalies.probability` and `anomalies.anomaly` charts. `dimensions` is a string of metrics you want to include in your custom model. By default the [netdata-pandas](https://github.com/netdata/netdata-pandas) library used to pull the data from Netdata uses a "chart.a|dim.1" type of naming convention in the pandas columns it returns, hence the `dimensions` string should look like "chart.name|dimension.name,chart.name|dimension.name". The examples below hopefully make this clear.
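
To make the `dimensions` naming convention concrete, a hedged `anomalies.conf` fragment is sketched below; the model names and metric choices are illustrative, and the structure follows the commented `custom_models:` example visible earlier in this hunk.

```yaml
local:
    # ... other job settings ...
    custom_models:
      - name: python_d_plugin         # label shown on the anomalies.probability and anomalies.anomaly charts
        dimensions: 'apps.cpu|python.d.plugin,apps.mem|python.d.plugin'
      - name: demo_cpu
        dimensions: 'system.cpu|user,system.cpu|system'   # "chart.name|dimension.name" pairs
```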
@@ -194,7 +194,7 @@ sudo su -s /bin/bash netdata
/usr/libexec/netdata/plugins.d/python.d.plugin anomalies debug trace nolock
```
-## Deepdive turorial
+## Deepdive tutorial
If you would like to go deeper on what exactly the anomalies collector is doing under the hood then check out this [deepdive tutorial](https://github.com/netdata/community/blob/main/netdata-agent-api/netdata-pandas/anomalies_collector_deepdive.ipynb) in our community repo where you can play around with some data from our demo servers (or your own if its accessible to you) and work through the calculations step by step.
@@ -206,7 +206,7 @@ If you would like to go deeper on what exactly the anomalies collector is doing
- Python 3 is also required for the underlying ML libraries of [numba](https://pypi.org/project/numba/), [scikit-learn](https://pypi.org/project/scikit-learn/), and [PyOD](https://pypi.org/project/pyod/).
- It may take a few hours or so (depending on your choice of `train_secs_n`) for the collector to 'settle' into it's typical behaviour in terms of the trained models and probabilities you will see in the normal running of your node.
- As this collector does most of the work in Python itself, with [PyOD](https://pyod.readthedocs.io/en/latest/) leveraging [numba](https://numba.pydata.org/) under the hood, you may want to try it out first on a test or development system to get a sense of its performance characteristics on a node similar to where you would like to use it.
-- `lags_n`, `smooth_n`, and `diffs_n` together define the preprocessing done to the raw data before models are trained and before each prediction. This essentially creates a [feature vector](https://en.wikipedia.org/wiki/Feature_(machine_learning)#:~:text=In%20pattern%20recognition%20and%20machine,features%20that%20represent%20some%20object.&text=Feature%20vectors%20are%20often%20combined,score%20for%20making%20a%20prediction.) for each chart model (or each custom model). The default settings for these parameters aim to create a rolling matrix of recent smoothed [differenced](https://en.wikipedia.org/wiki/Autoregressive_integrated_moving_average#Differencing) values for each chart. The aim of the model then is to score how unusual this 'matrix' of features is for each chart based on what it has learned as 'normal' from the training data. So as opposed to just looking at the single most recent value of a dimension and considering how strange it is, this approach looks at a recent smoothed window of all dimensions for a chart (or dimensions in a custom model) and asks how unusual the data as a whole looks. This should be more flexibile in capturing a wider range of [anomaly types](https://andrewm4894.com/2020/10/19/different-types-of-time-series-anomalies/) and be somewhat more robust to temporary 'spikes' in the data that tend to always be happening somewhere in your metrics but often are not the most important type of anomaly (this is all covered in a lot more detail in the [deepdive tutorial](https://nbviewer.jupyter.org/github/netdata/community/blob/main/netdata-agent-api/netdata-pandas/anomalies_collector_deepdive.ipynb)).
+- `lags_n`, `smooth_n`, and `diffs_n` together define the preprocessing done to the raw data before models are trained and before each prediction. This essentially creates a [feature vector](https://en.wikipedia.org/wiki/Feature_(machine_learning)#:~:text=In%20pattern%20recognition%20and%20machine,features%20that%20represent%20some%20object.&text=Feature%20vectors%20are%20often%20combined,score%20for%20making%20a%20prediction.) for each chart model (or each custom model). The default settings for these parameters aim to create a rolling matrix of recent smoothed [differenced](https://en.wikipedia.org/wiki/Autoregressive_integrated_moving_average#Differencing) values for each chart. The aim of the model then is to score how unusual this 'matrix' of features is for each chart based on what it has learned as 'normal' from the training data. So as opposed to just looking at the single most recent value of a dimension and considering how strange it is, this approach looks at a recent smoothed window of all dimensions for a chart (or dimensions in a custom model) and asks how unusual the data as a whole looks. This should be more flexible in capturing a wider range of [anomaly types](https://andrewm4894.com/2020/10/19/different-types-of-time-series-anomalies/) and be somewhat more robust to temporary 'spikes' in the data that tend to always be happening somewhere in your metrics but often are not the most important type of anomaly (this is all covered in a lot more detail in the [deepdive tutorial](https://nbviewer.jupyter.org/github/netdata/community/blob/main/netdata-agent-api/netdata-pandas/anomalies_collector_deepdive.ipynb)).
- You can see how long model training is taking by looking in the logs for the collector `grep 'anomalies' /var/log/netdata/error.log | grep 'training'` and you should see lines like `2020-12-01 22:02:14: python.d INFO: anomalies[local] : training complete in 2.81 seconds (runs_counter=2700, model=pca, train_n_secs=14400, models=26, n_fit_success=26, n_fit_fails=0, after=1606845731, before=1606860131).`.
- This also gives counts of the number of models, if any, that failed to fit and so had to default back to the DefaultModel (which is currently [HBOS](https://pyod.readthedocs.io/en/latest/_modules/pyod/models/hbos.html)).
- `after` and `before` here refer to the start and end of the training data used to train the models.
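
The `lags_n` / `smooth_n` / `diffs_n` bullet above describes differencing, smoothing, and lagging raw chart values into a per-chart feature vector. The snippet below is a minimal pandas sketch of that idea only; it is not the collector's actual implementation, and the function name and column names are made up for illustration.

```python
import numpy as np
import pandas as pd

def make_feature_vectors(df: pd.DataFrame, diffs_n: int = 1,
                         smooth_n: int = 3, lags_n: int = 5) -> pd.DataFrame:
    """Turn raw chart dimensions into one differenced, smoothed, lagged row per time step."""
    if diffs_n > 0:
        df = df.diff(diffs_n).dropna()               # difference to focus on changes, not levels
    if smooth_n > 1:
        df = df.rolling(smooth_n).mean().dropna()    # rolling mean to damp one-off spikes
    lagged = [df.shift(n).add_suffix(f"_lag{n}") for n in range(lags_n + 1)]
    return pd.concat(lagged, axis=1).dropna()        # one wide "feature vector" per time step

# toy example: two dimensions of one chart, named per the "chart.name|dimension.name" convention
raw = pd.DataFrame({"system.cpu|user": np.random.rand(120),
                    "system.cpu|system": np.random.rand(120)})
features = make_feature_vectors(raw)
print(features.shape)   # (rows, n_dimensions * (lags_n + 1))
```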
@@ -215,8 +215,8 @@ If you would like to go deeper on what exactly the anomalies collector is doing
- Typically ~3%-3.5% additional cpu usage from scoring, jumping to ~60% for a couple of seconds during model training.
- About ~150mb of ram (`apps.mem`) being continually used by the `python.d.plugin`.
- If you activate this collector on a fresh node, it might take a little while to build up enough data to calculate a realistic and useful model.
-- Some models like `iforest` can be comparatively expensive (on same n1-standard-2 system above ~2s runtime during predict, ~40s training time, ~50% cpu on both train and predict) so if you would like to use it you might be advised to set a relativley high `update_every` maybe 10, 15 or 30 in `anomalies.conf`.
-- Setting a higher `train_every_n` and `update_every` is an easy way to devote less resources on the node to anomaly detection. Specifying less charts and a lower `train_n_secs` will also help reduce resources at the expense of covering less charts and maybe a more noisey model if you set `train_n_secs` to be too small for how your node tends to behave.
+- Some models like `iforest` can be comparatively expensive (on same n1-standard-2 system above ~2s runtime during predict, ~40s training time, ~50% cpu on both train and predict) so if you would like to use it you might be advised to set a relatively high `update_every` maybe 10, 15 or 30 in `anomalies.conf`.
+- Setting a higher `train_every_n` and `update_every` is an easy way to devote less resources on the node to anomaly detection. Specifying less charts and a lower `train_n_secs` will also help reduce resources at the expense of covering less charts and maybe a more noisy model if you set `train_n_secs` to be too small for how your node tends to behave.
## Useful links and further reading
diff --git a/collectors/python.d.plugin/dovecot/README.md b/collectors/python.d.plugin/dovecot/README.md
index 55aeed3eb5..730b64257b 100644
--- a/collectors/python.d.plugin/dovecot/README.md
+++ b/collectors/python.d.plugin/dovecot/README.md
@@ -38,8 +38,8 @@ Module gives information with following charts:
5. **Context Switches**
- - volountary
- - involountary
+ - voluntary
+ - involuntary
6. **disk** in bytes/s
diff --git a/collectors/python.d.plugin/go_expvar/README.md b/collectors/python.d.plugin/go_expvar/README.md
index 66ebc0b67b..a73610e7a1 100644
--- a/collectors/python.d.plugin/go_expvar/README.md
+++ b/collectors/python.d.plugin/go_expvar/README.md
@@ -69,7 +69,7 @@ Sample output:
```json
{
"cmdline": ["./expvar-demo-binary"],
-"memstats": {"Alloc":630856,"TotalAlloc":630856,"Sys":3346432,"Lookups":27, <ommited for brevity>}
+"memstats": {"Alloc":630856,"TotalAlloc":630856,"Sys":3346432,"Lookups":27, <omitted for brevity>}
}
```
diff --git a/collectors/python.d.plugin/mongodb/README.md b/collectors/python.d.plugin/mongodb/README.md
index 5d5295aa46..c0df123d7a 100644
--- a/collectors/python.d.plugin/mongodb/README.md
+++ b/collectors/python.d.plugin/mongodb/README.md
@@ -80,7 +80,7 @@ Number of charts depends on mongodb version, storage engine and other features (
13. **Cache metrics** (WiredTiger):
- percentage of bytes currently in the cache (amount of space taken by cached data)
- - percantage of tracked dirty bytes in the cache (amount of space taken