Diffstat (limited to 'collectors')
-rw-r--r--  collectors/COLLECTORS.md                            10
-rw-r--r--  collectors/README.md                                 4
-rw-r--r--  collectors/REFERENCE.md                              4
-rw-r--r--  collectors/apps.plugin/README.md                     2
-rw-r--r--  collectors/cgroups.plugin/README.md                  2
-rw-r--r--  collectors/ebpf.plugin/README.md                     4
-rw-r--r--  collectors/freeipmi.plugin/README.md                 2
-rw-r--r--  collectors/node.d.plugin/stiebeleltron/README.md     4
-rw-r--r--  collectors/perf.plugin/README.md                     2
-rw-r--r--  collectors/plugins.d/README.md                       2
-rw-r--r--  collectors/python.d.plugin/am2320/README.md          6
-rw-r--r--  collectors/python.d.plugin/anomalies/README.md      14
-rw-r--r--  collectors/python.d.plugin/dovecot/README.md         4
-rw-r--r--  collectors/python.d.plugin/go_expvar/README.md       2
-rw-r--r--  collectors/python.d.plugin/mongodb/README.md         2
-rw-r--r--  collectors/python.d.plugin/mysql/README.md           6
-rw-r--r--  collectors/python.d.plugin/postgres/README.md        2
-rw-r--r--  collectors/python.d.plugin/proxysql/README.md        4
-rw-r--r--  collectors/python.d.plugin/samba/README.md           2
-rw-r--r--  collectors/python.d.plugin/springboot/README.md      2
-rw-r--r--  collectors/statsd.plugin/README.md                   4
-rw-r--r--  collectors/tc.plugin/README.md                       2

22 files changed, 43 insertions(+), 43 deletions(-)
diff --git a/collectors/COLLECTORS.md b/collectors/COLLECTORS.md
index 939594e0ce..e718fd239a 100644
--- a/collectors/COLLECTORS.md
+++ b/collectors/COLLECTORS.md
@@ -222,7 +222,7 @@ configure any of these collectors according to your setup and infrastructure.
- [ISC DHCP (Go)](https://learn.netdata.cloud/docs/agent/collectors/go.d.plugin/modules/isc_dhcpd): Reads a
`dhcpd.leases` file and collects metrics on total active leases, pool active leases, and pool utilization.
- [ISC DHCP (Python)](/collectors/python.d.plugin/isc_dhcpd/README.md): Reads `dhcpd.leases` file and reports DHCP
- pools utiliation and leases statistics (total number, leases per pool).
+ pools utilization and leases statistics (total number, leases per pool).
- [OpenLDAP](/collectors/python.d.plugin/openldap/README.md): Provides statistics information from the OpenLDAP
(`slapd`) server.
- [NSD](/collectors/python.d.plugin/nsd/README.md): Monitor nameserver performance metrics using the `nsd-control`
@@ -357,7 +357,7 @@ The Netdata Agent can collect these system- and hardware-level metrics using a v
- [BCACHE](/collectors/proc.plugin/README.md): Monitor BCACHE statistics with the the `proc.plugin` collector.
- [Block devices](/collectors/proc.plugin/README.md): Gather metrics about the health and performance of block
devices using the the `proc.plugin` collector.
-- [Btrfs](/collectors/proc.plugin/README.md): Montiors Btrfs filesystems with the the `proc.plugin` collector.
+- [Btrfs](/collectors/proc.plugin/README.md): Monitors Btrfs filesystems with the the `proc.plugin` collector.
- [Device mapper](/collectors/proc.plugin/README.md): Gather metrics about the Linux device mapper with the proc
collector.
- [Disk space](/collectors/diskspace.plugin/README.md): Collect disk space usage metrics on Linux mount points.
@@ -445,7 +445,7 @@ The Netdata Agent can collect these system- and hardware-level metrics using a v
- [systemd](/collectors/cgroups.plugin/README.md): Monitor the CPU and memory usage of systemd services using the
`cgroups.plugin` collector.
- [systemd unit states](https://learn.netdata.cloud/docs/agent/collectors/go.d.plugin/modules/systemdunits): See the
- state (active, inactive, activating, deactiviating, failed) of various systemd unit types.
+ state (active, inactive, activating, deactivating, failed) of various systemd unit types.
- [System processes](/collectors/proc.plugin/README.md): Collect metrics on system load and total processes running
using `/proc/loadavg` and the `proc.plugin` collector.
- [Uptime](/collectors/proc.plugin/README.md): Monitor the uptime of a system using the `proc.plugin` collector.
@@ -511,10 +511,10 @@ the `go.d.plugin`.
## Third-party collectors
-These collectors are developed and maintined by third parties and, unlike the other collectors, are not installed by
+These collectors are developed and maintained by third parties and, unlike the other collectors, are not installed by
default. To use a third-party collector, visit their GitHub/documentation page and follow their installation procedures.
-- [CyberPower UPS](https://github.com/HawtDogFlvrWtr/netdata_cyberpwrups_plugin): Polls Cyberpower UPS data using
+- [CyberPower UPS](https://github.com/HawtDogFlvrWtr/netdata_cyberpwrups_plugin): Polls CyberPower UPS data using
PowerPanel® Personal Linux.
- [Logged-in users](https://github.com/veksh/netdata-numsessions): Collect the number of currently logged-on users.
- [nim-netdata-plugin](https://github.com/FedericoCeratto/nim-netdata-plugin): A helper to create native Netdata
diff --git a/collectors/README.md b/collectors/README.md
index ef1f9610c1..a37a7e890a 100644
--- a/collectors/README.md
+++ b/collectors/README.md
@@ -32,7 +32,7 @@ guide](/collectors/QUICKSTART.md).
[Monitor Nginx or Apache web server log files with Netdata](/docs/guides/collect-apache-nginx-web-logs.md)
-[Monitor CockroadchDB metrics with Netdata](/docs/guides/monitor-cockroachdb.md)
+[Monitor CockroachDB metrics with Netdata](/docs/guides/monitor-cockroachdb.md)
[Monitor Unbound DNS servers with Netdata](/docs/guides/collect-unbound-metrics.md)
@@ -40,7 +40,7 @@ guide](/collectors/QUICKSTART.md).
## Related features
-**[Dashboards](/web/README.md)**: Vizualize your newly-collect metrics in real-time using Netdata's [built-in
+**[Dashboards](/web/README.md)**: Visualize your newly-collect metrics in real-time using Netdata's [built-in
dashboard](/web/gui/README.md).
**[Backends](/backends/README.md)**: Extend our built-in [database engine](/database/engine/README.md), which supports
diff --git a/collectors/REFERENCE.md b/collectors/REFERENCE.md
index 08a405dc7b..9c6f0a61ed 100644
--- a/collectors/REFERENCE.md
+++ b/collectors/REFERENCE.md
@@ -46,7 +46,7 @@ However, there are cases that auto-detection fails. Usually, the reason is that
allow Netdata to connect. In most of the cases, allowing the user `netdata` from `localhost` to connect and collect
metrics, will automatically enable data collection for the application in question (it will require a Netdata restart).
-View our [collectors quickstart](/collectors/QUICKSTART.md) for explict details on enabling and configuring collector modules.
+View our [collectors quickstart](/collectors/QUICKSTART.md) for explicit details on enabling and configuring collector modules.
## Troubleshoot a collector
@@ -112,7 +112,7 @@ This section features a list of Netdata's plugins, with a boolean setting to ena
# charts.d = yes
```
-By default, most plugins are enabled, so you don't need to enable them explicity to use their collectors. To enable or
+By default, most plugins are enabled, so you don't need to enable them explicitly to use their collectors. To enable or
disable any specific plugin, remove the comment (`#`) and change the boolean setting to `yes` or `no`.
All **external plugins** are managed by [plugins.d](plugins.d/), which provides additional management options.
diff --git a/collectors/apps.plugin/README.md b/collectors/apps.plugin/README.md
index 5529226961..d10af1cdd3 100644
--- a/collectors/apps.plugin/README.md
+++ b/collectors/apps.plugin/README.md
@@ -59,7 +59,7 @@ Each of these sections provides the same number of charts:
- Pipes open (`apps.pipes`)
- Swap memory
- Swap memory used (`apps.swap`)
- - Major page faults (i.e. swap activiy, `apps.major_faults`)
+ - Major page faults (i.e. swap activity, `apps.major_faults`)
- Network
- Sockets open (`apps.sockets`)
diff --git a/collectors/cgroups.plugin/README.md b/collectors/cgroups.plugin/README.md
index 9b26deb2ce..21dbcae83f 100644
--- a/collectors/cgroups.plugin/README.md
+++ b/collectors/cgroups.plugin/README.md
@@ -145,7 +145,7 @@ Support per distribution:
|Fedora 25|YES|[here](http://pastebin.com/ax0373wF)||
|Debian 8|NO||can be enabled, see below|
|AMI|NO|[here](http://pastebin.com/FrxmptjL)|not a systemd system|
-|Centos 7.3.1611|NO|[here](http://pastebin.com/SpzgezAg)|can be enabled, see below|
+|CentOS 7.3.1611|NO|[here](http://pastebin.com/SpzgezAg)|can be enabled, see below|
### how to enable cgroup accounting on systemd systems that is by default disabled
diff --git a/collectors/ebpf.plugin/README.md b/collectors/ebpf.plugin/README.md
index 44e238e36d..5ea3b49514 100644
--- a/collectors/ebpf.plugin/README.md
+++ b/collectors/ebpf.plugin/README.md
@@ -221,7 +221,7 @@ The following options are available:
- `ports`: Define the destination ports for Netdata to monitor.
- `hostnames`: The list of hostnames that can be resolved to an IP address.
- `ips`: The IP or range of IPs that you want to monitor. You can use IPv4 or IPv6 addresses, use dashes to define a
- range of IPs, or use CIDR values. The default behavior is to only collect data for private IP addresess, but this
+ range of IPs, or use CIDR values. The default behavior is to only collect data for private IP addresses, but this
can be changed with the `ips` setting.
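As a sketch only — the section name and values below are assumptions for illustration, not taken from this diff — the three options might be combined in the plugin's configuration like so:

```conf
# hypothetical [network connections] settings for ebpf.plugin
[network connections]
    ports = 80 443
    hostnames = app.example.com
    ips = 10.0.0.0/8 192.168.1.1-192.168.1.50
```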
By default, Netdata displays up to 500 dimensions on network connection charts. If there are more possible dimensions,
@@ -275,7 +275,7 @@ curl -sSL https://raw.githubusercontent.com/netdata/kernel-collector/master/tool
If this script returns no output, your system is ready to compile and run the eBPF collector.
-If you see a warning about a missing kerkel configuration (`KPROBES KPROBES_ON_FTRACE HAVE_KPROBES BPF BPF_SYSCALL
+If you see a warning about a missing kernel configuration (`KPROBES KPROBES_ON_FTRACE HAVE_KPROBES BPF BPF_SYSCALL
BPF_JIT`), you will need to recompile your kernel to support this configuration. The process of recompiling Linux
kernels varies based on your distribution and version. Read the documentation for your system's distribution to learn
more about the specific workflow for recompiling the kernel, ensuring that you set all the necessary
diff --git a/collectors/freeipmi.plugin/README.md b/collectors/freeipmi.plugin/README.md
index 64328fc9e7..52945e3c62 100644
--- a/collectors/freeipmi.plugin/README.md
+++ b/collectors/freeipmi.plugin/README.md
@@ -25,7 +25,7 @@ The plugin creates (up to) 8 charts, based on the information collected from IPM
1. number of sensors by state
2. number of events in SEL
-3. Temperatures CELCIUS
+3. Temperatures CELSIUS
4. Temperatures FAHRENHEIT
5. Voltages
6. Currents
diff --git a/collectors/node.d.plugin/stiebeleltron/README.md b/collectors/node.d.plugin/stiebeleltron/README.md
index 30f51169b3..59bbf703c4 100644
--- a/collectors/node.d.plugin/stiebeleltron/README.md
+++ b/collectors/node.d.plugin/stiebeleltron/README.md
@@ -40,7 +40,7 @@ The charts are configurable, however, the provided default configuration collect
- Heat circuit 1 room temperature in C (set/actual)
- Heat circuit 2 room temperature in C (set/actual)
-5. **Eletric Reheating**
+5. **Electric Reheating**
- Dual Mode Reheating temperature in C (hot water/heating)
@@ -68,7 +68,7 @@ If no configuration is given, the module will be disabled. Each `update_every` i
Original author: BrainDoctor (github)
-The module supports any metrics that are parseable with RegEx. There is no API that gives direct access to the values (AFAIK), so the "workaround" is to parse the HTML output of the ISG.
+The module supports any metrics that are parsable with RegEx. There is no API that gives direct access to the values (AFAIK), so the "workaround" is to parse the HTML output of the ISG.
### Testing
diff --git a/collectors/perf.plugin/README.md b/collectors/perf.plugin/README.md
index d4bb41cb60..ccd185cedb 100644
--- a/collectors/perf.plugin/README.md
+++ b/collectors/perf.plugin/README.md
@@ -64,7 +64,7 @@ enable the perf plugin, edit /etc/netdata/netdata.conf and set:
You can use the `command options` parameter to pick what data should be collected and which charts should be
displayed. If `all` is used, all general performance monitoring counters are probed and corresponding charts
are enabled for the available counters. You can also define a particular set of enabled charts using the
-following keywords: `cycles`, `instructions`, `branch`, `cache`, `bus`, `stalled`, `migrations`, `alighnment`,
+following keywords: `cycles`, `instructions`, `branch`, `cache`, `bus`, `stalled`, `migrations`, `alignment`,
`emulation`, `L1D`, `L1D-prefetch`, `L1I`, `LL`, `DTLB`, `ITLB`, `PBU`.
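For illustration, a hedged sketch of such a `netdata.conf` stanza — the section name and values here are assumptions, so verify them against your installation:

```conf
# hypothetical: probe only a subset of perf counters
[plugin:perf]
    update every = 1
    command options = cycles instructions cache migrations
```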
## Debugging
diff --git a/collectors/plugins.d/README.md b/collectors/plugins.d/README.md
index 913ad9177c..c166e11e36 100644
--- a/collectors/plugins.d/README.md
+++ b/collectors/plugins.d/README.md
@@ -79,7 +79,7 @@ Example:
```
The setting `enable running new plugins` sets the default behavior for all external plugins. It can be
-overriden for distinct plugins by modifying the appropriate plugin value configuration to either `yes` or `no`.
+overridden for distinct plugins by modifying the appropriate plugin value configuration to either `yes` or `no`.
The setting `check for new plugins every` sets the interval between scans of the directory
`/usr/libexec/netdata/plugins.d`. New plugins can be added any time, and Netdata will detect them in a timely manner.
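Both settings live in the `[plugins]` section of `netdata.conf`; a minimal sketch with hypothetical values:

```conf
[plugins]
    enable running new plugins = yes
    check for new plugins every = 60
```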
diff --git a/collectors/python.d.plugin/am2320/README.md b/collectors/python.d.plugin/am2320/README.md
index c17b33dfa1..14ddaa735d 100644
--- a/collectors/python.d.plugin/am2320/README.md
+++ b/collectors/python.d.plugin/am2320/README.md
@@ -6,7 +6,7 @@ sidebar_label: "AM2320"
# AM2320 sensor monitoring with netdata
-Displays a graph of the temperature and humity from a AM2320 sensor.
+Displays a graph of the temperature and humidity from a AM2320 sensor.
## Requirements
- Adafruit Circuit Python AM2320 library
@@ -28,10 +28,10 @@ cd /etc/netdata # Replace this path with your Netdata config directory, if dif
sudo ./edit-config python.d/am2320.conf
```
-Raspbery Pi Instructions:
+Raspberry Pi Instructions:
Hardware install:
-Connect the am2320 to the Raspbery Pi I2C pins
+Connect the am2320 to the Raspberry Pi I2C pins
Raspberry Pi 3B/4 Pins:
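A plausible wiring for the standard Raspberry Pi I2C header (verify against your board's pinout before connecting anything):

- Board 3.3V (pin 1) to sensor VIN
- Board SDA (pin 3) to sensor SDA
- Board GND (pin 6) to sensor GND
- Board SCL (pin 5) to sensor SCL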
diff --git a/collectors/python.d.plugin/anomalies/README.md b/collectors/python.d.plugin/anomalies/README.md
index 8346aa6693..1e27f3b5be 100644
--- a/collectors/python.d.plugin/anomalies/README.md
+++ b/collectors/python.d.plugin/anomalies/README.md
@@ -134,7 +134,7 @@ local:
diffs_n: 1
# What is the typical proportion of anomalies in your data on average?
- # This paramater can control the sensitivity of your models to anomalies.
+ # This parameter can control the sensitivity of your models to anomalies.
# Some discussion here: https://github.com/yzhao062/pyod/issues/144
contamination: 0.001
@@ -142,7 +142,7 @@ local:
# just the average of all anomaly probabilities at each time step
include_average_prob: true
- # Define any custom models you would like to create anomaly probabilties for, some examples below to show how.
+ # Define any custom models you would like to create anomaly probabilities for, some examples below to show how.
# For example below example creates two custom models, one to run anomaly detection user and system cpu for our demo servers
# and one on the cpu and mem apps metrics for the python.d.plugin.
# custom_models:
@@ -161,7 +161,7 @@ local:
In the `anomalies.conf` file you can also define some "custom models" which you can use to group one or more metrics into a single model much like is done by default for the charts you specify. This is useful if you have a handful of metrics that exist in different charts but perhaps are related to the same underlying thing you would like to perform anomaly detection on, for example a specific app or user.
-To define a custom model you would include configuation like below in `anomalies.conf`. By default there should already be some commented out examples in there.
+To define a custom model you would include configuration like below in `anomalies.conf`. By default there should already be some commented out examples in there.
`name` is a name you give your custom model, this is what will appear alongside any other specified charts in the `anomalies.probability` and `anomalies.anomaly` charts. `dimensions` is a string of metrics you want to include in your custom model. By default the [netdata-pandas](https://github.com/netdata/netdata-pandas) library used to pull the data from Netdata uses a "chart.a|dim.1" type of naming convention in the pandas columns it returns, hence the `dimensions` string should look like "chart.name|dimension.name,chart.name|dimension.name". The examples below hopefully make this clear.
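For instance, a hedged sketch of a single custom model grouping two CPU dimensions — the model name and dimension strings below are hypothetical:

```yaml
custom_models:
  - name: 'my_app_cpu'
    dimensions: 'system.cpu|user,system.cpu|system'
```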
@@ -194,7 +194,7 @@ sudo su -s /bin/bash netdata
/usr/libexec/netdata/plugins.d/python.d.plugin anomalies debug trace nolock
```
-## Deepdive turorial
+## Deepdive tutorial
If you would like to go deeper on what exactly the anomalies collector is doing under the hood then check out this [deepdive tutorial](https://github.com/netdata/community/blob/main/netdata-agent-api/netdata-pandas/anomalies_collector_deepdive.ipynb) in our community repo where you can play around with some data from our demo servers (or your own if its accessible to you) and work through the calculations step by step.
@@ -206,7 +206,7 @@ If you would like to go deeper on what exactly the anomalies collector is doing
- Python 3 is also required for the underlying ML libraries of [numba](https://pypi.org/project/numba/), [scikit-learn](https://pypi.org/project/scikit-learn/), and [PyOD](https://pypi.org/project/pyod/).
- It may take a few hours or so (depending on your choice of `train_secs_n`) for the collector to 'settle' into it's typical behaviour in terms of the trained models and probabilities you will see in the normal running of your node.
- As this collector does most of the work in Python itself, with [PyOD](https://pyod.readthedocs.io/en/latest/) leveraging [numba](https://numba.pydata.org/) under the hood, you may want to try it out first on a test or development system to get a sense of its performance characteristics on a node similar to where you would like to use it.
-- `lags_n`, `smooth_n`, and `diffs_n` together define the preprocessing done to the raw data before models are trained and before each prediction. This essentially creates a [feature vector](https://en.wikipedia.org/wiki/Feature_(machine_learning)#:~:text=In%20pattern%20recognition%20and%20machine,features%20that%20represent%20some%20object.&text=Feature%20vectors%20are%20often%20combined,score%20for%20making%20a%20prediction.) for each chart model (or each custom model). The default settings for these parameters aim to create a rolling matrix of recent smoothed [differenced](https://en.wikipedia.org/wiki/Autoregressive_integrated_moving_average#Differencing) values for each chart. The aim of the model then is to score how unusual this 'matrix' of features is for each chart based on what it has learned as 'normal' from the training data. So as opposed to just looking at the single most recent value of a dimension and considering how strange it is, this approach looks at a recent smoothed window of all dimensions for a chart (or dimensions in a custom model) and asks how unusual the data as a whole looks. This should be more flexibile in capturing a wider range of [anomaly types](https://andrewm4894.com/2020/10/19/different-types-of-time-series-anomalies/) and be somewhat more robust to temporary 'spikes' in the data that tend to always be happening somewhere in your metrics but often are not the most important type of anomaly (this is all covered in a lot more detail in the [deepdive tutorial](https://nbviewer.jupyter.org/github/netdata/community/blob/main/netdata-agent-api/netdata-pandas/anomalies_collector_deepdive.ipynb)).
+- `lags_n`, `smooth_n`, and `diffs_n` together define the preprocessing done to the raw data before models are trained and before each prediction. This essentially creates a [feature vector](https://en.wikipedia.org/wiki/Feature_(machine_learning)#:~:text=In%20pattern%20recognition%20and%20machine,features%20that%20represent%20some%20object.&text=Feature%20vectors%20are%20often%20combined,score%20for%20making%20a%20prediction.) for each chart model (or each custom model). The default settings for these parameters aim to create a rolling matrix of recent smoothed [differenced](https://en.wikipedia.org/wiki/Autoregressive_integrated_moving_average#Differencing) values for each chart. The aim of the model then is to score how unusual this 'matrix' of features is for each chart based on what it has learned as 'normal' from the training data. So as opposed to just looking at the single most recent value of a dimension and considering how strange it is, this approach looks at a recent smoothed window of all dimensions for a chart (or dimensions in a custom model) and asks how unusual the data as a whole looks. This should be more flexible in capturing a wider range of [anomaly types](https://andrewm4894.com/2020/10/19/different-types-of-time-series-anomalies/) and be somewhat more robust to temporary 'spikes' in the data that tend to always be happening somewhere in your metrics but often are not the most important type of anomaly (this is all covered in a lot more detail in the [deepdive tutorial](https://nbviewer.jupyter.org/github/netdata/community/blob/main/netdata-agent-api/netdata-pandas/anomalies_collector_deepdive.ipynb)).
- You can see how long model training is taking by looking in the logs for the collector `grep 'anomalies' /var/log/netdata/error.log | grep 'training'` and you should see lines like `2020-12-01 22:02:14: python.d INFO: anomalies[local] : training complete in 2.81 seconds (runs_counter=2700, model=pca, train_n_secs=14400, models=26, n_fit_success=26, n_fit_fails=0, after=1606845731, before=1606860131).`.
- This also gives counts of the number of models, if any, that failed to fit and so had to default back to the DefaultModel (which is currently [HBOS](https://pyod.readthedocs.io/en/latest/_modules/pyod/models/hbos.html)).
- `after` and `before` here refer to the start and end of the training data used to train the models.
@@ -215,8 +215,8 @@ If you would like to go deeper on what exactly the anomalies collector is doing
- Typically ~3%-3.5% additional cpu usage from scoring, jumping to ~60% for a couple of seconds during model training.
- About ~150mb of ram (`apps.mem`) being continually used by the `python.d.plugin`.
- If you activate this collector on a fresh node, it might take a little while to build up enough data to calculate a realistic and useful model.
-- Some models like `iforest` can be comparatively expensive (on same n1-standard-2 system above ~2s runtime during predict, ~40s training time, ~50% cpu on both train and predict) so if you would like to use it you might be advised to set a relativley high `update_every` maybe 10, 15 or 30 in `anomalies.conf`.
-- Setting a higher `train_every_n` and `update_every` is an easy way to devote less resources on the node to anomaly detection. Specifying less charts and a lower `train_n_secs` will also help reduce resources at the expense of covering less charts and maybe a more noisey model if you set `train_n_secs` to be too small for how your node tends to behave.
+- Some models like `iforest` can be comparatively expensive (on same n1-standard-2 system above ~2s runtime during predict, ~40s training time, ~50% cpu on both train and predict) so if you would like to use it you might be advised to set a relatively high `update_every` maybe 10, 15 or 30 in `anomalies.conf`.
+- Setting a higher `train_every_n` and `update_every` is an easy way to devote less resources on the node to anomaly detection. Specifying less charts and a lower `train_n_secs` will also help reduce resources at the expense of covering less charts and maybe a more noisy model if you set `train_n_secs` to be too small for how your node tends to behave.
## Useful links and further reading
diff --git a/collectors/python.d.plugin/dovecot/README.md b/collectors/python.d.plugin/dovecot/README.md
index 55aeed3eb5..730b64257b 100644
--- a/collectors/python.d.plugin/dovecot/README.md
+++ b/collectors/python.d.plugin/dovecot/README.md
@@ -38,8 +38,8 @@ Module gives information with following charts:
5. **Context Switches**
- - volountary
- - involountary
+ - voluntary
+ - involuntary
6. **disk** in bytes/s
diff --git a/collectors/python.d.plugin/go_expvar/README.md b/collectors/python.d.plugin/go_expvar/README.md
index 66ebc0b67b..a73610e7a1 100644
--- a/collectors/python.d.plugin/go_expvar/README.md
+++ b/collectors/python.d.plugin/go_expvar/README.md
@@ -69,7 +69,7 @@ Sample output:
```json
{
"cmdline": ["./expvar-demo-binary"],
-"memstats": {"Alloc":630856,"TotalAlloc":630856,"Sys":3346432,"Lookups":27, <ommited for brevity>}
+"memstats": {"Alloc":630856,"TotalAlloc":630856,"Sys":3346432,"Lookups":27, <omitted for brevity>}
}
```
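The sample above is what Go's standard `expvar` package emits. As a minimal, self-contained sketch (the metric name and port are hypothetical), a program exposing such output might look like:

```go
// Importing "expvar" registers a JSON handler at /debug/vars
// on the default HTTP mux.
package main

import (
	"expvar"
	"net/http"
)

// requestsTotal is a hypothetical example metric; memstats and
// cmdline are published automatically by the expvar package.
var requestsTotal = expvar.NewInt("requests_total")

func main() {
	http.HandleFunc("/", func(w http.ResponseWriter, r *http.Request) {
		requestsTotal.Add(1) // increments show up in /debug/vars
		w.Write([]byte("ok"))
	})
	http.ListenAndServe(":8080", nil)
}
```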
diff --git a/collectors/python.d.plugin/mongodb/README.md b/collectors/python.d.plugin/mongodb/README.md
index 5d5295aa46..c0df123d7a 100644
--- a/collectors/python.d.plugin/mongodb/README.md
+++ b/collectors/python.d.plugin/mongodb/README.md
@@ -80,7 +80,7 @@ Number of charts depends on mongodb version, storage engine and other features (
13. **Cache metrics** (WiredTiger):
- percentage of bytes currently in the cache (amount of space taken by cached data)
- - percantage of tracked dirty bytes in the cache (amount of space taken by dirty data)
+ - percentage of tracked dirty bytes in the cache (amount of space taken by dirty data)
14. **Pages evicted from cache** (WiredTiger):
diff --git a/collectors/python.d.plugin/mysql/README.md b/collectors/python.d.plugin/mysql/README.md
index 5b9feadd54..d8d3c1d0b1 100644
--- a/collectors/python.d.plugin/mysql/README.md
+++ b/collectors/python.d.plugin/mysql/README.md
@@ -67,7 +67,7 @@ This module will produce following charts (if data is available):
- immediate
- waited
-6. **Table Select Join Issuess** in joins/s
+6. **Table Select Join Issues** in joins/s
- full join
- full range join
@@ -75,7 +75,7 @@ This module will produce following charts (if data is available):
- range check
- scan
-7. **Table Sort Issuess** in joins/s
+7. **Table Sort Issues** in joins/s
- merge passes
- range
@@ -164,7 +164,7 @@ This module will produce following charts (if data is available):
- updated
- deleted
-24. **InnoDB Buffer Pool Pagess** in pages
+24. **InnoDB Buffer Pool Pages** in pages
- data
- dirty
diff --git a/collectors/python.d.plugin/postgres/README.md b/collectors/python.d.plugin/postgres/README.md
index 67cc8fe323..3d573d6dcc 100644
--- a/collectors/python.d.plugin/postgres/README.md
+++ b/collectors/python.d.plugin/postgres/README.md
@@ -22,7 +22,7 @@ Following charts are drawn:
- active
-3. **Current Backend Processe Usage** percentage
+3. **Current Backend Process Usage** percentage
- used
- available
diff --git a/collectors/python.d.plugin/proxysql/README.md b/collectors/python.d.plugin/proxysql/README.md
index 6f4ca69131..f1b369a446 100644
--- a/collectors/python.d.plugin/proxysql/README.md
+++ b/collectors/python.d.plugin/proxysql/README.md
@@ -31,7 +31,7 @@ It produces:
- questions: total number of queries sent from frontends
- slow_queries: number of queries that ran for longer than the threshold in milliseconds defined in global variable `mysql-long_query_time`
-3. **Overall Bandwith (backends)**
+3. **Overall Bandwidth (backends)**
- in
- out
@@ -45,7 +45,7 @@ It produces:
- `4=OFFLINE_HARD`: when a server is put into OFFLINE_HARD mode, the existing connections are dropped, while new incoming connections aren't accepted either. This is equivalent to deleting the server from a hostgroup, or temporarily taking it out of the hostgroup for maintenance work
- `-1`: Unknown status
-5. **Bandwith (backends)**
+5. **Bandwidth (backends)**
- Backends
- in
diff --git a/collectors/python.d.plugin/samba/README.md b/collectors/python.d.plugin/samba/README.md
index 2c86e7b609..ed26d28718 100644
--- a/collectors/python.d.plugin/samba/README.md
+++ b/collectors/python.d.plugin/samba/README.md
@@ -21,7 +21,7 @@ It produces the following charts:
1. **Syscall R/Ws** in kilobytes/s
- sendfile
- - recvfle
+ - recvfile
2. **Smb2 R/Ws** in kilobytes/s
diff --git a/collectors/python.d.plugin/springboot/README.md b/collectors/python.d.plugin/springboot/README.md
index 46bc2d3568..f38e8bf05a 100644
--- a/collectors/python.d.plugin/springboot/README.md
+++ b/collectors/python.d.plugin/springboot/README.md
@@ -93,7 +93,7 @@ Please refer [Spring Boot Actuator: Production-ready Features](https://docs.spri
- MarkSweep
- ...
-4. **Heap Mmeory Usage** in KB
+4. **Heap Memory Usage** in KB
- used
- committed
diff --git a/collectors/statsd.plugin/README.md b/collectors/statsd.plugin/README.md
index d5bc0d1ad5..332b60e735 100644
--- a/collectors/statsd.plugin/README.md
+++ b/collectors/statsd.plugin/README.md
@@ -38,7 +38,7 @@ Netdata fully supports the statsd protocol. All statsd client libraries can be u
`:value` can be omitted and statsd will assume it is `1`. `|c`, `|C` and `|m` can be omitted an statsd will assume it is `|m`. So, the application may send just `name` and statsd will parse it as `name:1|m`.
- For counters use `|c` (esty/statsd compatible) or `|C` (brubeck compatible), for meters use `|m`.
+ For counters use `|c` (etsy/statsd compatible) or `|C` (brubeck compatible), for meters use `|m`.
Sampling rate is supported (check below).
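Putting those rules together, here are a few example packets with hypothetical metric names — the first two are counters, the third a meter, and the last relies on both defaults, so it is parsed as `myapp.events:1|m`:

```
myapp.requests:10|c
myapp.requests:10|C
myapp.events:1|m
myapp.events
```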
@@ -290,7 +290,7 @@ dimension = [pattern] METRIC NAME TYPE MULTIPLIER DIVIDER OPTIONS
`pattern` is a keyword. When set, `METRIC` is expected to be a Netdata simple pattern that will be used to match all the statsd metrics to be added to the chart. So, `pattern` automatically matches any number of statsd metrics, all of which will be added as separate chart dimensions.
-`TYPE`, `MUTLIPLIER`, `DIVIDER` and `OPTIONS` are optional.
+`TYPE`, `MULTIPLIER`, `DIVIDER` and `OPTIONS` are optional.
`TYPE` can be:
diff --git a/collectors/tc.plugin/README.md b/collectors/tc.plugin/README.md
index 70e31c236b..480076087e 100644
--- a/collectors/tc.plugin/README.md
+++ b/collectors/tc.plugin/README.md
@@ -172,7 +172,7 @@ And this is what you are going to get:
## QoS Configuration with tc
-First, setup the tc rules in rc.local using commands to assign different DSCP markings to different classids. You can see one such example in [github issue #4563](https://github.com/netdata/netdata/issues/4563#issuecomment-455711973).
+First, setup the tc rules in rc.local using commands to assign different QoS markings to different classids. You can see one such example in [github issue #4563](https://github.com/netdata/netdata/issues/4563#issuecomment-455711973).
Then, map the classids to names by creating `/etc/iproute2/tc_cls`. For example:
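One plausible shape for that file — the classids and names below are placeholder values, not taken from this diff:

```
2:1 Standard
2:8 LowPriorityData
2:10 HighThroughputData
2:16 OAM
2:18 LowLatencyData
```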