Diffstat (limited to 'collectors')
-rw-r--r--  collectors/README.md | 25
-rw-r--r--  collectors/apps.plugin/README.md | 44
-rw-r--r--  collectors/cgroups.plugin/README.md | 48
-rw-r--r--  collectors/charts.d.plugin/README.md | 24
-rw-r--r--  collectors/charts.d.plugin/ap/README.md | 2
-rw-r--r--  collectors/charts.d.plugin/apache/README.md | 8
-rw-r--r--  collectors/charts.d.plugin/sensors/README.md | 2
-rw-r--r--  collectors/diskspace.plugin/README.md | 2
-rw-r--r--  collectors/fping.plugin/README.md | 8
-rw-r--r--  collectors/freebsd.plugin/README.md | 2
-rw-r--r--  collectors/freeipmi.plugin/README.md | 8
-rw-r--r--  collectors/ioping.plugin/README.md | 10
-rw-r--r--  collectors/macos.plugin/README.md | 2
-rw-r--r--  collectors/nfacct.plugin/README.md | 4
-rw-r--r--  collectors/node.d.plugin/README.md | 16
-rw-r--r--  collectors/node.d.plugin/fronius/README.md | 4
-rw-r--r--  collectors/node.d.plugin/named/README.md | 8
-rw-r--r--  collectors/node.d.plugin/sma_webbox/README.md | 2
-rw-r--r--  collectors/node.d.plugin/snmp/README.md | 8
-rw-r--r--  collectors/node.d.plugin/stiebeleltron/README.md | 2
-rw-r--r--  collectors/plugins.d/README.md | 90
-rw-r--r--  collectors/proc.plugin/README.md | 26
-rw-r--r--  collectors/python.d.plugin/README.md | 6
-rw-r--r--  collectors/python.d.plugin/chrony/README.md | 2
-rw-r--r--  collectors/python.d.plugin/dovecot/README.md | 2
-rw-r--r--  collectors/python.d.plugin/fail2ban/README.md | 2
-rw-r--r--  collectors/python.d.plugin/go_expvar/README.md | 20
-rw-r--r--  collectors/python.d.plugin/haproxy/README.md | 2
-rw-r--r--  collectors/python.d.plugin/httpcheck/README.md | 2
-rw-r--r--  collectors/python.d.plugin/isc_dhcpd/README.md | 2
-rw-r--r--  collectors/python.d.plugin/logind/README.md | 2
-rw-r--r--  collectors/python.d.plugin/mongodb/README.md | 2
-rw-r--r--  collectors/python.d.plugin/oracledb/README.md | 2
-rw-r--r--  collectors/python.d.plugin/portcheck/README.md | 2
-rw-r--r--  collectors/python.d.plugin/web_log/README.md | 6
-rw-r--r--  collectors/statsd.plugin/README.md | 52
-rw-r--r--  collectors/tc.plugin/README.md | 14
-rw-r--r--  collectors/xenstat.plugin/README.md | 4
38 files changed, 233 insertions, 234 deletions
diff --git a/collectors/README.md b/collectors/README.md
index 7252138893..1407cb16cb 100644
--- a/collectors/README.md
+++ b/collectors/README.md
@@ -1,20 +1,20 @@
# Data collection plugins
-netdata supports **internal** and **external** data collection plugins:
+Netdata supports **internal** and **external** data collection plugins:
-- **internal** plugins are written in `C` and run as threads inside the netdata daemon.
+- **internal** plugins are written in `C` and run as threads inside the `netdata` daemon.
-- **external** plugins may be written in any computer language and are spawn as independent long-running processes by the netdata daemon.
- They communicate with the netdata daemon via `pipes` (`stdout` communication).
+- **external** plugins may be written in any computer language and are spawned as independent long-running processes by the `netdata` daemon.
+ They communicate with the `netdata` daemon via `pipes` (`stdout` communication).
-To minimize the number of processes spawn for data collection, netdata also supports **plugin orchestrators**.
+To minimize the number of processes spawned for data collection, Netdata also supports **plugin orchestrators**.
- **plugin orchestrators** are external plugins that do not collect any data by themselves.
Instead they support data collection **modules** written in the language of the orchestrator.
Usually the orchestrator provides a higher level abstraction, making it ideal for writing new
data collection modules with the minimum of code.
- Currently netdata provides plugin orchestrators
+ Currently, Netdata provides the following plugin orchestrators:
BASH v4+ [charts.d.plugin](charts.d.plugin/),
node.js [node.d.plugin](node.d.plugin/) and
python v2+ (including v3) [python.d.plugin](python.d.plugin/).
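As a rough sketch of the `stdout` communication described above, an external plugin is simply a long-running process that prints the plugins.d commands to its standard output. The chart and dimension names below are made up for illustration; see [plugins.d](plugins.d/) for the authoritative command reference.

```sh
#!/usr/bin/env bash
# minimal external plugin sketch: declare a chart once, then stream values forever
echo "CHART example.random '' 'A random number' 'number' random example.random line"
echo "DIMENSION random '' absolute 1 1"
while true; do
    echo "BEGIN example.random"
    echo "SET random = $RANDOM"
    echo "END"
    sleep 1
done
```

Netdata would spawn such a script itself and read these lines from the pipe, exactly as described above.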
@@ -42,7 +42,7 @@ plugin|lang|O/S|runs as|modular|description
[plugins.d](plugins.d/)|`C`|any|internal|-|implements the **external plugins** API and serves external plugins
[proc.plugin](proc.plugin/)|`C`|linux|internal|yes|collects resource usage and performance data on Linux systems
[python.d.plugin](python.d.plugin/)|`python` v2+|any|external|yes|a **plugin orchestrator** for data collection modules written in `python` v2 or v3 (both are supported).
-[statsd.plugin](statsd.plugin/)|`C`|any|internal|-|implements a high performance **statsd** server for netdata
+[statsd.plugin](statsd.plugin/)|`C`|any|internal|-|implements a high performance **statsd** server for Netdata
[tc.plugin](tc.plugin/)|`C`|linux|internal|-|collects traffic QoS metrics (`tc`) of Linux network interfaces
## Enabling and Disabling plugins
@@ -59,7 +59,7 @@ All **external plugins** are managed by [plugins.d](plugins.d/), which provides
### Internal Plugins
-Each of the internal plugins runs as a thread inside the netdata daemon.
+Each of the internal plugins runs as a thread inside the `netdata` daemon.
Once this thread has started, the plugin may spawn additional threads according to its design.
#### Internal Plugins API
@@ -72,7 +72,7 @@ collect_data() {
collected_number collected_value = collect_a_value();
- // give the metrics to netdata
+ // give the metrics to Netdata
static RRDSET *st = NULL; // the chart
static RRDDIM *rd = NULL; // a dimension attached to this chart
@@ -100,20 +100,19 @@ collect_data() {
}
else {
// this chart is already created
- // let netdata know we start a new iteration on it
+ // let Netdata know we start a new iteration on it
rrdset_next(st);
}
// give the collected value(s) to the chart
rrddim_set_by_pointer(st, rd, collected_value);
- // signal netdata we are done with this iteration
+ // signal Netdata we are done with this iteration
rrdset_done(st);
}
```
-Of course netdata has a lot of libraries to help you also in collecting the metrics.
-The best way to find your way through this, is to examine what other similar plugins do.
+Of course, Netdata also has a lot of libraries to help you collect the metrics. The best way to find your way through this is to examine what other similar plugins do.
### External Plugins
diff --git a/collectors/apps.plugin/README.md b/collectors/apps.plugin/README.md
index ee5c6971ab..bf57ea648f 100644
--- a/collectors/apps.plugin/README.md
+++ b/collectors/apps.plugin/README.md
@@ -5,9 +5,9 @@
To achieve this task, it iterates through the whole process tree, collecting resource usage information
for every process found running.
-Since netdata needs to present this information in charts and track them through time,
+Since Netdata needs to present this information in charts and track them through time,
instead of presenting a `top` like list, `apps.plugin` uses a pre-defined list of **process groups**
-to which it assigns all running processes. This list is [customizable](apps_groups.conf) and netdata
+to which it assigns all running processes. This list is [customizable](apps_groups.conf) and Netdata
ships with a good default for most cases (to edit it on your system run `/etc/netdata/edit-config apps_groups.conf`).
So, `apps.plugin` builds a process tree (much like `ps fax` does in Linux), and groups
@@ -15,7 +15,7 @@ processes together (evaluating both child and parent processes) so that the resu
a predefined set of members (of course, only process groups found running are reported).
> If you find that `apps.plugin` categorizes standard applications as `other`, we would be
-> glad to accept pull requests improving the [defaults](apps_groups.conf) shipped with netdata.
+> glad to accept pull requests improving the [defaults](apps_groups.conf) shipped with Netdata.
Unlike traditional process monitoring tools (like `top`), `apps.plugin` is able to account for the resource
utilization of exited processes. Their utilization is accounted at their currently running parents.
@@ -26,9 +26,9 @@ that fork/spawn other short lived processes hundreds of times per second.
`apps.plugin` provides charts for 3 sections:
-1. Per application charts as **Applications** at netdata dashboards
-2. Per user charts as **Users** at netdata dashboards
-3. Per user group charts as **User Groups** at netdata dashboards
+1. Per application charts as **Applications** at Netdata dashboards
+2. Per user charts as **Users** at Netdata dashboards
+3. Per user group charts as **User Groups** at Netdata dashboards
Each of these sections provides the same number of charts:
@@ -64,7 +64,7 @@ The above are reported:
`apps.plugin` is a complex piece of software and has a lot of work to do
We are proud that `apps.plugin` is a lot faster compared to any other similar tool,
while collecting a lot more information for the processes, however the fact is that
-this plugin requires more CPU resources than the netdata daemon itself.
+this plugin requires more CPU resources than the `netdata` daemon itself.
Under Linux, for each process running, `apps.plugin` reads several `/proc` files
per process. Doing this work per-second, especially on hosts with several thousands
@@ -135,14 +135,14 @@ The order of the entries in this list is important: the first that matches a pro
ones at the top. Processes not matched by any row will inherit it from their parents or children.
The order also controls the order of the dimensions on the generated charts (although applications started
-after apps.plugin is started, will be appended to the existing list of dimensions the netdata daemon maintains).
+after `apps.plugin` is started will be appended to the existing list of dimensions the `netdata` daemon maintains).
## Permissions
`apps.plugin` requires additional privileges to collect all the information it needs.
The problem is described in issue #157.
-When netdata is installed, `apps.plugin` is given the capabilities `cap_dac_read_search,cap_sys_ptrace+ep`.
+When Netdata is installed, `apps.plugin` is given the capabilities `cap_dac_read_search,cap_sys_ptrace+ep`.
If this fails (i.e. `setcap` fails), `apps.plugin` is setuid to `root`.
#### linux capabilities in containers
@@ -158,15 +158,15 @@ chown root:netdata /usr/libexec/netdata/plugins.d/apps.plugin
chmod 4750 /usr/libexec/netdata/plugins.d/apps.plugin
```
-You will have to run these, every time you update netdata.
+You will have to run these every time you update Netdata.
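Similarly, the `setcap` route mentioned above could be re-applied by hand with something like the following sketch. The capability string is the one quoted in this section; the plugin path matches the commands above but may differ on your install.

```sh
# re-apply the capabilities described above to apps.plugin (adjust the path for your install)
sudo setcap cap_dac_read_search,cap_sys_ptrace+ep /usr/libexec/netdata/plugins.d/apps.plugin
```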
## Security
`apps.plugin` performs a hard-coded function of building the process tree in memory,
-iterating forever, collecting metrics for each running process and sending them to netdata.
-This is a one-way communication, from `apps.plugin` to netdata.
+iterating forever, collecting metrics for each running process and sending them to Netdata.
+This is a one-way communication, from `apps.plugin` to Netdata.
-So, since `apps.plugin` cannot be instructed by netdata for the actions it performs,
+So, since `apps.plugin` cannot be instructed by Netdata for the actions it performs,
we think it is pretty safe to allow it to have these increased privileges.
Keep in mind that `apps.plugin` will still run without escalated permissions,
@@ -210,7 +210,7 @@ For more information about badges check [Generating Badges](../../web/api/badges
## Comparison with console tools
-Ssh to a server running netdata and execute this:
+SSH to a server running Netdata and execute this:
```sh
while true; do ls -l /var/run >/dev/null; done
@@ -318,24 +318,24 @@ FILE SYS Used Total 0.3 2.1 7009 netdata 0 S /usr/sbin/netdata
/ (vda1) 1.56G 29.5G 0.0 0.0 17 root 0 S oom_reaper
```
-#### why this happens?
+#### why does this happen?
All the console tools report usage based on the processes found running *at the moment they
examine the process tree*. So, they see just one `ls` command, which is actually very quick
with minor CPU utilization. But the shell is spawning hundreds of them, one after another
(much like shell scripts do).
-#### what netdata reports?
+#### What does Netdata report?
The total CPU utilization of the system:
![image](https://cloud.githubusercontent.com/assets/2662304/21076212/9198e5a6-bf2e-11e6-9bc0-6bdea25befb2.png)
-<br/>_**Figure 1**: The system overview section at netdata, just a few seconds after the command was run_
+<br/>_**Figure 1**: The system overview section at Netdata, just a few seconds after the command was run_
And at the applications `apps.plugin` breaks down CPU usage per application:
![image](https://cloud.githubusercontent.com/assets/2662304/21076220/c9687848-bf2e-11e6-8d81-348592c5aca2.png)
-<br/>_**Figure 2**: The Applications section at netdata, just a few seconds after the command was run_
+<br/>_**Figure 2**: The Applications section at Netdata, just a few seconds after the command was run_
So, the `ssh` session is using 95% CPU time.
@@ -344,7 +344,7 @@ Why `ssh`?
`apps.plugin` groups all processes based on its configuration file
[`/etc/netdata/apps_groups.conf`](apps_groups.conf)
(to edit it on your system run `/etc/netdata/edit-config apps_groups.conf`).
-The default configuration has nothing for `bash`, but it has for `sshd`, so netdata accumulates
+The default configuration has nothing for `bash`, but it has for `sshd`, so Netdata accumulates
all ssh sessions to a dimension on the charts, called `ssh`. This includes all the processes in
the process tree of `sshd`, **including the exited children**.
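For reference, entries in `apps_groups.conf` have the form `group: pattern ...`; a sketch of an `ssh` entry like the one described above could look like this (the defaults shipped with Netdata may differ):

```
# illustrative apps_groups.conf entry: group name, then the process patterns it matches
ssh: ssh*
```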
@@ -353,9 +353,9 @@ the process tree of `sshd`, **including the exited children**.
> `apps.plugin` does not use these mechanisms. The process grouping made by `apps.plugin` works
> on any Linux, `systemd` based or not.
-#### a more technical description of how netdata works
+#### a more technical description of how Netdata works
-netdata reads `/proc/<pid>/stat` for all processes, once per second and extracts `utime` and
+Netdata reads `/proc/<pid>/stat` for all processes, once per second and extracts `utime` and
`stime` (user and system cpu utilization), much like all the console tools do.
But it [also extracts `cutime` and `cstime`](https://github.com/netdata/netdata/blob/62596cc6b906b1564657510ca9135c08f6d4cdda/src/apps_plugin.c#L636-L642)
@@ -369,7 +369,7 @@ been reported for it prior to this iteration.
It is even trickier, because walking through the entire process tree takes some time itself. So,
if you sum the CPU utilization of all processes, you might have more CPU time than the reported
-total cpu time of the system. netdata solves this, by adapting the per process cpu utilization to
+total CPU time of the system. Netdata solves this by adapting the per-process CPU utilization to
the total of the system. [Netdata adds charts that document this normalization](https://london.my-netdata.io/default.html#menu_netdata_submenu_apps_plugin).
[![analytics](https://www.google-analytics.com/collect?v=1&aip=1&t=pageview&_s=1&ds=github&dr=https%3A%2F%2Fgithub.com%2Fnetdata%2Fnetdata&dl=https%3A%2F%2Fmy-netdata.io%2Fgithub%2Fcollectors%2Fapps.plugin%2FREADME&_u=MAC~&cid=5792dfd7-8dc4-476b-af31-da2fdb9f93d2&tid=UA-64295674-3)]()
diff --git a/collectors/cgroups.plugin/README.md b/collectors/cgroups.plugin/README.md
index 6ec9024da3..45d2da3be9 100644
--- a/collectors/cgroups.plugin/README.md
+++ b/collectors/cgroups.plugin/README.md
@@ -6,11 +6,11 @@ cgroups (or control groups), are a Linux kernel feature that provides accounting
cgroups are hierarchical, meaning that cgroups can contain child cgroups, which can contain more cgroups, etc. All accounting is reported (and resource usage limits are applied) also in a hierarchical way.
-To visualize cgroup metrics netdata provides configuration for cherry picking the cgroups of interest. By default (without any configuration) netdata should pick **systemd services**, all kinds of **containers** (lxc, docker, etc) and **virtual machines** spawn by managers that register them with cgroups (qemu, libvirt, etc).
+To visualize cgroup metrics, Netdata provides configuration for cherry-picking the cgroups of interest. By default (without any configuration) Netdata should pick **systemd services**, all kinds of **containers** (lxc, docker, etc.) and **virtual machines** spawned by managers that register them with cgroups (qemu, libvirt, etc.).
-## configuring netdata for cgroups
+## configuring Netdata for cgroups
-For each cgroup available in the system, netdata provides this configuration:
+For each cgroup available in the system, Netdata provides this configuration:
```
[plugin:cgroups]
@@ -21,9 +21,9 @@ But it also provides a few patterns to provide a sane default (`yes` or `no`).
Below we see how this works.
-### how netdata finds the available cgroups
+### how Netdata finds the available cgroups
-Linux exposes resource usage reporting and provides dynamic configuration for cgroups, using virtual files (usually) under `/sys/fs/cgroup`. netdata reads `/proc/self/mountinfo` to detect the exact mount point of cgroups. netdata also allows manual configuration of this mount point, using these settings:
+Linux exposes resource usage reporting and provides dynamic configuration for cgroups, using virtual files (usually) under `/sys/fs/cgroup`. Netdata reads `/proc/self/mountinfo` to detect the exact mount point of cgroups. Netdata also allows manual configuration of this mount point, using these settings:
```
[plugin:cgroups]
@@ -34,27 +34,27 @@ Linux exposes resource usage reporting and provides dynamic configuration for cg
path to /sys/fs/cgroup/devices = /sys/fs/cgroup/devices
```
-netdata rescans these directories for added or removed cgroups every `check for new cgroups every` seconds.
+Netdata rescans these directories for added or removed cgroups every `check for new cgroups every` seconds.
### hierarchical search for cgroups
-Since cgroups are hierarchical, for each of the directories shown above, netdata walks through the subdirectories recursively searching for cgroups (each subdirectory is another cgroup).
+Since cgroups are hierarchical, for each of the directories shown above, Netdata walks through the subdirectories recursively searching for cgroups (each subdirectory is another cgroup).
-For each of the directories found, netdata provides a configuration variable:
+For each of the directories found, Netdata provides a configuration variable:
```
[plugin:cgroups]
search for cgroups under PATH = yes | no
```
-To provide a sane default for this setting, netdata uses the following pattern list (patterns starting with `!` give a negative match and their order is important: the first matching a path will be used):
+To provide a sane default for this setting, Netdata uses the following pattern list (patterns starting with `!` give a negative match and their order is important: the first matching a path will be used):
```
[plugin:cgroups]
search for cgroups in subpaths matching = !*/init.scope !*-qemu !/init.scope !/system !/systemd !/user !/user.slice *
```
-So, we disable checking for **child cgroups** in systemd internal cgroups ([systemd services are monitored by netdata](#monitoring-systemd-services)), user cgroups (normally used for desktop and remote user sessions), qemu virtual machines (child cgroups of virtual machines) and `init.scope`. All others are enabled.
+So, we disable checking for **child cgroups** in systemd internal cgroups ([systemd services are monitored by Netdata](#monitoring-systemd-services)), user cgroups (normally used for desktop and remote user sessions), qemu virtual machines (child cgroups of virtual machines) and `init.scope`. All others are enabled.
### unified cgroups (cgroups v2) support
@@ -71,14 +71,14 @@ Unified cgroups use same name pattern matching as v1 cgroups. `cgroup_enable_sys
### enabled cgroups
-To check if the cgroup is enabled, netdata uses this setting:
+To check if the cgroup is enabled, Netdata uses this setting:
```
[plugin:cgroups]
enable cgroup NAME = yes | no
```
-To provide a sane default, netdata uses the following pattern list (it checks the pattern against the path of the cgroup):
+To provide a sane default, Netdata uses the following pattern list (it checks the pattern against the path of the cgroup):
```
[plugin:cgroups]
@@ -87,9 +87,9 @@ To provide a sane default, netdata uses the following pattern list (it checks th
The above provides the default `yes` or `no` setting for the cgroup. However, there is an additional step. In many cases the cgroups found in the `/sys/fs/cgroup` hierarchy are just random numbers and in many cases these numbers are ephemeral: they change across reboots or sessions.
-So, we need to somehow map the paths of the cgroups to names, to provide consistent netdata configuration (i.e. there is no point to say `enable cgroup 1234 = yes | no`, if `1234` is a random number that changes over time - we need a name for the cgroup first, so that `enable cgroup NAME = yes | no` will be consistent).
+So, we need to somehow map the paths of the cgroups to names, to provide consistent Netdata configuration (i.e. there is no point to say `enable cgroup 1234 = yes | no`, if `1234` is a random number that changes over time - we need a name for the cgroup first, so that `enable cgroup NAME = yes | no` will be consistent).
-For this mapping netdata provides 2 configuration options:
+For this mapping Netdata provides 2 configuration options:
```
[plugin:cgroups]
@@ -99,11 +99,11 @@ For this mapping netdata provides 2 configuration options:
The whole point of the additional pattern list is to limit the number of times the script will be called. Without this pattern list, the script might be called thousands of times, depending on the number of cgroups available in the system.
-The above pattern list is matched against the path of the cgroup. For matched cgroups, netdata calls the script [cgroup-name.sh](cgroup-name.sh.in) to get its name. This script queries `docker`, or applies heuristics to find give a name for the cgroup.
+The above pattern list is matched against the path of the cgroup. For matched cgroups, Netdata calls the script [cgroup-name.sh](cgroup-name.sh.in) to get its name. This script queries `docker`, or applies heuristics to find a name for the cgroup.
### charts with zero metrics
-By default, Netdata will enable monitoring metrics only when they are not zero. If they are constantly zero they are ignored. Metrics that will start having values, after netdata is started, will be detected and charts will be automatically added to the dashboard (a refresh of the dashboard is needed for them to appear though). Set `yes` for a chart instead of `auto` to enable it permanently. For example:
+By default, Netdata will enable monitoring metrics only when they are not zero. If they are constantly zero they are ignored. Metrics that start having values after Netdata is started will be detected, and charts will be automatically added to the dashboard (a refresh of the dashboard is needed for them to appear, though). Set `yes` for a chart instead of `auto` to enable it permanently. For example:
```
[plugin:cgroups]
@@ -118,7 +118,7 @@ CPU and memory limits are watched and used to rise alarms. Memory usage for ever
## Monitoring systemd services
-netdata monitors **systemd services**. Example:
+Netdata monitors **systemd services**. Example:
![image](https://cloud.githubusercontent.com/assets/2662304/21964372/20cd7b84-db53-11e6-98a2-b9c986b082c0.png)
@@ -175,7 +175,7 @@ sudo systemctl daemon-reexec
(`systemctl daemon-reload` does not reload the configuration of the server - so you have to execute `systemctl daemon-reexec`).
-Now, when you run `systemd-cgtop`, services will start reporting usage (if it does not, restart a service - any service - to wake it up). Refresh your netdata dashboard, and you will have the charts too.
+Now, when you run `systemd-cgtop`, services will start reporting usage (if they do not, restart a service - any service - to wake them up). Refresh your Netdata dashboard, and you will have the charts too.
In case memory accounting is missing, you will need to enable it at your kernel, by appending the following kernel boot options and rebooting:
@@ -185,7 +185,7 @@ cgroup_enable=memory swapaccount=1
You can add the above directly to the `linux` line in your `/boot/grub/grub.cfg`, or append them to `GRUB_CMDLINE_LINUX` in `/etc/default/grub` (in which case you will have to run `update-grub` before rebooting). On DigitalOcean debian images you may have to set it at `/etc/default/grub.d/50-cloudimg-settings.cfg`.
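As a sketch, the `/etc/default/grub` edit described above might look like this (any options already present in `GRUB_CMDLINE_LINUX` on your system should be kept):

```
GRUB_CMDLINE_LINUX="cgroup_enable=memory swapaccount=1"
```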
-Which systemd services are monitored by netdata is determined by the following pattern list:
+Which systemd services are monitored by Netdata is determined by the following pattern list:
```
[plugin:cgroups]
@@ -196,27 +196,27 @@ Which systemd services are monitored by netdata is determined by the following p
## Monitoring ephemeral containers
-netdata monitors containers automatically when it is installed at the host, or when it is installed in a container that has access to the `/proc` and `/sys` filesystems of the host.
+Netdata monitors containers automatically when it is installed at the host, or when it is installed in a container that has access to the `/proc` and `/sys` filesystems of the host.
-netdata prior to v1.6 had 2 issues when such containers were monitored:
+Netdata prior to v1.6 had 2 issues when such containers were monitored:
1. network interface alarms were triggering when containers were stopped
2. charts were never cleaned up, so after some time dozens of containers were showing up on the dashboard, and they were occupying memory.
-### the current netdata
+### the current Netdata
network interfaces and cgroups (containers) are now self-cleaned.
-So, when a network interface or container stops, netdata might log a few errors in error.log complaining about files it cannot find, but immediately:
+So, when a network interface or container stops, Netdata might log a few errors in error.log complaining about files it cannot find, but immediately:
1. it will detect this is a removed container or network interface
2. it will freeze/pause all alarms for them
3. it will mark their charts as obsolete
4. obsolete charts will not be offered on new dashboard sessions (so hit F5 and the charts are gone)
5. existing dashboard sessions will continue to see them, but of course they will not refresh
-6. obsolete charts will be removed from memory, 1 hour after the last user viewed them (configurable with `[global].cleanup obsolete charts after seconds = 3600` (at netdata.conf).
+6. obsolete charts will be removed from memory, 1 hour after the last user viewed them (configurable with `[global].cleanup obsolete charts after seconds = 3600` in `netdata.conf`).
7. when obsolete charts are removed from memory they are also deleted from disk (configurable with `[global].delete obsolete charts files = yes`)
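For reference, the two settings mentioned in points 6 and 7 would look like this in `netdata.conf` (the values shown are the defaults quoted above):

```
[global]
    cleanup obsolete charts after seconds = 3600
    delete obsolete charts files = yes
```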
diff --git a/collectors/charts.d.plugin/README.md b/collectors/charts.d.plugin/README.md
index 3d318f26cf..06fbd46b8b 100644
--- a/collectors/charts.d.plugin/README.md
+++ b/collectors/charts.d.plugin/README.md
@@ -1,10 +1,10 @@
# charts.d.plugin
-`charts.d.plugin` is a netdata external plugin. It is an **orchestrator** for data collection modules written in `BASH` v4+.
+`charts.d.plugin` is a Netdata external plugin. It is an **orchestrator** for data collection modules written in `BASH` v4+.
1. It runs as an independent process (`ps fax` shows it)
-2. It is started and stopped automatically by netdata
-3. It communicates with netdata via a unidirectional pipe (sending data to the netdata daemon)
+2. It is started and stopped automatically by Netdata
+3. It communicates with Netdata via a unidirectional pipe (sending data to the `netdata` daemon)
4. Supports any number of data collection **modules**
`charts.d.plugin` has been designed so that the actual script that will do data collection will be permanently in
@@ -43,7 +43,7 @@ For a module called `X`, the following criteria must be met:
2. If the module needs a configuration, it should be called `X.conf` and placed in `/etc/netdata/charts.d`.
The configuration file `X.conf` is also a BASH script itself.
- To edit the default files supplied by netdata run `/etc/netdata/edit-config charts.d/X.conf`,
+ To edit the default files supplied by Netdata, run `/etc/netdata/edit-config charts.d/X.conf`,
where `X` is the name of the module.
3. All functions and global variables defined in the script and its configuration, must begin with `X_`.
@@ -54,11 +54,11 @@ For a module called `X`, the following criteria must be met:
(following the standard Linux command line return codes: 0 = OK, the collector can operate and 1 = FAILED,
the collector cannot be used).
- - `X_create()` - creates the netdata charts, following the standard netdata plugin guides as described in
+ - `X_create()` - creates the Netdata charts, following the standard Netdata plugin guides as described in
**[External Plugins](../plugins.d/)** (commands `CHART` and `DIMENSION`).
The return value does matter: 0 = OK, 1 = FAILED.
- - `X_update()` - collects the values for the defined charts, following the standard netdata plugin guides
+ - `X_update()` - collects the values for the defined charts, following the standard Netdata plugin guides
as described in **[External Plugins](../plugins.d/)** (commands `BEGIN`, `SET`, `END`).
The return value also matters: 0 = OK, 1 = FAILED.
@@ -67,7 +67,7 @@ For a module called `X`, the following criteria must be met:
The module script may use more functions or variables. But all of them must begin with `X_`.
-The standard netdata plugin variables are also available (check **[External Plugins](../plugins.d/)**).
+The standard Netdata plugin variables are also available (check **[External Plugins](../plugins.d/)**).
### X_check()
@@ -80,7 +80,7 @@ connect to a local mysql database to find out if it can read the values it needs
### X_create()
-The purpose of the BASH function `X_create()` is to create the charts and dimensions using the standard netdata
+The purpose of the BASH function `X_create()` is to create the charts and dimensions using the standard Netdata
plugin guides (**[External Plugins](../plugins.d/)**).
`X_create()` will be called just once and only after `X_check()` was successful.
@@ -90,8 +90,8 @@ A non-zero return value will disable the collector.
### X_update()
-`X_update()` will be called repeatedly every `X_update_every` seconds, to collect new values and send them to netdata,
-following the netdata plugin guides (**[External Plugins](../plugins.d/)**).
+`X_update()` will be called repeatedly every `X_update_every` seconds, to collect new values and send them to Netdata,
+following the Netdata plugin guides (**[External Plugins](../plugins.d/)**).
The function will be called with one parameter: microseconds since the last time it was run. This value should be
appended to the `BEGIN` statement of every chart updated by the collector script.
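Putting the pieces above together, a minimal sketch of a hypothetical module called `example` might look like this. The chart and dimension names are illustrative; the `CHART`, `DIMENSION`, `BEGIN`, `SET` and `END` commands are the ones described in **[External Plugins](../plugins.d/)**.

```sh
# sketch of a charts.d module named "example" - everything is prefixed with example_
example_update_every=1

example_check() {
    # return 0 if the collector can run, 1 to disable it
    return 0
}

example_create() {
    # create the chart and its dimension
    cat <<EOF
CHART example.random '' "A random number" "number" random example.random line
DIMENSION random '' absolute 1 1
EOF
    return 0
}

example_update() {
    # $1 = microseconds since the last run, appended to the BEGIN statement
    cat <<EOF
BEGIN example.random $1
SET random = $RANDOM
END
EOF
    return 0
}
```

Such a module would typically be saved following the `X` naming convention above (for this sketch, as `example.chart.sh` in the charts.d directory), with an optional `example.conf` alongside it.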
@@ -167,7 +167,7 @@ Keep in mind that if your configs are not in `/etc/netdata`, you should do the f
export NETDATA_USER_CONFIG_DIR="/path/to/etc/netdata"
```
-Also, remember that netdata runs `chart.d.plugin` as user `netdata` (or any other user netdata is configured to run as).