-rw-r--r--  backends/README.md | 4
-rw-r--r--  backends/prometheus/README.md | 2
-rw-r--r--  build_external/README.md | 19
-rw-r--r--  collectors/proc.plugin/README.md | 6
-rw-r--r--  daemon/config/README.md | 2
-rw-r--r--  database/engine/README.md | 26
-rw-r--r--  docs/guides/longer-metrics-storage.md | 2
-rw-r--r--  docs/guides/monitor-hadoop-cluster.md | 2
-rw-r--r--  docs/guides/step-by-step/step-09.md | 2
-rw-r--r--  docs/guides/using-host-labels.md | 32
-rw-r--r--  docs/netdata-security.md | 10
-rw-r--r--  exporting/README.md | 4
-rw-r--r--  exporting/prometheus/README.md | 2
-rw-r--r--  packaging/DISTRIBUTIONS.md | 4
-rw-r--r--  packaging/installer/methods/cloud-providers.md | 2
-rw-r--r--  packaging/installer/methods/macos.md | 4
-rw-r--r--  streaming/README.md | 265
-rw-r--r--  web/server/README.md | 17
18 files changed, 210 insertions(+), 195 deletions(-)
diff --git a/backends/README.md b/backends/README.md
index 4c8f897558..b932d1ceac 100644
--- a/backends/README.md
+++ b/backends/README.md
@@ -179,8 +179,8 @@ from your Netdata):
of times within each pattern). The patterns are checked against the hostname (the localhost is always checked as
`localhost`), allowing us to filter which hosts will be sent to the backend when this Netdata is a central Netdata
aggregating multiple hosts. A pattern starting with `!` gives a negative match. So to match all hosts named `*db*`
- except hosts containing `*slave*`, use `!*slave* *db*` (so, the order is important: the first pattern matching the
- hostname will be used - positive or negative).
+ except hosts containing `*child*`, use `!*child* *db*` (so, the order is important: the first pattern
+ matching the hostname will be used - positive or negative).
- `send charts matching = *` includes one or more space separated patterns, using `*` as wildcard (any number of times
within each pattern). The patterns are checked against both chart id and chart name. A pattern starting with `!`
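The first-match-wins semantics of `send hosts matching` described in the hunk above can be sketched in a few lines. This is an illustrative helper only (the function name is hypothetical, and Netdata's actual simple-pattern implementation is in C), using Python's `fnmatch` for the `*` wildcard:

```python
from fnmatch import fnmatch

def host_matches(hostname, patterns):
    """First-match-wins check over space-separated simple patterns.

    A pattern starting with '!' is a negative match: if it is the first
    pattern to match the hostname, the host is excluded. Order matters.
    """
    for pattern in patterns.split():
        negative = pattern.startswith("!")
        if negative:
            pattern = pattern[1:]
        if fnmatch(hostname, pattern):
            return not negative
    return False  # no pattern matched at all

# '!*child* *db*' matches hosts named *db* except those containing 'child'
print(host_matches("prod-db-01", "!*child* *db*"))   # True
print(host_matches("child-db-01", "!*child* *db*"))  # False
```

Putting the negative pattern first is essential: with `*db* !*child*`, a host named `child-db-01` would already match `*db*` and be included.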
diff --git a/backends/prometheus/README.md b/backends/prometheus/README.md
index c053ae719c..3b4719b970 100644
--- a/backends/prometheus/README.md
+++ b/backends/prometheus/README.md
@@ -356,7 +356,7 @@ For more information check prometheus documentation.
### Streaming data from upstream hosts
-The `format=prometheus` parameter only exports the host's Netdata metrics. If you are using the master/slave
+The `format=prometheus` parameter only exports the host's Netdata metrics. If you are using the parent-child
functionality of Netdata this ignores any upstream hosts - so you should consider using the below in your
**prometheus.yml**:
diff --git a/build_external/README.md b/build_external/README.md
index d04851e28e..8305836b02 100644
--- a/build_external/README.md
+++ b/build_external/README.md
@@ -10,10 +10,11 @@ custom_edit_url: https://github.com/netdata/netdata/edit/master/build_external/R
This wraps the build-system in Docker so that the host system and the target system are
decoupled. This allows:
-* Cross-compilation (e.g. linux development from macOS)
-* Cross-distro (e.g. using CentOS user-land while developing on Debian)
-* Multi-host scenarios (e.g. master/slave configurations)
-* Bleeding-edge sceneraios (e.g. using the ACLK (**currently for internal-use only**))
+
+- Cross-compilation (e.g. linux development from macOS)
+- Cross-distro (e.g. using CentOS user-land while developing on Debian)
+- Multi-host scenarios (e.g. parent-child configurations)
+- Bleeding-edge scenarios (e.g. using the ACLK (**currently for internal-use only**))
The advantage of these scenarios is that they allow **reproducible** builds and testing
for developers. This is the first iteration of the build-system to allow the team to use
@@ -97,19 +98,19 @@ Note: it is possible to run multiple copies of the agent using the `--scale` opt
Distro=debian Version=10 docker-compose -f projects/only-agent/docker-compose.yml up --scale agent=3
```
-3. A simple master-slave scenario
+3. A simple parent-child scenario
```bash
-# Need to call clean-install on the configs used in the master/slave containers
-docker-compose -f master-slaves/docker-compose.yml up --scale agent_slave1=2
+# Need to call clean-install on the configs used in the parent-child containers
+docker-compose -f parent-child/docker-compose.yml up --scale agent_child1=2
```
Note: this is not production ready yet, but it is left in so that we can see how it behaves
and improve it. Currently it produces the following problems:
* Only the base-configuration in the compose without scaling works.
* The containers are hard-coded in the compose.
- * There is no way to separate the agent configurations, so running multiple agent slaves
- wth the same GUID kills the master which exits with a fatal condition.
+ * There is no way to separate the agent configurations, so running multiple agent child nodes with the same GUID kills
+ the parent which exits with a fatal condition.
4. The ACLK
diff --git a/collectors/proc.plugin/README.md b/collectors/proc.plugin/README.md
index 1c295f7af3..fed7a76c0f 100644
--- a/collectors/proc.plugin/README.md
+++ b/collectors/proc.plugin/README.md
@@ -86,8 +86,8 @@ By default, Netdata will enable monitoring metrics only when they are not zero.
Netdata categorizes all block devices in 3 categories:
-1. physical disks (i.e. block devices that does not have slaves and are not partitions)
-2. virtual disks (i.e. block devices that have slaves - like RAID devices)
+1. physical disks (i.e. block devices that do not have child devices and are not partitions)
+2. virtual disks (i.e. block devices that have child devices - like RAID devices)
3. disk partitions (i.e. block devices that are part of a physical disk)
Performance metrics are enabled by default for all disk devices, except partitions and not-mounted virtual disks. Of course, you can enable/disable monitoring any block device by editing the Netdata configuration file.
@@ -325,7 +325,7 @@ By default Netdata will enable monitoring metrics only when they are not zero. I
There are several alarms defined in `health.d/net.conf`.
-The tricky ones are `inbound packets dropped` and `inbound packets dropped ratio`. They have quite a strict policy so that they warn users about possible issues. These alarms can be annoying for some network configurations. It is especially true for some bonding configurations if an interface is a slave or a bonding interface itself. If it is expected to have a certain number of drops on an interface for a certain network configuration, a separate alarm with different triggering thresholds can be created or the existing one can be disabled for this specific interface. It can be done with the help of the [families](/health/REFERENCE.md#alarm-line-families) line in the alarm configuration. For example, if you want to disable the `inbound packets dropped` alarm for `eth0`, set `families: !eth0 *` in the alarm definition for `template: inbound_packets_dropped`.
+The tricky ones are `inbound packets dropped` and `inbound packets dropped ratio`. They have quite a strict policy so that they warn users about possible issues. These alarms can be annoying for some network configurations. It is especially true for some bonding configurations if an interface is a child or a bonding interface itself. If it is expected to have a certain number of drops on an interface for a certain network configuration, a separate alarm with different triggering thresholds can be created or the existing one can be disabled for this specific interface. It can be done with the help of the [families](/health/REFERENCE.md#alarm-line-families) line in the alarm configuration. For example, if you want to disable the `inbound packets dropped` alarm for `eth0`, set `families: !eth0 *` in the alarm definition for `template: inbound_packets_dropped`.
#### configuration
diff --git a/daemon/config/README.md b/daemon/config/README.md
index de8ed684f9..08580f3478 100644
--- a/daemon/config/README.md
+++ b/daemon/config/README.md
@@ -82,7 +82,7 @@ Please note that your data history will be lost if you have modified `history` p
| pthread stack size|auto-detected||||
| cleanup obsolete charts after seconds|`3600`|See [monitoring ephemeral containers](/collectors/cgroups.plugin/README.md#monitoring-ephemeral-containers), also sets the timeout for cleaning up obsolete dimensions|||
| gap when lost iterations above|`1`||||
-| cleanup orphan hosts after seconds|`3600`|How long to wait until automatically removing from the DB a remote Netdata host (slave) that is no longer sending data.|||
+| cleanup orphan hosts after seconds|`3600`|How long to wait until automatically removing from the DB a remote Netdata host (child) that is no longer sending data.|||
| delete obsolete charts files|`yes`|See [monitoring ephemeral containers](/collectors/cgroups.plugin/README.md#monitoring-ephemeral-containers), also affects the deletion of files for obsolete dimensions|||
| delete orphan hosts files|`yes`|Set to `no` to disable non-responsive host removal.|||
| enable zero metrics|`no`|Set to `yes` to show charts when all their metrics are zero.|||
diff --git a/database/engine/README.md b/database/engine/README.md
index 18f5e9c88f..3a74b58541 100644
--- a/database/engine/README.md
+++ b/database/engine/README.md
@@ -46,18 +46,18 @@ The `dbengine disk space` option determines the amount of disk space in **MiB**
metric values and all related metadata describing them.
Use the [**database engine calculator**](https://learn.netdata.cloud/docs/agent/database/calculator) to correctly set
-`dbengine disk space` based on your needs. The calculator gives an accurate estimate based on how many slave nodes you
-have, how many metrics your Agent collects, and more.
+`dbengine disk space` based on your needs. The calculator gives an accurate estimate based on how many child nodes
+you have, how many metrics your Agent collects, and more.
### Streaming metrics to the database engine
-When streaming metrics, the Agent on the master node creates one instance of the database engine for itself, and another
-instance for every slave node it receives metrics from. If you have four streaming nodes, you will have five instances
-in total (`1 master + 4 slaves = 5 instances`).
+When streaming metrics, the Agent on the parent node creates one instance of the database engine for itself, and another
+instance for every child node it receives metrics from. If you have four streaming nodes, you will have five instances
+in total (`1 parent + 4 child nodes = 5 instances`).
The Agent allocates resources for each instance separately using the `dbengine disk space` setting. If `dbengine disk
space` is set to the default `256`, each instance is given 256 MiB in disk space, which means the total disk space
-required to store all instances is, roughly, `256 MiB * 1 master * 4 slaves = 1280 MiB`.
+required to store all instances is, roughly, `256 MiB * 1 parent * 4 child nodes = 1280 MiB`.
See the [database engine calculator](https://learn.netdata.cloud/docs/agent/database/calculator) to help you correctly
set `dbengine disk space` and understand the total disk space required based on your streaming setup.
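The per-instance arithmetic in the paragraph above can be written out as a quick back-of-the-envelope helper. This is illustrative only (the function name is hypothetical, and the linked database engine calculator accounts for more variables):

```python
def dbengine_total_disk_mib(disk_space_mib, child_nodes):
    """Rough total disk use for streaming with dbengine.

    One database engine instance for the parent itself plus one per
    streaming child, each allotted `dbengine disk space` MiB.
    """
    instances = 1 + child_nodes  # 1 parent + N children
    return disk_space_mib * instances

print(dbengine_total_disk_mib(256, 4))  # 1280 MiB for 1 parent + 4 children
```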
@@ -90,14 +90,14 @@ validate the memory requirements for your particular system(s) and configuration
### File descriptor requirements
-The Database Engine may keep a **significant** amount of files open per instance (e.g. per streaming slave or master
-server). When configuring your system you should make sure there are at least 50 file descriptors available per
+The Database Engine may keep a **significant** amount of files open per instance (e.g. per streaming child or
+parent server). When configuring your system you should make sure there are at least 50 file descriptors available per
`dbengine` instance.
Netdata allocates 25% of the available file descriptors to its Database Engine instances. This means that only 25% of
the file descriptors that are available to the Netdata service are accessible by dbengine instances. You should take
that into account when configuring your service or system-wide file descriptor limits. You can roughly estimate that the
-Netdata service needs 2048 file descriptors for every 10 streaming slave hosts when streaming is configured to use
+Netdata service needs 2048 file descriptors for every 10 streaming child hosts when streaming is configured to use
`memory mode = dbengine`.
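The descriptor guidance in the paragraph above can be turned into a rough sizing sketch. This is a hypothetical helper for estimation only; real requirements vary with plugins and configuration:

```python
import math

def netdata_fd_estimate(child_hosts):
    """Rough file-descriptor sizing from the guidance above:
    about 2048 descriptors per 10 streaming children with dbengine,
    of which only 25% are made available to dbengine instances.
    Returns (service_fds, dbengine_share)."""
    service_fds = 2048 * math.ceil(child_hosts / 10)
    dbengine_share = service_fds // 4  # the 25% dbengine can use
    return service_fds, dbengine_share

print(netdata_fd_estimate(10))  # (2048, 512)
print(netdata_fd_estimate(25))  # (6144, 1536)
```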
If for example one wants to allocate 65536 file descriptors to the Netdata service on a systemd system one needs to
@@ -173,10 +173,10 @@ traffic so as to create the minimum possible interference with other application
## Evaluation
-We have evaluated the performance of the `dbengine` API that the netdata daemon uses internally. This is **not** the
-web API of netdata. Our benchmarks ran on a **single** `dbengine` instance, multiple of which can be running in a
-netdata master server. We used a server with an AMD Ryzen Threadripper 2950X 16-Core Processor and 2 disk drives, a
-Seagate Constellation ES.3 2TB magnetic HDD and a SAMSUNG MZQLB960HAJR-00007 960GB NAND Flash SSD.
+We have evaluated the performance of the `dbengine` API that the netdata daemon uses internally. This is **not** the web
+API of netdata. Our benchmarks ran on a **single** `dbengine` instance, multiple of which can be running in a Netdata
+parent node. We used a server with an AMD Ryzen Threadripper 2950X 16-Core Processor and 2 disk drives, a Seagate
+Constellation ES.3 2TB magnetic HDD and a SAMSUNG MZQLB960HAJR-00007 960GB NAND Flash SSD.
For our workload, we defined 32 charts with 128 metrics each, giving us a total of 4096 metrics. We defined 1 worker
thread per chart (32 threads) that generates new data points with a data generation interval of 1 second. The time axis
diff --git a/docs/guides/longer-metrics-storage.md b/docs/guides/longer-metrics-storage.md
index 5c542f427f..328b724019 100644
--- a/docs/guides/longer-metrics-storage.md
+++ b/docs/guides/longer-metrics-storage.md
@@ -57,7 +57,7 @@ metrics. The default settings retain about two days' worth of metrics on a system
[**See our database engine calculator**](https://learn.netdata.cloud/docs/agent/database/calculator) to help you
correctly set `dbengine disk space` based on your needs. The calculator gives an accurate estimate based on how many
-slave nodes you have, how many metrics your Agent collects, and more.
+child nodes you have, how many metrics your Agent collects, and more.
With the database engine active, you can back up your `/var/cache/netdata/dbengine/` folder to another location for
redundancy.
diff --git a/docs/guides/monitor-hadoop-cluster.md b/docs/guides/monitor-hadoop-cluster.md
index 17901f2815..1ca2c03e11 100644
--- a/docs/guides/monitor-hadoop-cluster.md
+++ b/docs/guides/monitor-hadoop-cluster.md
@@ -96,7 +96,7 @@ al-9866",
If Netdata can't access the `/jmx` endpoint for either a NameNode or DataNode, it will not be able to auto-detect and
collect metrics from your HDFS implementation.
-Zookeeper auto-detection relies on an accessible client port and a whitelisted `mntr` command. For more details on
+Zookeeper auto-detection relies on an accessible client port and an allow-listed `mntr` command. For more details on
`mntr`, see Zookeeper's documentation on [cluster
options](https://zookeeper.apache.org/doc/current/zookeeperAdmin.html#sc_clusterOptions) and [Zookeeper
commands](https://zookeeper.apache.org/doc/current/zookeeperAdmin.html#sc_zkCommands).
diff --git a/docs/guides/step-by-step/step-09.md b/docs/guides/step-by-step/step-09.md
index da09db0588..7a478ee4ee 100644
--- a/docs/guides/step-by-step/step-09.md
+++ b/docs/guides/step-by-step/step-09.md
@@ -53,7 +53,7 @@ every second.
[**See our database engine calculator**](https://learn.netdata.cloud/docs/agent/database/calculator) to help you
correctly set `dbengine disk space` based on your needs. The calculator gives an accurate estimate based on how many
-slave nodes you have, how many metrics your Agent collects, and more.
+child nodes you have, how many metrics your Agent collects, and more.
```conf
[global]
diff --git a/docs/guides/using-host-labels.md b/docs/guides/using-host-labels.md
index f4de3debb8..9d235961ab 100644
--- a/docs/guides/using-host-labels.md
+++ b/docs/guides/using-host-labels.md
@@ -7,7 +7,7 @@ custom_edit_url: https://github.com/netdata/netdata/edit/master/docs/guides/usin
When you use Netdata to monitor and troubleshoot an entire infrastructure, whether that's dozens or hundreds of systems,
you need sophisticated ways of keeping everything organized. You need alarms that adapt to the system's purpose, or
-whether the `master` or `slave` in a streaming setup. You need properly-labeled metrics archiving so you can sort,
+whether the parent or child in a streaming setup. You need properly-labeled metrics archiving so you can sort,
correlate, and mash-up your data to your heart's content. You need to keep tabs on ephemeral Docker containers in a
Kubernetes cluster.
@@ -50,7 +50,7 @@ read the status of your agent. For example, from a VPS system running Debian 10:
{
...
"host_labels": {
- "_is_master": "false",
+ "_is_parent": "false",
"_virt_detection": "systemd-detect-virt",
"_container_detection": "none",
"_container": "unknown",
@@ -73,7 +73,7 @@ You may have noticed a handful of labels that begin with an underscore (`_`). Th
When Netdata starts, it captures relevant information about the system and converts them into automatically-generated
host labels. You can use these to logically organize your systems via health entities, exporting metrics,
-streaming/master status, and more.
+parent-child status, and more.
They capture the following:
@@ -82,29 +82,29 @@ They capture the following:
- CPU architecture, system cores, CPU frequency, RAM, and disk space
- Whether Netdata is running inside of a container, and if so, the OS and hardware details about the container's host
- What virtualization layer the system runs on top of, if any
-- Whether the system is a streaming master or slave
+- Whether the system is a streaming parent or child
If you want to organize your systems without manually creating host tags, try the automatic labels in some of the
features below.
## Host labels in streaming
-You may have noticed the `_is_master` and `_is_slave` automatic labels from above. Host labels are also now streamed
-from a slave to its master agent, which concentrates an entire infrastructure's OS, hardware, container, and
-virtualization information in one place: the master.
+You may have noticed the `_is_parent` and `_is_child` automatic labels from above. Host labels are also now
+streamed from a child to its parent node, which concentrates an entire infrastructure's OS, hardware, container,
+and virtualization information in one place: the parent.
-Now, if you'd like to remind yourself of how much RAM a certain slave system has, you can simply access
-`http://localhost:19999/host/SLAVE_NAME/api/v1/info` and reference the automatically-generated host labels from the
-slave system. It's a vastly simplified way of accessing critical information about your infrastructure.
+Now, if you'd like to remind yourself of how much RAM a certain child node has, you can access
+`http://localhost:19999/host/CHILD_HOSTNAME/api/v1/info` and reference the automatically-generated host labels from the
+child system. It's a vastly simplified way of accessing critical information about your infrastructure.
-> ⚠️ Because automatic labels for slave nodes are accessible via API calls, and contain sensitive information like
+> ⚠️ Because automatic labels for child nodes are accessible via API calls, and contain sensitive information like
> kernel and operating system versions, you should secure streaming connections with SSL. See the [streaming
> documentation](/streaming/README.md#securing-streaming-communications) for details. You may also want to use
> [access lists](/web/server/README.md#access-lists) or [expose the API only to LAN/localhost
> connections](/docs/netdata-security.md#expose-netdata-only-in-a-private-lan).
-You can also use `_is_master`, `_is_slave`, and any other host labels in both health entities and metrics exporting.
-Speaking of which...
+You can also use `_is_parent`, `_is_child`, and any other host labels in both health entities and metrics
+exporting. Speaking of which...
## Host labels in health entities
@@ -138,11 +138,11 @@ Or, by using one of the automatic labels, for only webserver systems running a s
host labels: _os_name = Debian*
```
-In a streaming configuration where a master agent is triggering alarms for its slaves, you could create health entities
-that apply only to slaves:
+In a streaming configuration where a parent node is triggering alarms for its child nodes, you could create health
+entities that apply only to child nodes:
```yaml
- host labels: _is_slave = true
+ host labels: _is_child = true
```
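The `host labels` filter shown above can be thought of as a simple-pattern match of a label value. A minimal sketch of that idea (hypothetical helper, not Netdata's health engine; it supports the `*` wildcard seen in `_os_name = Debian*`):

```python
from fnmatch import fnmatch

def health_entity_applies(host_labels, condition):
    """Check a 'key = pattern' host-labels condition against a node's
    label dictionary, with '*' wildcard support in the pattern."""
    key, _, pattern = (part.strip() for part in condition.partition("="))
    return fnmatch(host_labels.get(key, ""), pattern)

labels = {"_is_child": "true", "_os_name": "Debian GNU/Linux"}
print(health_entity_applies(labels, "_is_child = true"))    # True
print(health_entity_applies(labels, "_os_name = Debian*"))  # True
```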
Or when ephemeral Docker nodes are involved:
diff --git a/docs/netdata-security.md b/docs/netdata-security.md
index 36ee6d5e9d..97b9bae939 100644
--- a/docs/netdata-security.md
+++ b/docs/netdata-security.md
@@ -40,7 +40,10 @@ There are a few cases however that raw source data are only exposed to processes
So, Netdata **plugins**, even those running with escalated capabilities or privileges, perform a **hard coded data collection job**. They do not accept commands from Netdata. The communication is strictly **unidirectional**: from the plugin towards the Netdata daemon. The original application data collected by each plugin do not leave the process they are collected, are not saved and are not transferred to the Netdata daemon. The communication from the plugins to the Netdata daemon includes only chart metadata and processed metric values.
-Netdata slaves streaming metrics to upstream Netdata servers, use exactly the same protocol local plugins use. The raw data collected by the plugins of slave Netdata servers are **never leaving the host they are collected**. The only data appearing on the wire are chart metadata and metric values. This communication is also **unidirectional**: slave Netdata servers never accept commands from master Netdata servers.
+Child nodes use the same protocol when streaming metrics to their parent nodes. The raw data collected by the plugins of
+child Netdata servers **never leave the host they are collected on**. The only data appearing on the wire are chart
+metadata and metric values. This communication is also **unidirectional**: child nodes never accept commands from
+parent Netdata servers.
## Netdata is read-only
@@ -190,7 +193,10 @@ Of course, there are many more methods you could use to protect Netdata:
- If you are always under a static IP, you can use the script given above to allow direct access to your Netdata servers without authentication, from all your static IPs.
-- install all your Netdata in **headless data collector** mode, forwarding all metrics in real-time to a master Netdata server, which will be protected with authentication using an nginx server running locally at the master Netdata server. This requires more resources (you will need a bigger master Netdata server), but does not require any firewall changes, since all the slave Netdata servers will not be listening for incoming connections.
+- install all your Netdata in **headless data collector** mode, forwarding all metrics in real-time to a parent
+ Netdata server, which will be protected with authentication using an nginx server running locally at the parent
+ Netdata server. This requires more resources (you will need a bigger parent Netdata server), but does not require
+ any firewall changes, since all the child Netdata servers will not be listening for incoming connections.
## Anonymous Statistics
diff --git a/exporting/README.md b/exporting/README.md
index 25e47db22b..a537405bf3 100644
--- a/exporting/README.md
+++ b/exporting/README.md
@@ -233,8 +233,8 @@ Options:
of times within each pattern). The patterns are checked against the hostname (the localhost is always checked as
`localhost`), allowing us to filter which hosts will be sent to the external database when this Netdata is a central
Netdata aggregating multiple hosts. A pattern starting with `!` gives a negative match. So to match all hosts named
- `*db*` except hosts containing `*slave*`, use `!*slave* *db*` (so, the order is important: the first pattern
- matching the hostname will be used - positive or negative).
+ `*db*` except hosts containing `*child*`, use `!*child* *db*` (so, the order is important: the first
+ pattern matching the hostname will be used - positive or negative).
- `send charts matching = *` includes one or more space separated patterns, using `*` as wildcard (any number of times
within each pattern). The patterns are checked against both chart id and chart name. A pattern starting with `!`
diff --git a/exporting/prometheus/README.md b/exporting/prometheus/README.md
index 9e17d43f46..d718a366eb 100644
--- a/exporting/prometheus/README.md
+++ b/exporting/prometheus/README.md
@@ -357,7 +357,7 @@ For more information check Prometheus documentation.
### Streaming data from upstream hosts
-The `format=prometheus` parameter only exports the host's Netdata metrics. If you are using the master/slave
+The `format=prometheus` parameter only exports the host's Netdata metrics. If you are using the parent-child
functionality of Netdata this ignores any upstream hosts - so you should consider using the below in your
**prometheus.yml**:
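The referenced `prometheus.yml` snippet falls outside this hunk. Based on Netdata's Prometheus documentation, the idea it points to is a scrape job using `format=prometheus_all_hosts` with `honor_labels: true`, so metrics streamed from child nodes keep their own hostnames; the target host and port below are placeholders:

```yaml
scrape_configs:
  - job_name: 'netdata_parent'
    metrics_path: '/api/v1/allmetrics'
    params:
      # export metrics for the parent and all streamed child nodes
      format: ['prometheus_all_hosts']
    honor_labels: true
    static_configs:
      - targets: ['parent-host:19999']
```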
diff --git a/packaging/DISTRIBUTIONS.md b/packaging/DISTRIBUTIONS.md
index 9fab0c6d26..b05e1ad66c 100644
--- a/packaging/DISTRIBUTIONS.md
+++ b/packaging/DISTRIBUTIONS.md
@@ -125,7 +125,9 @@ This is the brand new database engine capability of netdata. It is a mandatory f
#### Encryption Support (HTTPS)
-This is Netdata's TLS capability that incorporates encryption on the web server and the APIs between master and slaves. Also a mandatory facility for Netdata, but remains optional for users who are limited or not interested in tight security
+This is Netdata's TLS capability that incorporates encryption on the web server and the APIs between parent and child
+nodes. It is also a mandatory facility for Netdata, but remains optional for users who are limited or not interested in
+tight security.
|make/make install|netdata-installer.sh|kickstart.sh|kickstart-static64.sh|Docker image|RPM packaging|DEB packaging|
|:---------------:|:------------------:|:----------:|:-------------------:|:----------:|:-----------:|:-----------:|
diff --git a/packaging/installer/methods/cloud-providers.md b/packaging/installer/methods/cloud-providers.md
index 90989548b3..a67bc56d58 100644
--- a/packaging/installer/methods/cloud-providers.md
+++ b/packaging/installer/methods/cloud-providers.md
@@ -9,7 +9,7 @@ custom_edit_url: https://github.com/netdata/netdata/edit/master/packaging/instal
Netdata is fully compatible with popular cloud providers like Google Cloud Platform (GCP), Amazon Web Services (AWS),
Azure, and others. You can install Netdata on cloud instances to monitor the apps/services running there, or use
-multiple instances in a [master/slave streaming](../../../streaming/README.md) configuration.
+multiple instances in a [parent-child streaming](/streaming/README.md) configuration.
In some cases, using Netdata on these cloud providers requires unique installation or configuration steps. This page
aims to document some of those steps for popular cloud providers.
diff --git a/packaging/installer/methods/macos.md b/packaging/installer/methods/macos.md
index ae3efaad93..1225dc38b1 100644
--- a/packaging/installer/methods/macos.md
+++ b/packaging/installer/methods/macos.md
@@ -9,8 +9,8 @@ custom_edit_url: https://github.com/netdata/netdata/edit/master/packaging/instal
Netdata works on macOS, albeit with some limitations. The number of charts displaying system metrics is limited, but you
can use any of Netdata's [external plugins](../../../collectors/plugins.d/README.md) to monitor any services you might
-have installed on your macOS system. You could also use a macOS system as the master node in a [streaming
-configuration](../../../streaming/README.md).
+have installed on your macOS system. You could also use a macOS system as the parent node in a [streaming
+configuration](/streaming/README.md).
We recommend installing Netdata with the community-created and -maintained [**Homebrew
package**](#install-netdata-with-the-homebrew-package).
diff --git a/streaming/README.md b/streaming/README.md
index 9ae5dacb5f..d768602d84 100644
--- a/streaming/README.md
+++ b/streaming/README.md
@@ -1,64 +1,70 @@
<!--
----
title: "Streaming and replication"
+description: "Replicate and mirror Netdata's metrics through real-time streaming from child to parent nodes. Then combine, correlate, and export."
custom_edit_url: https://github.com/netdata/netdata/edit/master/streaming/README.md
----
-->
# Streaming and replication
Each Netdata is able to replicate/mirror its database to another Netdata, by streaming collected
metrics, in real-time to it. This is quite different to [data archiving to third party time-series
-databases](/backends/README.md).
+databases](/exporting/README.md).
-When Netdata streams metrics to another Netdata, the receiving one is able to perform everything a Netdata instance is capable of:
+When Netdata streams metrics to another Netdata, the receiving one is able to perform everything a Netdata instance is
+capable of:
-- visualize them with a dashboard
-- run health checks that trigger alarms and send alarm notifications
-- archive metrics to a backend time-series database
+- Visualize metrics with a dashboard
+- Run health checks that trigger alarms and send alarm notifications
+- Export metrics to an external time-series database
+
+The nodes that send metrics are called **child** nodes, and the nodes that receive metrics are called **parent** nodes.
+There are also **proxies**, which collect metrics from a child and send them to a parent.
## Supported configurations
### Netdata without a database or web API (headless collector)
-Local Netdata (`slave`), **without any database or alarms**, collects metrics and sends them to
-another Netdata (`master`).
+Local Netdata (child), **without any database or alarms**, collects metrics and sends them to another Netdata
+(parent).
-The node menu shows a list of all "databases streamed to" the master. Clicking one of those links allows the user to view the full dashboard of the `slave` Netdata. The URL has the form `http://master-host:master-port/host/slave-host/`.
+The node menu shows a list of all "databases streamed to" the parent. Clicking one of those links allows the user to
+view the full dashboard of the child node. The URL has the form
+`http://parent-host:parent-port/host/child-host/`.
-Alarms for the `slave` are served by the `master`.
+Alarms for the child are served by the parent.
-In this mode the `slave` is just a plain data collector. It spawns all external plugins, but instead
-of maintaining a local database and accepting dashboard requests, it streams all metrics to the
-`master`. The memory footprint is reduced significantly, to between 6 MiB and 40 MiB, depending on the enabled plugins. To reduce the memory usage as much as possible, refer to [running Netdata in embedded devices](/docs/Performance.md#running-netdata-in-embedded-devices).
+In this mode the child is just a plain data collector. It spawns all external plugins, but instead of maintaining a
+local database and accepting dashboard requests, it streams all metrics to the parent. The memory footprint is reduced
+significantly, to between 6 MiB and 40 MiB, depending on the enabled plugins. To reduce the memory usage as much as
+possible, refer to [running Netdata in embedded devices](/docs/Performance.md#running-netdata-in-embedded-devices).
-The same `master` can collect data for any number of `slaves`.
+The same parent can collect data for any number of child nodes.
### Database Replication
-Local Netdata (`slave`), **with a local database (and possibly alarms)**, collects metrics and
-sends them to another Netdata (`master`).
+Local Netdata (child), **with a local database (and possibly alarms)**, collects metrics and
+sends them to another Netdata (parent).
-The user can use all the functions **at both** `http://slave-ip:slave-port/` and
-`http://master-host:master-port/host/slave-host/`.
+The user can use all the functions **at both** `http://child-ip:child-port/` and
+`http://parent-host:parent-port/host/child-host/`.
-The `slave` and the `master` may have different data retention policies for the same metrics.
+The child and the parent may have different data retention policies for the same metrics.
-Alarms for the `slave` are triggered by **both** the `slave` and the `master` (and actually
+Alarms for the child are triggered by **both** the child and the parent (and actually
each can have different alarms configurations or have alarms disabled).
-Take a note, that custom chart names, configured on the `slave`, should be in the form `type.name` to work correctly. The `master` will truncate the `type` part and substitute the original chart `type` to store the name in the database.
+Note that custom chart names configured on the child must be in the form `type.name` to work correctly. The parent
+truncates the `type` part and substitutes the chart's original `type` when storing the name in its database.
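
A replication setup differs from the headless one only in that the child keeps its local database and health engine enabled. A minimal sketch (all values are illustrative placeholders):

```
# On the child - netdata.conf (keep the local database and alarms)
[global]
    memory mode = save
[health]
    enabled = yes

# On the child - stream.conf (same sending setup as the headless case)
[stream]
    enabled = yes
    destination = PARENT_HOST:19999
    api key = 11111111-2222-3333-4444-555555555555
```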
### Netdata proxies
-Local Netdata (`slave`), with or without a database, collects metrics and sends them to another
-Netdata (`proxy`), which may or may not maintain a database, which forwards them to another
-Netdata (`master`).
+Local Netdata (child), with or without a database, collects metrics and sends them to another
+Netdata (**proxy**), which may or may not maintain a database, which forwards them to another
+Netdata (parent).
-Alarms for the slave can be triggered by any of the involved hosts that maintains a database.
+Alarms for the child can be triggered by any of the involved hosts that maintain a database.
Any number of daisy-chained Netdata servers are supported, each with or without a database and
-with or without alarms for the `slave` metrics.
+with or without alarms for the child metrics.
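
A proxy combines both roles: it accepts streams like a parent and forwards them like a child. A minimal `stream.conf` sketch for the proxy node might look like this (both API keys and the parent host are placeholders):

```
# On the proxy - stream.conf
# Receiving side: accept streams from children pushing this API key
[11111111-2222-3333-4444-555555555555]
    enabled = yes
    allow from = *

# Sending side: forward everything to the parent
[stream]
    enabled = yes
    destination = PARENT_HOST:19999
    api key = 66666666-7777-8888-9999-000000000000
```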
### mix and match with backends
@@ -96,7 +102,9 @@ monitoring (there cannot be health monitoring without a database).
`[web].mode = none` disables the API (Netdata will not listen to any ports).
This also disables the registry (there cannot be a registry without an API).
-`accept a streaming request every seconds` can be used to set a limit on how often a master Netdata server will accept streaming requests from the slaves. 0 sets no limit, 1 means maximum once every second. If this is set, you may see error log entries "... too busy to accept new streaming request. Will be allowed in X secs".
+`accept a streaming request every seconds` can be used to set a limit on how often a parent node will accept streaming
+requests from its child nodes. A value of 0 sets no limit; 1 allows at most one new request per second. If this is set,
+you may see error log entries "... too busy to accept new streaming request. Will be allowed in X secs".
```
[backend]
@@ -126,7 +134,7 @@ sending-receiving Netdata.
This is the section for the sending Netdata. On the receiving node, `[stream].enabled` can be `no`.
If it is `yes`, the receiving node will also stream the metrics to another node (i.e. it will be
-a `proxy`).
+a proxy).
```
[stream]
@@ -144,7 +152,7 @@ This is an overview of how these options can be combined:
| proxy with db|not `none`|not `none`|`yes`|possible|possible|yes|
| central netdata|not `none`|not `none`|`no`|possible|possible|yes|
-For the options to encrypt the data stream between the slave and the master, refer to [securing the communication](#securing-streaming-communications)
+For the options to encrypt the data stream between the child and the parent, refer to [securing the communication](#securing-streaming-communications).
##### options for the receiving node
@@ -166,7 +174,7 @@ all hosts pushed with this API key.
You can also add sections like this:
```sh
-# replace MACHINE_GUID with the slave /var/lib/netdata/registry/netdata.public.unique.id
+# replace MACHINE_GUID w