author    Joel Hans <joel@netdata.cloud>  2020-06-12 09:42:58 -0700
committer GitHub <noreply@github.com>     2020-06-12 09:42:58 -0700
commit    2c64795b7cc4e21a9382f863ae354b137b367b45 (patch)
tree      b714798283617f51e4e97a328beae1e9fbf46b0e /docs
parent    68f1888227bac1602d8777742995e0276bf05510 (diff)
Change streaming terminology to parent/child in docs (#9312)
* Initial pass through docs
* Dash instead of slash
* To parent/child
* Child nodes
* Change diagrams
* Allowlist
* Fixes for Andrew
* Remove from build_external
* Change in proc
Diffstat (limited to 'docs')
-rw-r--r--  docs/guides/longer-metrics-storage.md  |  2
-rw-r--r--  docs/guides/monitor-hadoop-cluster.md  |  2
-rw-r--r--  docs/guides/step-by-step/step-09.md    |  2
-rw-r--r--  docs/guides/using-host-labels.md       | 32
-rw-r--r--  docs/netdata-security.md               | 10
5 files changed, 27 insertions, 21 deletions
diff --git a/docs/guides/longer-metrics-storage.md b/docs/guides/longer-metrics-storage.md
index 5c542f427f..328b724019 100644
--- a/docs/guides/longer-metrics-storage.md
+++ b/docs/guides/longer-metrics-storage.md
@@ -57,7 +57,7 @@ metrics. The default settings retain about two days' worth of metrics on a system
[**See our database engine calculator**](https://learn.netdata.cloud/docs/agent/database/calculator) to help you
correctly set `dbengine disk space` based on your needs. The calculator gives an accurate estimate based on how many
-slave nodes you have, how many metrics your Agent collects, and more.
+child nodes you have, how many metrics your Agent collects, and more.
With the database engine active, you can back up your `/var/cache/netdata/dbengine/` folder to another location for
redundancy.
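For reference, the `dbengine` settings the calculator helps you size live under `[global]` in `netdata.conf`. A minimal sketch, with placeholder values rather than recommendations:

```conf
[global]
    # Use the database engine to store metrics on disk.
    memory mode = dbengine
    # RAM dedicated to caching recent metric values, in MiB (placeholder value).
    page cache size = 32
    # Disk space reserved for long-term metric storage, in MiB (placeholder value).
    dbengine disk space = 256
```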
diff --git a/docs/guides/monitor-hadoop-cluster.md b/docs/guides/monitor-hadoop-cluster.md
index 17901f2815..1ca2c03e11 100644
--- a/docs/guides/monitor-hadoop-cluster.md
+++ b/docs/guides/monitor-hadoop-cluster.md
@@ -96,7 +96,7 @@ al-9866",
If Netdata can't access the `/jmx` endpoint for either a NameNode or DataNode, it will not be able to auto-detect and
collect metrics from your HDFS implementation.
-Zookeeper auto-detection relies on an accessible client port and a whitelisted `mntr` command. For more details on
+Zookeeper auto-detection relies on an accessible client port and an allow-listed `mntr` command. For more details on
`mntr`, see Zookeeper's documentation on [cluster
options](https://zookeeper.apache.org/doc/current/zookeeperAdmin.html#sc_clusterOptions) and [Zookeeper
commands](https://zookeeper.apache.org/doc/current/zookeeperAdmin.html#sc_zkCommands).
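If the `mntr` command is not already allow-listed, Zookeeper's four-letter-word allow list in `zoo.cfg` controls which commands the server answers. A minimal sketch, using the property name from Zookeeper's administration docs:

```conf
# zoo.cfg: allow the mntr command that Netdata's auto-detection relies on
4lw.commands.whitelist=mntr
```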
diff --git a/docs/guides/step-by-step/step-09.md b/docs/guides/step-by-step/step-09.md
index da09db0588..7a478ee4ee 100644
--- a/docs/guides/step-by-step/step-09.md
+++ b/docs/guides/step-by-step/step-09.md
@@ -53,7 +53,7 @@ every second.
[**See our database engine calculator**](https://learn.netdata.cloud/docs/agent/database/calculator) to help you
correctly set `dbengine disk space` based on your needs. The calculator gives an accurate estimate based on how many
-slave nodes you have, how many metrics your Agent collects, and more.
+child nodes you have, how many metrics your Agent collects, and more.
```conf
[global]
diff --git a/docs/guides/using-host-labels.md b/docs/guides/using-host-labels.md
index f4de3debb8..9d235961ab 100644
--- a/docs/guides/using-host-labels.md
+++ b/docs/guides/using-host-labels.md
@@ -7,7 +7,7 @@ custom_edit_url: https://github.com/netdata/netdata/edit/master/docs/guides/usin
When you use Netdata to monitor and troubleshoot an entire infrastructure, whether that's dozens or hundreds of systems,
you need sophisticated ways of keeping everything organized. You need alarms that adapt to the system's purpose, or
-whether the `master` or `slave` in a streaming setup. You need properly-labeled metrics archiving so you can sort,
+whether it's the parent or child in a streaming setup. You need properly-labeled metrics archiving so you can sort,
correlate, and mash-up your data to your heart's content. You need to keep tabs on ephemeral Docker containers in a
Kubernetes cluster.
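Host labels are the foundation for all of this. For orientation before the details below, custom labels live in a `[host labels]` section of `netdata.conf`; a minimal sketch, with assumed key/value names:

```conf
[host labels]
    type = webserver
    location = us-seattle
    installed = 20200218
```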
@@ -50,7 +50,7 @@ read the status of your agent. For example, from a VPS system running Debian 10:
{
...
"host_labels": {
- "_is_master": "false",
+ "_is_parent": "false",
"_virt_detection": "systemd-detect-virt",
"_container_detection": "none",
"_container": "unknown",
@@ -73,7 +73,7 @@ You may have noticed a handful of labels that begin with an underscore (`_`). Th
When Netdata starts, it captures relevant information about the system and converts it into automatically-generated
host labels. You can use these to logically organize your systems via health entities, exporting metrics,
-streaming/master status, and more.
+parent-child status, and more.
They capture the following:
@@ -82,29 +82,29 @@ They capture the following:
- CPU architecture, system cores, CPU frequency, RAM, and disk space
- Whether Netdata is running inside of a container, and if so, the OS and hardware details about the container's host
- What virtualization layer the system runs on top of, if any
-- Whether the system is a streaming master or slave
+- Whether the system is a streaming parent or child
If you want to organize your systems without manually creating host tags, try the automatic labels in some of the
features below.
## Host labels in streaming
-You may have noticed the `_is_master` and `_is_slave` automatic labels from above. Host labels are also now streamed
-from a slave to its master agent, which concentrates an entire infrastructure's OS, hardware, container, and
-virtualization information in one place: the master.
+You may have noticed the `_is_parent` and `_is_child` automatic labels from above. Host labels are also now
+streamed from a child to its parent node, which concentrates an entire infrastructure's OS, hardware, container,
+and virtualization information in one place: the parent.
-Now, if you'd like to remind yourself of how much RAM a certain slave system has, you can simply access
-`http://localhost:19999/host/SLAVE_NAME/api/v1/info` and reference the automatically-generated host labels from the
-slave system. It's a vastly simplified way of accessing critical information about your infrastructure.
+Now, if you'd like to remind yourself of how much RAM a certain child node has, you can access
+`http://localhost:19999/host/CHILD_HOSTNAME/api/v1/info` and reference the automatically-generated host labels from the
+child system. It's a vastly simplified way of accessing critical information about your infrastructure.
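As a quick sketch, assuming the parent runs on localhost, `jq` is installed, and a child is streaming under the hostname `child-web-01`:

```bash
# Read a child node's auto-generated host labels through its parent's API.
curl -s http://localhost:19999/host/child-web-01/api/v1/info | jq '.host_labels'
```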
-> ⚠️ Because automatic labels for slave nodes are accessible via API calls, and contain sensitive information like
+> ⚠️ Because automatic labels for child nodes are accessible via API calls, and contain sensitive information like
> kernel and operating system versions, you should secure streaming connections with SSL. See the [streaming
> documentation](/streaming/README.md#securing-streaming-communications) for details. You may also want to use
> [access lists](/web/server/README.md#access-lists) or [expose the API only to LAN/localhost
> connections](/docs/netdata-security.md#expose-netdata-only-in-a-private-lan).
-You can also use `_is_master`, `_is_slave`, and any other host labels in both health entities and metrics exporting.
-Speaking of which...
+You can also use `_is_parent`, `_is_child`, and any other host labels in both health entities and metrics
+exporting. Speaking of which...
## Host labels in health entities
@@ -138,11 +138,11 @@ Or, by using one of the automatic labels, for only webserver systems running a s
host labels: _os_name = Debian*
```
-In a streaming configuration where a master agent is triggering alarms for its slaves, you could create health entities
-that apply only to slaves:
+In a streaming configuration where a parent node is triggering alarms for its child nodes, you could create health
+entities that apply only to child nodes:
```yaml
- host labels: _is_slave = true
+ host labels: _is_child = true
```
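For context, here is how that line might sit inside a complete health entity; the template name, lookup, and thresholds below are assumed for illustration:

```yaml
 template: child_10min_cpu_usage
       on: system.cpu
host labels: _is_child = true
   lookup: average -10m unaligned of user,system
    units: %
    every: 1m
     warn: $this > 75
     crit: $this > 90
     info: average CPU utilization over the last 10 minutes on child nodes
```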
Or when ephemeral Docker nodes are involved:
diff --git a/docs/netdata-security.md b/docs/netdata-security.md
index 36ee6d5e9d..97b9bae939 100644
--- a/docs/netdata-security.md
+++ b/docs/netdata-security.md
@@ -40,7 +40,10 @@ There are a few cases, however, where raw source data are exposed only to processes
So, Netdata **plugins**, even those running with escalated capabilities or privileges, perform a **hard-coded data collection job**. They do not accept commands from Netdata. The communication is strictly **unidirectional**: from the plugin towards the Netdata daemon. The original application data collected by each plugin never leave the process in which they are collected; they are not saved and are not transferred to the Netdata daemon. The communication from the plugins to the Netdata daemon includes only chart metadata and processed metric values.
-Netdata slaves streaming metrics to upstream Netdata servers, use exactly the same protocol local plugins use. The raw data collected by the plugins of slave Netdata servers are **never leaving the host they are collected**. The only data appearing on the wire are chart metadata and metric values. This communication is also **unidirectional**: slave Netdata servers never accept commands from master Netdata servers.
+Child nodes use the same protocol when streaming metrics to their parent nodes. The raw data collected by the plugins
+of child Netdata servers **never leave the host on which they are collected**. The only data appearing on the wire are
+chart metadata and metric values. This communication is also **unidirectional**: child nodes never accept commands from
+parent Netdata servers.
## Netdata is read-only
@@ -190,7 +193,10 @@ Of course, there are many more methods you could use to protect Netdata:
- If you are always under a static IP, you can use the script given above to allow direct access to your Netdata servers without authentication, from all your static IPs.
-- install all your Netdata in **headless data collector** mode, forwarding all metrics in real-time to a master Netdata server, which will be protected with authentication using an nginx server running locally at the master Netdata server. This requires more resources (you will need a bigger master Netdata server), but does not require any firewall changes, since all the slave Netdata servers will not be listening for incoming connections.
+- install all your Netdata Agents in **headless data collector** mode, forwarding all metrics in real-time to a parent
+  Netdata server, which will be protected with authentication using an nginx server running locally at the parent
+  Netdata server. This requires more resources (you will need a bigger parent Netdata server), but does not require
+  any firewall changes, since the child Netdata servers will not be listening for incoming connections. A sketch of
+  the child side of this setup follows below.
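As a rough sketch of that topology, each child streams to the parent via `stream.conf`; the destination address and API key below are placeholders:

```conf
# stream.conf on every child node
[stream]
    enabled = yes
    destination = parent.example.com:19999
    api key = 00000000-0000-0000-0000-000000000000
```

Pairing this with `memory mode = none` in the child's `netdata.conf` makes it a true headless collector.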
## Anonymous Statistics