author     Promise Akpan <akpanpromise@hotmail.com>                        2019-08-15 12:06:39 +0100
committer  Chris Akritidis <43294513+cakrit@users.noreply.github.com>     2019-08-15 13:06:39 +0200
commit     f5006d51e8caf9148d393eb68d53dc9fcd28b7b6 (patch)
tree       03b757236d6b45e46813a4a875c77dc775e5f896 /backends
parent     69172fd57472df75d877f43de2dcc693c48ab5c0 (diff)
Fix Markdown Lint warnings (#6664)
* make remark access all directories
* detailed fix after autofix by remark lint
* cross check autofix for this set of files
* crosscheck more files
* crosschecking and small fixes
* crosscheck autofixed md files
Diffstat (limited to 'backends')
-rw-r--r--  backends/README.md                          199
-rw-r--r--  backends/WALKTHROUGH.md                      18
-rw-r--r--  backends/aws_kinesis/README.md                7
-rw-r--r--  backends/mongodb/README.md                    5
-rw-r--r--  backends/prometheus/README.md                23
-rw-r--r--  backends/prometheus/remote_write/README.md    2
6 files changed, 133 insertions, 121 deletions
diff --git a/backends/README.md b/backends/README.md
index 10d457248f..f93e60f56b 100644
--- a/backends/README.md
+++ b/backends/README.md
@@ -14,65 +14,65 @@ X seconds (though, it can send them per second if you need it to).
## features
-1. Supported backends
+1. Supported backends
- - **graphite** (`plaintext interface`, used by **Graphite**, **InfluxDB**, **KairosDB**,
- **Blueflood**, **ElasticSearch** via logstash tcp input and the graphite codec, etc)
+ - **graphite** (`plaintext interface`, used by **Graphite**, **InfluxDB**, **KairosDB**,
+ **Blueflood**, **ElasticSearch** via logstash tcp input and the graphite codec, etc)
- metrics are sent to the backend server as `prefix.hostname.chart.dimension`. `prefix` is
- configured below, `hostname` is the hostname of the machine (can also be configured).
+ metrics are sent to the backend server as `prefix.hostname.chart.dimension`. `prefix` is
+ configured below, `hostname` is the hostname of the machine (can also be configured).
- - **opentsdb** (`telnet or HTTP interfaces`, used by **OpenTSDB**, **InfluxDB**, **KairosDB**, etc)
+ - **opentsdb** (`telnet or HTTP interfaces`, used by **OpenTSDB**, **InfluxDB**, **KairosDB**, etc)
- metrics are sent to opentsdb as `prefix.chart.dimension` with tag `host=hostname`.
+ metrics are sent to opentsdb as `prefix.chart.dimension` with tag `host=hostname`.
- - **json** document DBs
+ - **json** document DBs
- metrics are sent to a document db, `JSON` formatted.
+ metrics are sent to a document db, `JSON` formatted.
- - **prometheus** is described at [prometheus page](prometheus/) since it pulls data from Netdata.
+ - **prometheus** is described at [prometheus page](prometheus/) since it pulls data from Netdata.
- - **prometheus remote write** (a binary snappy-compressed protocol buffer encoding over HTTP used by
- **Elasticsearch**, **Gnocchi**, **Graphite**, **InfluxDB**, **Kafka**, **OpenTSDB**,
- **PostgreSQL/TimescaleDB**, **Splunk**, **VictoriaMetrics**,
- and a lot of other [storage providers](https://prometheus.io/docs/operating/integrations/#remote-endpoints-and-storage))
+ - **prometheus remote write** (a binary snappy-compressed protocol buffer encoding over HTTP used by
+ **Elasticsearch**, **Gnocchi**, **Graphite**, **InfluxDB**, **Kafka**, **OpenTSDB**,
+ **PostgreSQL/TimescaleDB**, **Splunk**, **VictoriaMetrics**,
+ and a lot of other [storage providers](https://prometheus.io/docs/operating/integrations/#remote-endpoints-and-storage))
- metrics are labeled in the format, which is used by Netdata for the [plaintext prometheus protocol](prometheus/).
- Notes on using the remote write backend are [here](prometheus/remote_write/).
+ metrics are labeled in the format used by Netdata for the [plaintext prometheus protocol](prometheus/).
+ Notes on using the remote write backend are [here](prometheus/remote_write/).
- - **AWS Kinesis Data Streams**
+ - **AWS Kinesis Data Streams**
- metrics are sent to the service in `JSON` format.
+ metrics are sent to the service in `JSON` format.
- - **MongoDB**
+ - **MongoDB**
- metrics are sent to the database in `JSON` format.
+ metrics are sent to the database in `JSON` format.
-2. Only one backend may be active at a time.
+2. Only one backend may be active at a time.
-3. Netdata can filter metrics (at the chart level), to send only a subset of the collected metrics.
+3. Netdata can filter metrics (at the chart level), to send only a subset of the collected metrics.
-4. Netdata supports three modes of operation for all backends:
+4. Netdata supports three modes of operation for all backends:
- - `as-collected` sends to backends the metrics as they are collected, in the units they are collected.
- So, counters are sent as counters and gauges are sent as gauges, much like all data collectors do.
- For example, to calculate CPU utilization in this format, you need to know how to convert kernel ticks to percentage.
+ - `as-collected` sends to backends the metrics as they are collected, in the units they are collected.
+ So, counters are sent as counters and gauges are sent as gauges, much like all data collectors do.
+ For example, to calculate CPU utilization in this format, you need to know how to convert kernel ticks to percentage.
- - `average` sends to backends normalized metrics from the Netdata database.
- In this mode, all metrics are sent as gauges, in the units Netdata uses. This abstracts data collection
- and simplifies visualization, but you will not be able to copy and paste queries from other sources to convert units.
- For example, CPU utilization percentage is calculated by Netdata, so Netdata will convert ticks to percentage and
- send the average percentage to the backend.
+ - `average` sends to backends normalized metrics from the Netdata database.
+ In this mode, all metrics are sent as gauges, in the units Netdata uses. This abstracts data collection
+ and simplifies visualization, but you will not be able to copy and paste queries from other sources to convert units.
+ For example, CPU utilization percentage is calculated by Netdata, so Netdata will convert ticks to percentage and
+ send the average percentage to the backend.
- - `sum` or `volume`: the sum of the interpolated values shown on the Netdata graphs is sent to the backend.
- So, if Netdata is configured to send data to the backend every 10 seconds, the sum of the 10 values shown on the
- Netdata charts will be used.
+ - `sum` or `volume`: the sum of the interpolated values shown on the Netdata graphs is sent to the backend.
+ So, if Netdata is configured to send data to the backend every 10 seconds, the sum of the 10 values shown on the
+ Netdata charts will be used.
Time-series databases suggest collecting the raw values (`as-collected`). If you plan to invest in building your monitoring around a time-series database and you already know (or will invest in learning) how to convert units and normalize the metrics in Grafana or other visualization tools, we suggest using `as-collected` (see the illustrative sketch after this list).
If, on the other hand, you just need long term archiving of Netdata metrics and you plan to mainly work with Netdata, we suggest using `average`. It decouples visualization from data collection, so it will generally be a lot simpler. Furthermore, if you use `average`, the charts shown in the back-end will match exactly what you see in Netdata, which is not necessarily true for the other modes of operation.
-5. This code is smart enough, not to slow down Netdata, independently of the speed of the backend server.
+5. This code is smart enough not to slow down Netdata, regardless of the speed of the backend server.
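As a rough illustration of these three modes, assume a backend interval of 10 seconds and a CPU dimension that Netdata renders as a percentage (all values below are hypothetical):

```
as-collected -> 183320   (raw counter, kernel ticks)
average      -> 4.5      (gauge, percent: the mean of the 10 values shown on the chart)
sum/volume   -> 45       (gauge, percent: the sum of those same 10 values)
```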
## configuration
@@ -96,25 +96,25 @@ of `netdata.conf` from your Netdata):
send names instead of ids = yes
```
-- `enabled = yes | no`, enables or disables sending data to a backend
+- `enabled = yes | no`, enables or disables sending data to a backend
-- `type = graphite | opentsdb:telnet | opentsdb:http | opentsdb:https | json | kinesis | mongodb`, selects the backend type
+- `type = graphite | opentsdb:telnet | opentsdb:http | opentsdb:https | json | kinesis | mongodb`, selects the backend type
-- `destination = host1 host2 host3 ...`, accepts **a space separated list** of hostnames,
- IPs (IPv4 and IPv6) and ports to connect to.
- Netdata will use the **first available** to send the metrics.
+- `destination = host1 host2 host3 ...`, accepts **a space separated list** of hostnames,
+ IPs (IPv4 and IPv6) and ports to connect to.
+ Netdata will use the **first available** to send the metrics.
- The format of each item in this list, is: `[PROTOCOL:]IP[:PORT]`.
+ The format of each item in this list is: `[PROTOCOL:]IP[:PORT]`.
- `PROTOCOL` can be `udp` or `tcp`. `tcp` is the default and only supported by the current backends.
+ `PROTOCOL` can be `udp` or `tcp`. `tcp` is the default and the only one supported by the current backends.
- `IP` can be `XX.XX.XX.XX` (IPv4), or `[XX:XX...XX:XX]` (IPv6).
- For IPv6 you can to enclose the IP in `[]` to separate it from the port.
+ `IP` can be `XX.XX.XX.XX` (IPv4), or `[XX:XX...XX:XX]` (IPv6).
+ For IPv6 you need to enclose the IP in `[]` to separate it from the port.
- `PORT` can be a number of a service name. If omitted, the default port for the backend will be used
- (graphite = 2003, opentsdb = 4242).
+ `PORT` can be a number or a service name. If omitted, the default port for the backend will be used
+ (graphite = 2003, opentsdb = 4242).
- Example IPv4:
+ Example IPv4:
```
destination = 10.11.14.2:4242 10.11.14.3:4242 10.11.14.4:4242
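# As a hypothetical example, IPv4 and IPv6 destinations can be mixed; the IPv6
# address is enclosed in [] to separate it from the port:
# destination = tcp:[fe80::1]:4242 10.11.14.2:4242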
@@ -139,71 +139,71 @@ of `netdata.conf` from your Netdata):
The MongoDB backend doesn't use the `destination` option for its configuration. It uses the `mongodb.conf`
[configuration file](mongodb/README.md) instead.
-- `data source = as collected`, or `data source = average`, or `data source = sum`, selects the kind of
- data that will be sent to the backend.
+- `data source = as collected`, or `data source = average`, or `data source = sum`, selects the kind of
+ data that will be sent to the backend.
-- `hostname = my-name`, is the hostname to be used for sending data to the backend server. By default
- this is `[global].hostname`.
+- `hostname = my-name`, is the hostname to be used for sending data to the backend server. By default
+ this is `[global].hostname`.
-- `prefix = Netdata`, is the prefix to add to all metrics.
+- `prefix = Netdata`, is the prefix to add to all metrics.
-- `update every = 10`, is the number of seconds between sending data to the backend. Netdata will add
- some randomness to this number, to prevent stressing the backend server when many Netdata servers send
- data to the same backend. This randomness does not affect the quality of the data, only the time they
- are sent.
+- `update every = 10`, is the number of seconds between sending data to the backend. Netdata will add
+ some randomness to this number, to prevent stressing the backend server when many Netdata servers send
+ data to the same backend. This randomness does not affect the quality of the data, only the time they
+ are sent.
-- `buffer on failures = 10`, is the number of iterations (each iteration is `[backend].update every` seconds)
- to buffer data, when the backend is not available. If the backend fails to receive the data after that
- many failures, data loss on the backend is expected (Netdata will also log it).
+- `buffer on failures = 10`, is the number of iterations (each iteration is `[backend].update every` seconds)
+ to buffer data, when the backend is not available. If the backend fails to receive the data after that
+ many failures, data loss on the backend is expected (Netdata will also log it).
-- `timeout ms = 20000`, is the timeout in milliseconds to wait for the backend server to process the data.
- By default this is `2 * update_every * 1000`.
+- `timeout ms = 20000`, is the timeout in milliseconds to wait for the backend server to process the data.
+ By default this is `2 * update_every * 1000`.
-- `send hosts matching = localhost *` includes one or more space separated patterns, using ` * ` as wildcard
- (any number of times within each pattern). The patterns are checked against the hostname (the localhost
- is always checked as `localhost`), allowing us to filter which hosts will be sent to the backend when
- this Netdata is a central Netdata aggregating multiple hosts. A pattern starting with ` ! ` gives a
- negative match. So to match all hosts named `*db*` except hosts containing `*slave*`, use
- `!*slave* *db*` (so, the order is important: the first pattern matching the hostname will be used - positive
- or negative).
+- `send hosts matching = localhost *` includes one or more space separated patterns, using `*` as wildcard
+ (any number of times within each pattern). The patterns are checked against the hostname (the localhost
+ is always checked as `localhost`), allowing us to filter which hosts will be sent to the backend when
+ this Netdata is a central Netdata aggregating multiple hosts. A pattern starting with `!` gives a
+ negative match. So to match all hosts named `*db*` except hosts containing `*slave*`, use
+ `!*slave* *db*` (so, the order is important: the first pattern matching the hostname will be used - positive
+ or negative).
-- `send charts matching = *` includes one or more space separated patterns, using ` * ` as wildcard (any
- number of times within each pattern). The patterns are checked against both chart id and chart name.
- A pattern starting with ` ! ` gives a negative match. So to match all charts named `apps.*`
- except charts ending in `*reads`, use `!*reads apps.*` (so, the order is important: the first pattern
- matching the chart id or the chart name will be used - positive or negative).
+- `send charts matching = *` includes one or more space separated patterns, using `*` as wildcard (any
+ number of times within each pattern). The patterns are checked against both chart id and chart name.
+ A pattern starting with `!` gives a negative match. So to match all charts named `apps.*`
+ except charts ending in `*reads`, use `!*reads apps.*` (so, the order is important: the first pattern
+ matching the chart id or the chart name will be used - positive or negative).
-- `send names instead of ids = yes | no` controls the metric names Netdata should send to backend.
- Netdata supports names and IDs for charts and dimensions. Usually IDs are unique identifiers as read
- by the system and names are human friendly labels (also unique). Most charts and metrics have the same
- ID and name, but in several cases they are different: disks with device-mapper, interrupts, QoS classes,
- statsd synthetic charts, etc.
+- `send names instead of ids = yes | no` controls the metric names Netdata should send to backend.
+ Netdata supports names and IDs for charts and dimensions. Usually IDs are unique identifiers as read
+ by the system and names are human friendly labels (also unique). Most charts and metrics have the same
+ ID and name, but in several cases they are different: disks with device-mapper, interrupts, QoS classes,
+ statsd synthetic charts, etc.
-- `host tags = list of TAG=VALUE` defines tags that should be appended on all metrics for the given host.
- These are currently only sent to opentsdb and prometheus. Please use the appropriate format for each
- time-series db. For example opentsdb likes them like `TAG1=VALUE1 TAG2=VALUE2`, but prometheus like
- `tag1="value1",tag2="value2"`. Host tags are mirrored with database replication (streaming of metrics
- between Netdata servers).
+- `host tags = list of TAG=VALUE` defines tags that should be appended on all metrics for the given host.
+ These are currently only sent to opentsdb and prometheus. Please use the appropriate format for each
+ time-series db. For example opentsdb likes them like `TAG1=VALUE1 TAG2=VALUE2`, but prometheus likes
+ `tag1="value1",tag2="value2"`. Host tags are mirrored with database replication (streaming of metrics
+ between Netdata servers).
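A minimal `netdata.conf` sketch combining several of the options above (the patterns and tags are the hypothetical examples used in this section):

```
[backend]
    send hosts matching = !*slave* *db*
    send charts matching = !*reads apps.*
    host tags = TAG1=VALUE1 TAG2=VALUE2
```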
## monitoring operation
Netdata provides 5 charts:
-1. **Buffered metrics**, the number of metrics Netdata added to the buffer for dispatching them to the
- backend server.
+1. **Buffered metrics**, the number of metrics Netdata added to the buffer for dispatching them to the
+ backend server.
-2. **Buffered data size**, the amount of data (in KB) Netdata added the buffer.
+2. **Buffered data size**, the amount of data (in KB) Netdata added to the buffer.
-3. ~~**Backend latency**, the time the backend server needed to process the data Netdata sent.
- If there was a re-connection involved, this includes the connection time.~~
- (this chart has been removed, because it only measures the time Netdata needs to give the data
- to the O/S - since the backend servers do not ack the reception, Netdata does not have any means
- to measure this properly).
+3. ~~**Backend latency**, the time the backend server needed to process the data Netdata sent.
+ If there was a re-connection involved, this includes the connection time.~~
+ (this chart has been removed, because it only measures the time Netdata needs to give the data
+ to the O/S - since the backend servers do not ack the reception, Netdata does not have any means
+ to measure this properly).
-4. **Backend operations**, the number of operations performed by Netdata.
+4. **Backend operations**, the number of operations performed by Netdata.
-5. **Backend thread CPU usage**, the CPU resources consumed by the Netdata thread, that is responsible
- for sending the metrics to the backend server.
+5. **Backend thread CPU usage**, the CPU resources consumed by the Netdata thread, that is responsible
+ for sending the metrics to the backend server.
![image](https://cloud.githubusercontent.com/assets/2662304/20463536/eb196084-af3d-11e6-8ee5-ddbd3b4d8449.png)
@@ -213,12 +213,11 @@ The latest version of the alarms configuration for monitoring the backend is [he
Netdata adds 4 alarms:
-1. `backend_last_buffering`, number of seconds since the last successful buffering of backend data
-2. `backend_metrics_sent`, percentage of metrics sent to the backend server
-3. `backend_metrics_lost`, number of metrics lost due to repeating failures to contact the backend server
-4. ~~`backend_slow`, the percentage of time between iterations needed by the backend time to process the data sent by Netdata~~ (this was misleading and has been removed).
+1. `backend_last_buffering`, number of seconds since the last successful buffering of backend data
+2. `backend_metrics_sent`, percentage of metrics sent to the backend server
+3. `backend_metrics_lost`, number of metrics lost due to repeating failures to contact the backend server
+4. ~~`backend_slow`, the percentage of time between iterations needed by the backend time to process the data sent by Netdata~~ (this was misleading and has been removed).
![image](https://cloud.githubusercontent.com/assets/2662304/20463779/a46ed1c2-af43-11e6-91a5-07ca4533cac3.png)
-
-[![analytics](https://www.google-analytics.com/collect?v=1&aip=1&t=pageview&_s=1&ds=github&dr=https%3A%2F%2Fgithub.com%2Fnetdata%2Fnetdata&dl=https%3A%2F%2Fmy-netdata.io%2Fgithub%2Fbackends%2FREADME&_u=MAC~&cid=5792dfd7-8dc4-476b-af31-da2fdb9f93d2&tid=UA-64295674-3)]()
+[![analytics](https://www.google-analytics.com/collect?v=1&aip=1&t=pageview&_s=1&ds=github&dr=https%3A%2F%2Fgithub.com%2Fnetdata%2Fnetdata&dl=https%3A%2F%2Fmy-netdata.io%2Fgithub%2Fbackends%2FREADME&_u=MAC~&cid=5792dfd7-8dc4-476b-af31-da2fdb9f93d2&tid=UA-64295674-3)](<>)
diff --git a/backends/WALKTHROUGH.md b/backends/WALKTHROUGH.md
index 50cd08b254..19f4ac0e17 100644
--- a/backends/WALKTHROUGH.md
+++ b/backends/WALKTHROUGH.md
@@ -1,6 +1,7 @@
# Netdata, Prometheus, Grafana stack
## Intro
+
In this article I will walk you through the basics of getting Netdata,
Prometheus and Grafana all working together and monitoring your application
servers. This article will be using docker on your local workstation. We will be
@@ -11,6 +12,7 @@ without cloud accounts or access to VMs can try this out and for its speed of
deployment.
## Why Netdata, Prometheus, and Grafana
+
Some time ago I was introduced to Netdata by a coworker. We were attempting to
troubleshoot python code which seemed to be bottlenecked. I was instantly
impressed by the amount of metrics Netdata exposes to you. I quickly added
@@ -40,6 +42,7 @@ together to create a modern monitoring stack. This stack will offer you
visibility into your application and systems performance.
## Getting Started - Netdata
+
To begin let’s create our container which we will install Netdata on. We need
to run a container, forward the necessary port that Netdata listens on, and
attach a tty so we can interact with the bash shell on the container. But
@@ -101,6 +104,7 @@ observing is “system”. You can begin to draw links between the charts in Net
to the prometheus metrics format in this manner.
## Prometheus
+
We will be installing prometheus in a container for the purpose of demonstration.
While prometheus does have an official container I would like to walk through
the install process and setup on a fresh container. This will allow anyone
@@ -189,9 +193,11 @@ scrape_configs:
```
Let’s start prometheus once again by running `/opt/prometheus/prometheus`. If we
-now navigate to prometheus at ‘<http://localhost:9090/targets>’ we should see our
+now navigate to prometheus at <http://localhost:9090/targets> we should see our
target being successfully scraped. If we now go back to Prometheus's
-homepage and begin to type ‘netdata_’ Prometheus should auto complete metrics
+homepage and begin to type ‘netdata\_’ Prometheus should auto complete metrics
it is now scraping.
![](https://github.com/ldelossa/NetdataTutorial/raw/master/Screen%20Shot%202017-07-28%20at%205.13.43%20PM.png)
@@ -247,7 +253,7 @@ this point to read [this page](../backends/prometheus/#using-netdata-with-promet
The key point here is that Netdata can export metrics from its internal DB or
can send metrics “as-collected” by specifying the ‘source=as-collected’ url
parameter like so.
-http://localhost:19999/api/v1/allmetrics?format=prometheus&help=yes&types=yes&source=as-collected
+<http://localhost:19999/api/v1/allmetrics?format=prometheus&help=yes&types=yes&source=as-collected>
If you choose to use this method you will need to use Prometheus's set of
functions here: <https://prometheus.io/docs/querying/functions/> to obtain useful
metrics as you are now dealing with raw counters from the system. For example
@@ -258,6 +264,7 @@ that. If you find limitations then consider re-writing your queries using the
raw data and using Prometheus functions to get the desired chart.
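For instance, a raw counter scraped "as-collected" could be turned into a per-second rate with `rate()` (the metric name here is hypothetical):

```
rate(netdata_system_cpu_total{chart="system.cpu",dimension="user"}[1m])
```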
## Grafana
+
Finally we make it to grafana. This is the easiest part in my opinion. This time
we will actually run the official grafana docker container as all configuration
we need to do is done via the GUI. Let’s run the following command:
@@ -266,7 +273,8 @@ we need to do is done via the GUI. Let’s run the following command:
docker run -i -p 3000:3000 --network=netdata-tutorial grafana/grafana
```
-This will get grafana running at ‘<http://localhost:3000/>’ Let’s go there and
+This will get grafana running at <http://localhost:3000/>. Let's go there and
login using the credentials Admin:Admin.
The first thing we want to do is click ‘Add data source’. Let’s make it look
@@ -291,4 +299,4 @@ about the monitoring system until Prometheus cannot keep up with your scale.
Once this happens there are options presented in the Prometheus documentation
for solving this. Hope this was helpful, happy monitoring.
-[![analytics](https://www.google-analytics.com/collect?v=1&aip=1&t=pageview&_s=1&ds=github&dr=https%3A%2F%2Fgithub.com%2Fnetdata%2Fnetdata&dl=https%3A%2F%2Fmy-netdata.io%2Fgithub%2Fbackends%2FWALKTHROUGH&_u=MAC~&cid=5792dfd7-8dc4-476b-af31-da2fdb9f93d2&tid=UA-64295674-3)]()
+[![analytics](https://www.google-analytics.com/collect?v=1&aip=1&t=pageview&_s=1&ds=github&dr=https%3A%2F%2Fgithub.com%2Fnetdata%2Fnetdata&dl=https%3A%2F%2Fmy-netdata.io%2Fgithub%2Fbackends%2FWALKTHROUGH&_u=MAC~&cid=5792dfd7-8dc4-476b-af31-da2fdb9f93d2&tid=UA-64295674-3)](<>)
diff --git a/backends/aws_kinesis/README.md b/backends/aws_kinesis/README.md
index b5726d2dbd..a6529f237f 100644
--- a/backends/aws_kinesis/README.md
+++ b/backends/aws_kinesis/README.md
@@ -13,15 +13,18 @@ cmake -DCMAKE_INSTALL_LIBDIR=/usr/lib -DCMAKE_INSTALL_INCLUDEDIR=/usr/include -D
## Configuration
To enable data sending to the kinesis backend set the following options in `netdata.conf`:
+
```
[backend]
enabled = yes
type = kinesis
destination = us-east-1
```
+
Set the `destination` option to an AWS region.
In the Netdata configuration directory run `./edit-config aws_kinesis.conf` and set AWS credentials and stream name:
+
```
# AWS credentials
aws_access_key_id = your_access_key_id
@@ -30,9 +33,9 @@ aws_secret_access_key = your_secret_access_key
# destination stream
stream name = your_stream_name
```
+
Alternatively, AWS credentials can be set for the *netdata* user using AWS SDK for C++ [standard methods](https://docs.aws.amazon.com/sdk-for-cpp/v1/developer-guide/credentials.html).
A partition key for every record is computed automatically by Netdata in order to distribute records evenly across the available shards.
-
-[![analytics](https://www.google-analytics.com/collect?v=1&aip=1&t=pageview&_s=1&ds=github&dr=https%3A%2F%2Fgithub.com%2Fnetdata%2Fnetdata&dl=https%3A%2F%2Fmy-netdata.io%2Fgithub%2Fbackends%2Faws_kinesis%2FREADME&_u=MAC~&cid=5792dfd7-8dc4-476b-af31-da2fdb9f93d2&tid=UA-64295674-3)]()
+[![analytics](https://www.google-analytics.com/collect?v=1&aip=1&t=pageview&_s=1&ds=github&dr=https%3A%2F%2Fgithub.com%2Fnetdata%2Fnetdata&dl=https%3A%2F%2Fmy-netdata.io%2Fgithub%2Fbackends%2Faws_kinesis%2FREADME&_u=MAC~&cid=5792dfd7-8dc4-476b-af31-da2fdb9f93d2&tid=UA-64295674-3)](<>)
diff --git a/backends/mongodb/README.md b/backends/mongodb/README.md
index fc26dfe10e..7538fe8be3 100644
--- a/backends/mongodb/README.md
+++ b/backends/mongodb/README.md
@@ -7,6 +7,7 @@ To use MongoDB as a backend, `libmongoc` 1.7.0 or higher should be [installed](h
## Configuration
To enable data sending to the MongoDB backend set the following options in `netdata.conf`:
+
```
[backend]
enabled = yes
@@ -14,6 +15,7 @@ To enable data sending to the MongoDB backend set the following options in `netd
```
In the Netdata configuration directory run `./edit-config mongodb.conf` and set [MongoDB URI](https://docs.mongodb.com/manual/reference/connection-string/), database name, and collection name:
+
```
# URI
uri = mongodb://<hostname>
@@ -27,5 +29,4 @@ collection = your_collection_name
The default socket timeout depends on the backend update interval. The timeout is 500 ms shorter than the interval (but not less than 1000 ms). You can alter the timeout using the `sockettimeoutms` MongoDB URI option.
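For example, with `update every = 10` the computed timeout would be 10000 - 500 = 9500 ms. A hypothetical override through the URI option could look like:

```
# set an explicit 5000 ms socket timeout (value is illustrative)
uri = mongodb://localhost/?sockettimeoutms=5000
```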
-
-[![analytics](https://www.google-analytics.com/collect?v=1&aip=1&t=pageview&_s=1&ds=github&dr=https%3A%2F%2Fgithub.com%2Fnetdata%2Fnetdata&dl=https%3A%2F%2Fmy-netdata.io%2Fgithub%2Fbackends%2Fmongodb%2FREADME&_u=MAC~&cid=5792dfd7-8dc4-476b-af31-da2fdb9f93d2&tid=UA-64295674-3)]()
+[![analytics](https://www.google-analytics.com/collect?v=1&aip=1&t=pageview&_s=1&ds=github&dr=https%3A%2F%2Fgithub.com%2Fnetdata%2Fnetdata&dl=https%3A%2F%2Fmy-netdata.io%2Fgithub%2Fbackends%2Fmongodb%2FREADME&_u=MAC~&cid=5792dfd7-8dc4-476b-af31-da2fdb9f93d2&tid=UA-64295674-3)](<>)
diff --git a/backends/prometheus/README.md b/backends/prometheus/README.md
index ddd729bf68..0a4be27e31 100644
--- a/backends/prometheus/README.md
+++ b/backends/prometheus/README.md
@@ -8,7 +8,7 @@ Prometheus is a distributed monitoring system which offers a very simple setup a
### Installing Netdata
-There are number of ways to install Netdata according to [Installation](../../packaging/installer/#installation)
+There are a number of ways to install Netdata according to [Installation](../../packaging/installer/#installation)\
The suggested way is to install the latest Netdata and keep it upgraded automatically, using the one-line installation:
```sh
@@ -173,6 +173,7 @@ ExecStop=/bin/kill -SIGINT $MAINPID
[Install]
WantedBy=multi-user.target
```
+
##### Start Prometheus
```sh
@@ -184,7 +185,7 @@ Prometheus should now start and listen on port 9090. Attempt to head there with
If everything is working correctly when you fetch `http://your.prometheus.ip:9090` you will see a 'Status' tab. Click this, then click on 'targets'. We should see the Netdata host as a scraped target.
----
+- - -
## Netdata support for prometheus
@@ -218,22 +219,22 @@ Netdata can send metrics to prometheus from 3 data sources:
- `as collected` or `raw` - this data source sends the metrics to prometheus as they are collected. No conversion is done by Netdata. The latest value for each metric is just given to prometheus. This is the method preferred by prometheus, but it is also the hardest to work with. To work with this data source, you will need to understand how to get meaningful values out of them.
- The format of the metrics is: `CONTEXT{chart="CHART",family="FAMILY",dimension="DIMENSION"}`.
+ The format of the metrics is: `CONTEXT{chart="CHART",family="FAMILY",dimension="DIMENSION"}`.
- If the metric is a counter (`incremental` in Netdata lingo), `_total` is appended the context.
+ If the metric is a counter (`incremental` in Netdata lingo), `_total` is appended to the context.
- Unlike prometheus, Netdata allows each dimension of a chart to have a different algorithm and conversion constants (`multiplier` and `divisor`). In this case, that the dimensions of a charts are heterogeneous, Netdata will use this format: `CONTEXT_DIMENSION{chart="CHART",family="FAMILY"}`
+ Unlike prometheus, Netdata allows each dimension of a chart to have a different algorithm and conversion constants (`multiplier` and `divisor`). In this case, where the dimensions of a chart are heterogeneous, Netdata will use this format: `CONTEXT_DIMENSION{chart="CHART",family="FAMILY"}`
- `average` - this data source uses the Netdata database to send the metrics to prometheus as they are presented on the Netdata dashboard. So, all the metrics are sent as gauges, at the units they are presented in the Netdata dashboard charts. This is the easiest to work with.
- The format of the metrics is: `CONTEXT_UNITS_average{chart="CHART",family="FAMILY",dimension="DIMENSION"}`.
+ The format of the metrics is: `CONTEXT_UNITS_average{chart="CHART",family="FAMILY",dimension="DIMENSION"}`.
- When this source is used, Netdata keeps track of the last access time for each prometheus server fetching the metrics. This last access time is used at the subsequent queries of the same prometheus server to identify the time-frame the `average` will be calculated. So, no matter how frequently prometheus scrapes Netdata, it will get all the database data. To identify each prometheus server, Netdata uses by default the IP of the client fetching the metrics. If there are multiple prometheus servers fetching data from the same Netdata, using the same IP, each prometheus server can append `server=NAME` to the URL. Netdata will use this `NAME` to uniquely identify the prometheus server.
+ When this source is used, Netdata keeps track of the last access time for each prometheus server fetching the metrics. This last access time is used on subsequent queries by the same prometheus server to identify the time frame over which the `average` is calculated. So, no matter how frequently prometheus scrapes Netdata, it will get all the database data. To identify each prometheus server, Netdata uses by default the IP of the client fetching the metrics. If there are multiple prometheus servers fetching data from the same Netdata, using the same IP, each prometheus server can append `server=NAME` to the URL. Netdata will use this `NAME` to uniquely identify the prometheus server.
- `sum` or `volume` is like `average`, but instead of averaging the values, it sums them.
- The format of the metrics is: `CONTEXT_UNITS_sum{chart="CHART",family="FAMILY",dimension="DIMENSION"}`.
- All the other operations are the same with `average`.
+ The format of the metrics is: `CONTEXT_UNITS_sum{chart="CHART",family="FAMILY",dimension="DIMENSION"}`.
+ All the other operations are the same as with `average`.
Keep in mind that early versions of Netdata were sending the metrics as: `CHART_DIMENSION{}`.
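To make the three formats concrete, the same dimension might be exposed as follows under each data source (metric names and values are illustrative):

```
# as collected (a counter, so _total is appended to the context):
netdata_system_cpu_total{chart="system.cpu",family="cpu",dimension="user"} 183320
# average:
netdata_system_cpu_percentage_average{chart="system.cpu",family="cpu",dimension="user"} 4.5
# sum:
netdata_system_cpu_percentage_sum{chart="system.cpu",family="cpu",dimension="user"} 45
```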
@@ -363,7 +364,7 @@ Netdata can filter the metrics it sends to prometheus with this setting:
send charts matching = *
```
-This settings accepts a space separated list of patterns to match the **charts** to be sent to prometheus. Each pattern can use ` * ` as wildcard, any number of times (e.g `*a*b*c*` is valid). Patterns starting with ` ! ` give a negative match (e.g `!*.bad users.* groups.*` will send all the users and groups except `bad` user and `bad` group). The order is important: the first match (positive or negative) left to right, is used.
+This setting accepts a space separated list of patterns to match the **charts** to be sent to prometheus. Each pattern can use `*` as wildcard, any number of times (e.g. `*a*b*c*` is valid). Patterns starting with `!` give a negative match (e.g. `!*.bad users.* groups.*` will send all the users and groups except the `bad` user and `bad` group). The order is important: the first match (positive or negative), left to right, is used.
### Changing the prefix of Netdata metrics
@@ -390,4 +391,4 @@ When the data source is set to `average` or `sum`, Netdata remembers the last ac
To uniquely identify each prometheus server, Netdata uses the IP of the client accessing the metrics. If however the IP is not good enough for identifying a single prometheus server (e.g. when prometheus servers are accessing Netdata through a web proxy, or when multiple prometheus servers are NATed to a single IP), each prometheus may append `&server=NAME` to the URL. This `NAME` is used by Netdata to uniquely identify each prometheus server and keep track of its last access time.
-[![analytics](https://www.google-analytics.com/collect?v=1&aip=1&t=pageview&_s=1&ds=github&dr=https%3A%2F%2Fgithub.com%2Fnetdata%2Fnetdata&dl=https%3A%2F%2Fmy-netdata.io%2Fgithub%2Fbackends%2Fprometheus%2FREADME&_u=MAC~&cid=5792dfd7-8dc4-476b-af31-da2fdb9f93d2&tid=UA-64295674-3)]()
+[![analytics](https://www.google-analytics.com/collect?v=1&aip=1&t=pageview&_s=1&ds=github&dr=https%3A%2F%2Fgithub.com%2Fnetdata%2Fnetdata&dl=https%3A%2F%2Fmy-netdata.io%2Fgithub%2Fbackends%2Fprometheus%2FREADME&_u=MAC~&cid=5792dfd7-8dc4-476b-af31-da2fdb9f93d2&tid=UA-64295674-3)](<>)
diff --git a/backends/prometheus/remote_write/README.md b/backends/prometheus/remote_write/README.md
index 2baa00fa09..8af6f4d1d1 100644
--- a/backends/prometheus/remote_write/README.md
+++ b/backends/prometheus/remote_write/README.md
@@ -27,4 +27,4 @@ The default value is `/receive`. `remote write URL path` is used to set an endpo
The remote write backend does not support `buffer on failures`.
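A minimal `netdata.conf` sketch for this backend; the backend type name `prometheus_remote_write` and the destination shown are assumptions, while `remote write URL path` is the option documented above:

```
[backend]
    enabled = yes
    type = prometheus_remote_write
    destination = your.receiver.host:9201
    remote write URL path = /receive
```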
-[![analytics](https://www.google-analytics.com/collect?v=1&aip=1&t=pageview&_s=1&ds=github&dr=https%3A%2F%2Fgithub.com%2Fnetdata%2Fnetdata&dl=https%3A%2F%2Fmy-netdata.io%2Fgithub%2Fbackends%2Fprometheus%2Fremote_write%2FREADME&_u=MAC~&cid=5792dfd7-8dc4-476b-af31-da2fdb9f93d2&tid=UA-64295674-3)]()
+[![analytics](https://www.google-analytics.com/collect?v=1&aip=1&t=pageview&_s=1&ds=github&dr=https%3A%2F%2Fgithub.com%2Fnetdata%2Fnetdata&dl=https%3A%2F%2Fmy-netdata.io%2Fgithub%2Fbackends%2Fprometheus%2Fremote_write%2FREADME&_u=MAC~&cid=5792dfd7-8dc4-476b-af31-da2fdb9f93d2&tid=UA-64295674-3)](<>)