author    Tasos Katsoulas <12612986+tkatsoulas@users.noreply.github.com>    2023-02-02 15:23:54 +0200
committer GitHub <noreply@github.com>    2023-02-02 15:23:54 +0200
commit    9f1403de7d3ea2633768d34095afcf880c7c4e2d (patch)
tree      0c50a1f42b3e182f6cd5de4e92c609cc76fd3cb5 /exporting
parent    caf18920aac38eed782647957e82c0ab7f64ec17 (diff)
Convert our documentation links to GH absolute links (#14344)
Signed-off-by: Tasos Katsoulas <tasos@netdata.cloud>
Diffstat (limited to 'exporting')
-rw-r--r--  exporting/README.md                          |  28
-rw-r--r--  exporting/WALKTHROUGH.md                     |   4
-rw-r--r--  exporting/aws_kinesis/README.md              |   3
-rw-r--r--  exporting/graphite/README.md                 |  12
-rw-r--r--  exporting/json/README.md                     |   6
-rw-r--r--  exporting/mongodb/README.md                  |   5
-rw-r--r--  exporting/opentsdb/README.md                 |  12
-rw-r--r--  exporting/prometheus/README.md               | 100
-rw-r--r--  exporting/prometheus/remote_write/README.md  |   2
9 files changed, 91 insertions, 81 deletions
diff --git a/exporting/README.md b/exporting/README.md
index 48aefbcea8..bc3ca1c7df 100644
--- a/exporting/README.md
+++ b/exporting/README.md
@@ -16,13 +16,13 @@ configuring, and monitoring Netdata's exporting engine, which allows you to send
databases.
For a quick introduction to the exporting engine's features, read our doc on [exporting metrics to time-series
-databases](/docs/export/external-databases.md), or jump in to [enabling a connector](/docs/export/enable-connector.md).
+databases](https://github.com/netdata/netdata/blob/master/docs/export/external-databases.md), or jump in to [enabling a connector](https://github.com/netdata/netdata/blob/master/docs/export/enable-connector.md).
The exporting engine has a modular structure and supports metric exporting via multiple exporting connector instances at
the same time. You can have different update intervals and filters configured for every exporting connector instance.
When you enable the exporting engine and a connector, the Netdata Agent exports metrics _beginning from the time you
-restart its process_, not the entire [database of long-term metrics](/docs/store/change-metrics-storage.md).
+restart its process_, not the entire [database of long-term metrics](https://github.com/netdata/netdata/blob/master/docs/store/change-metrics-storage.md).
Since Netdata collects thousands of metrics per server per second, which would easily congest any database server when
several Netdata servers are sending data to it, Netdata allows sending metrics at a lower frequency, by resampling them.
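As a hedged sketch of how that resampling is configured — the connector instance name and destination below are placeholders, not part of this changeset — the `update every` option in `exporting.conf` controls the sending frequency:

```conf
[exporting:global]
    # master switch for the exporting engine
    enabled = yes

[graphite:my_instance]
    enabled = yes
    destination = localhost:2003
    # resample and send metrics every 10 seconds instead of every second
    update every = 10
```

With `update every = 10`, each value sent summarizes the last 10 seconds of per-second collection (assuming the default `average` data source).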
@@ -35,27 +35,27 @@ X seconds (though, it can send them per second if you need it to).
### Integration
The exporting engine uses a number of connectors to send Netdata metrics to external time-series databases. See our
-[list of supported databases](/docs/export/external-databases.md#supported-databases) for information on which
+[list of supported databases](https://github.com/netdata/netdata/blob/master/docs/export/external-databases.md#supported-databases) for information on which
connector to enable and configure for your database of choice.
-- [**AWS Kinesis Data Streams**](/exporting/aws_kinesis/README.md): Metrics are sent to the service in `JSON`
+- [**AWS Kinesis Data Streams**](https://github.com/netdata/netdata/blob/master/exporting/aws_kinesis/README.md): Metrics are sent to the service in `JSON`
format.
-- [**Google Cloud Pub/Sub Service**](/exporting/pubsub/README.md): Metrics are sent to the service in `JSON`
+- [**Google Cloud Pub/Sub Service**](https://github.com/netdata/netdata/blob/master/exporting/pubsub/README.md): Metrics are sent to the service in `JSON`
format.
-- [**Graphite**](/exporting/graphite/README.md): A plaintext interface. Metrics are sent to the database server as
+- [**Graphite**](https://github.com/netdata/netdata/blob/master/exporting/graphite/README.md): A plaintext interface. Metrics are sent to the database server as
`prefix.hostname.chart.dimension`. `prefix` is configured below, `hostname` is the hostname of the machine (can
also be configured). Learn more in our guide to [export and visualize Netdata metrics in
- Graphite](/docs/guides/export/export-netdata-metrics-graphite.md).
-- [**JSON** document databases](/exporting/json/README.md)
-- [**OpenTSDB**](/exporting/opentsdb/README.md): Use a plaintext or HTTP interfaces. Metrics are sent to
+ Graphite](https://github.com/netdata/netdata/blob/master/docs/guides/export/export-netdata-metrics-graphite.md).
+- [**JSON** document databases](https://github.com/netdata/netdata/blob/master/exporting/json/README.md)
+- [**OpenTSDB**](https://github.com/netdata/netdata/blob/master/exporting/opentsdb/README.md): Use a plaintext or HTTP interface. Metrics are sent to
OpenTSDB as `prefix.chart.dimension` with tag `host=hostname`.
-- [**MongoDB**](/exporting/mongodb/README.md): Metrics are sent to the database in `JSON` format.
-- [**Prometheus**](/exporting/prometheus/README.md): Use an existing Prometheus installation to scrape metrics
+- [**MongoDB**](https://github.com/netdata/netdata/blob/master/exporting/mongodb/README.md): Metrics are sent to the database in `JSON` format.
+- [**Prometheus**](https://github.com/netdata/netdata/blob/master/exporting/prometheus/README.md): Use an existing Prometheus installation to scrape metrics
from the node using the Netdata API.
-- [**Prometheus remote write**](/exporting/prometheus/remote_write/README.md). A binary snappy-compressed protocol
+- [**Prometheus remote write**](https://github.com/netdata/netdata/blob/master/exporting/prometheus/remote_write/README.md): A binary snappy-compressed protocol
buffer encoding over HTTP. Supports many [storage
providers](https://prometheus.io/docs/operating/integrations/#remote-endpoints-and-storage).
-- [**TimescaleDB**](/exporting/TIMESCALE.md): Use a community-built connector that takes JSON streams from a
+- [**TimescaleDB**](https://github.com/netdata/netdata/blob/master/exporting/TIMESCALE.md): Use a community-built connector that takes JSON streams from a
Netdata client and writes them to a TimescaleDB table.
### Chart filtering
@@ -296,7 +296,7 @@ Configure individual connectors and override any global settings with the follow
Netdata can send metrics to external databases using the TLS/SSL protocol. Unfortunately, some of
them do not support encrypted connections, so you will have to configure a reverse proxy to enable
HTTPS communication between Netdata and an external database. You can set up a reverse proxy with
-[Nginx](/docs/Running-behind-nginx.md).
+[Nginx](https://github.com/netdata/netdata/blob/master/docs/Running-behind-nginx.md).
## Exporting engine monitoring
diff --git a/exporting/WALKTHROUGH.md b/exporting/WALKTHROUGH.md
index 52cbb8ec4c..5afd260452 100644
--- a/exporting/WALKTHROUGH.md
+++ b/exporting/WALKTHROUGH.md
@@ -67,7 +67,7 @@ command to run (`/bin/bash`) and then chooses the base container images (`centos
be sitting inside the shell of the container.
After we have entered the shell we can install Netdata. This process could not be easier. If you take a look at [this
-link](/packaging/installer/README.md), the Netdata devs give us several one-liners to install Netdata. I have not had
+link](https://github.com/netdata/netdata/blob/master/packaging/installer/README.md), the Netdata devs give us several one-liners to install Netdata. I have not had
any issues with these one-liners and their bootstrapping scripts so far (if you run into anything, do share). Run
the following command in your container.
@@ -226,7 +226,7 @@ the `chart` dimension. If you'd like you can combine the `chart` and `instance`
Let's give this a try: `netdata_system_cpu_percentage_average{chart="system.cpu", instance="netdata:19999"}`
This is the basics of using Prometheus to query Netdata. I'd advise everyone at this point to read [this
-page](/exporting/prometheus/README.md#using-netdata-with-prometheus). The key point here is that Netdata can export metrics from
+page](https://github.com/netdata/netdata/blob/master/exporting/prometheus/README.md#using-netdata-with-prometheus). The key point here is that Netdata can export metrics from
its internal DB or can send metrics _as-collected_ by specifying the `source=as-collected` URL parameter like so.
<http://localhost:19999/api/v1/allmetrics?format=prometheus&help=yes&types=yes&source=as-collected> If you choose to use
this method, you will need to use Prometheus's set of functions here: <https://prometheus.io/docs/querying/functions/> to
diff --git a/exporting/aws_kinesis/README.md b/exporting/aws_kinesis/README.md
index 0aff906efc..7921a26545 100644
--- a/exporting/aws_kinesis/README.md
+++ b/exporting/aws_kinesis/README.md
@@ -54,7 +54,8 @@ Set AWS credentials and stream name:
stream name = your_stream_name
```
-Alternatively, you can set AWS credentials for the `netdata` user using AWS SDK for C++ [standard methods](https://docs.aws.amazon.com/sdk-for-cpp/v1/developer-guide/credentials.html).
+Alternatively, you can set AWS credentials for the `netdata` user using AWS SDK for
+C++ [standard methods](https://docs.aws.amazon.com/sdk-for-cpp/v1/developer-guide/credentials.html).
Netdata automatically computes a partition key for every record so that records are distributed evenly across the
available shards.
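To illustrate, a hedged sketch of a complete instance section — the option names follow the connector's configuration style above, and every value is a placeholder:

```conf
[kinesis:my_instance]
    enabled = yes
    # AWS region hosting the stream
    destination = us-east-1
    # static credentials for the netdata user
    aws_access_key_id = your_access_key_id
    aws_secret_access_key = your_secret_access_key
    stream name = your_stream_name
```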
diff --git a/exporting/graphite/README.md b/exporting/graphite/README.md
index faf79b2365..afcdf79845 100644
--- a/exporting/graphite/README.md
+++ b/exporting/graphite/README.md
@@ -11,8 +11,9 @@ learn_autogeneration_metadata: "{'part_of_cloud': False, 'part_of_agent': True}"
# Export metrics to Graphite providers
-You can use the Graphite connector for the [exporting engine](/exporting/README.md) to archive your agent's metrics to
-Graphite providers for long-term storage, further analysis, or correlation with data from other sources.
+You can use the Graphite connector for
+the [exporting engine](https://github.com/netdata/netdata/blob/master/exporting/README.md) to archive your agent's
+metrics to Graphite providers for long-term storage, further analysis, or correlation with data from other sources.
## Configuration
@@ -25,7 +26,8 @@ directory and set the following options:
destination = localhost:2003
```
-Add `:http` or `:https` modifiers to the connector type if you need to use other than a plaintext protocol. For example: `graphite:http:my_graphite_instance`,
+Add `:http` or `:https` modifiers to the connector type if you need to use a protocol other than plaintext. For
+example: `graphite:http:my_graphite_instance`,
`graphite:https:my_graphite_instance`. You can set basic HTTP authentication credentials using
```conf
@@ -33,7 +35,7 @@ Add `:http` or `:https` modifiers to the connector type if you need to use other
password = my_password
```
-The Graphite connector is further configurable using additional settings. See the [exporting reference
-doc](/exporting/README.md#options) for details.
+The Graphite connector is further configurable using additional settings. See
+the [exporting reference doc](https://github.com/netdata/netdata/blob/master/exporting/README.md#options) for details.
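Combining the options above, a minimal sketch of an HTTPS instance with basic authentication (host, port, and credentials are placeholders):

```conf
[graphite:https:my_graphite_instance]
    enabled = yes
    destination = my.graphite.host:443
    username = my_username
    password = my_password
```

The JSON and OpenTSDB connectors below follow the same pattern, differing only in the connector type and default destination port.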
diff --git a/exporting/json/README.md b/exporting/json/README.md
index bd94d7f78f..23ff555cb9 100644
--- a/exporting/json/README.md
+++ b/exporting/json/README.md
@@ -11,7 +11,7 @@ learn_autogeneration_metadata: "{'part_of_cloud': False, 'part_of_agent': True}"
# Export metrics to JSON document databases
-You can use the JSON connector for the [exporting engine](/exporting/README.md) to archive your agent's metrics to JSON
+You can use the JSON connector for the [exporting engine](https://github.com/netdata/netdata/blob/master/exporting/README.md) to archive your agent's metrics to JSON
document databases for long-term storage, further analysis, or correlation with data from other sources.
## Configuration
@@ -33,7 +33,7 @@ Add `:http` or `:https` modifiers to the connector type if you need to use other
password = my_password
```
-The JSON connector is further configurable using additional settings. See the [exporting reference
-doc](/exporting/README.md#options) for details.
+The JSON connector is further configurable using additional settings. See
+the [exporting reference doc](https://github.com/netdata/netdata/blob/master/exporting/README.md#options) for details.
diff --git a/exporting/mongodb/README.md b/exporting/mongodb/README.md
index 692fbed487..0cbe8f0598 100644
--- a/exporting/mongodb/README.md
+++ b/exporting/mongodb/README.md
@@ -11,8 +11,9 @@ learn_autogeneration_metadata: "{'part_of_cloud': False, 'part_of_agent': True}"
# Export metrics to MongoDB
-You can use the MongoDB connector for the [exporting engine](/exporting/README.md) to archive your agent's metrics to a
-MongoDB database for long-term storage, further analysis, or correlation with data from other sources.
+You can use the MongoDB connector for
+the [exporting engine](https://github.com/netdata/netdata/blob/master/exporting/README.md) to archive your agent's
+metrics to a MongoDB database for long-term storage, further analysis, or correlation with data from other sources.
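As a hedged illustration — the connection URI, database, and collection names are placeholders — an instance section for this connector looks like:

```conf
[mongodb:my_instance]
    enabled = yes
    # MongoDB connection URI
    destination = mongodb://localhost:27017
    database = your_database_name
    collection = your_collection_name
```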
## Prerequisites
diff --git a/exporting/opentsdb/README.md b/exporting/opentsdb/README.md
index 5eba6fc03e..c6069f372a 100644
--- a/exporting/opentsdb/README.md
+++ b/exporting/opentsdb/README.md
@@ -11,8 +11,9 @@ learn_autogeneration_metadata: "{'part_of_cloud': False, 'part_of_agent': True}"
# Export metrics to OpenTSDB
-You can use the OpenTSDB connector for the [exporting engine](/exporting/README.md) to archive your agent's metrics to OpenTSDB
-databases for long-term storage, further analysis, or correlation with data from other sources.
+You can use the OpenTSDB connector for
+the [exporting engine](https://github.com/netdata/netdata/blob/master/exporting/README.md) to archive your agent's
+metrics to OpenTSDB databases for long-term storage, further analysis, or correlation with data from other sources.
## Configuration
@@ -25,7 +26,8 @@ directory and set the following options:
destination = localhost:4242
```
-Add `:http` or `:https` modifiers to the connector type if you need to use other than a plaintext protocol. For example: `opentsdb:http:my_opentsdb_instance`,
+Add `:http` or `:https` modifiers to the connector type if you need to use a protocol other than plaintext. For
+example: `opentsdb:http:my_opentsdb_instance`,
`opentsdb:https:my_opentsdb_instance`. You can set basic HTTP authentication credentials using
```conf
@@ -33,7 +35,7 @@ Add `:http` or `:https` modifiers to the connector type if you need to use other
password = my_password
```
-The OpenTSDB connector is further configurable using additional settings. See the [exporting reference
-doc](/exporting/README.md#options) for details.
+The OpenTSDB connector is further configurable using additional settings. See
+the [exporting reference doc](https://github.com/netdata/netdata/blob/master/exporting/README.md#options) for details.
diff --git a/exporting/prometheus/README.md b/exporting/prometheus/README.md
index 4d8ff78a36..97e9c632f9 100644
--- a/exporting/prometheus/README.md
+++ b/exporting/prometheus/README.md
@@ -22,7 +22,8 @@ are starting at a fresh ubuntu shell (whether you'd like to follow along in a VM
### Installing Netdata
-There are number of ways to install Netdata according to [Installation](/packaging/installer/README.md). The suggested way
+There are a number of ways to install Netdata, as described in
+[Installation](https://github.com/netdata/netdata/blob/master/packaging/installer/README.md). The suggested way
installs the latest Netdata and keeps it automatically updated.
<!-- candidate for reuse -->
@@ -82,24 +83,24 @@ sudo tar -xvf /tmp/prometheus-*linux-amd64.tar.gz -C /opt/prometheus --strip=1
We will use the following `prometheus.yml` file. Save it at `/opt/prometheus/prometheus.yml`.
-Make sure to replace `your.netdata.ip` with the IP or hostname of the host running Netdata.
+Make sure to replace `your.netdata.ip` with the IP or hostname of the host running Netdata.
```yaml
# my global config
global:
- scrape_interval: 5s # Set the scrape interval to every 5 seconds. Default is every 1 minute.
+ scrape_interval: 5s # Set the scrape interval to every 5 seconds. Default is every 1 minute.
evaluation_interval: 5s # Evaluate rules every 5 seconds. The default is every 1 minute.
# scrape_timeout is set to the global default (10s).
# Attach these labels to any time series or alerts when communicating with
# external systems (federation, remote storage, Alertmanager).
external_labels:
- monitor: 'codelab-monitor'
+ monitor: 'codelab-monitor'
# Load rules once and periodically evaluate them according to the global 'evaluation_interval'.
rule_files:
- # - "first.rules"
- # - "second.rules"
+# - "first.rules"
+# - "second.rules"
# A scrape configuration containing exactly one endpoint to scrape:
# Here it's Prometheus itself.
@@ -111,7 +112,7 @@ scrape_configs:
# scheme defaults to 'http'.
static_configs:
- - targets: ['0.0.0.0:9090']
+ - targets: [ '0.0.0.0:9090' ]
- job_name: 'netdata-scrape'
@@ -119,7 +120,7 @@ scrape_configs:
params:
# format: prometheus | prometheus_all_hosts
# You can use `prometheus_all_hosts` if you want Prometheus to set the `instance` to your hostname instead of IP
- format: [prometheus]
+ format: [ prometheus ]
#
# sources: as-collected | raw | average | sum | volume
# default is: average
@@ -131,7 +132,7 @@ scrape_configs:
honor_labels: true
static_configs:
- - targets: ['{your.netdata.ip}:19999']
+ - targets: [ '{your.netdata.ip}:19999' ]
```
#### Install nodes.yml
@@ -207,7 +208,7 @@ sudo systemctl start prometheus
sudo systemctl enable prometheus
```
-Prometheus should now start and listen on port 9090. Attempt to head there with your browser.
+Prometheus should now start and listen on port 9090. Attempt to head there with your browser.
If everything is working correctly, fetching `http://your.prometheus.ip:9090` will show a 'Status' tab. Click
this, then click on 'Targets'. You should see the Netdata host as a scraped target.
@@ -224,16 +225,16 @@ Before explaining the changes, we have to understand the key differences between
Each chart in Netdata has several properties (common to all its metrics):
-- `chart_id` - uniquely identifies a chart.
+- `chart_id` - uniquely identifies a chart.
-- `chart_name` - a more human friendly name for `chart_id`, also unique.
+- `chart_name` - a more human-friendly name for `chart_id`, also unique.
-- `context` - this is the template of the chart. All disk I/O charts have the same context, all mysql requests charts
- have the same context, etc. This is used for alarm templates to match all the charts they should be attached to.
+- `context` - this is the template of the chart. All disk I/O charts have the same context, all mysql requests charts
+ have the same context, etc. This is used for alarm templates to match all the charts they should be attached to.
-- `family` groups a set of charts together. It is used as the submenu of the dashboard.
+- `family` groups a set of charts together. It is used as the submenu of the dashboard.
-- `units` is the units for all the metrics attached to the chart.
+- `units` is the units for all the metrics attached to the chart.
#### dimensions
@@ -245,44 +246,44 @@ they are both in the same chart).
Netdata can send metrics to Prometheus from 3 data sources:
-- `as collected` or `raw` - this data source sends the metrics to Prometheus as they are collected. No conversion is
- done by Netdata. The latest value for each metric is just given to Prometheus. This is the most preferred method by
- Prometheus, but it is also the harder to work with. To work with this data source, you will need to understand how
- to get meaningful values out of them.
+- `as collected` or `raw` - this data source sends the metrics to Prometheus as they are collected. No conversion is
done by Netdata. The latest value for each metric is just given to Prometheus. This is the method preferred by
Prometheus, but it is also the hardest to work with. To work with this data source, you will need to understand how
+ to get meaningful values out of them.
- The format of the metrics is: `CONTEXT{chart="CHART",family="FAMILY",dimension="DIMENSION"}`.
+ The format of the metrics is: `CONTEXT{chart="CHART",family="FAMILY",dimension="DIMENSION"}`.
- If the metric is a counter (`incremental` in Netdata lingo), `_total` is appended the context.
+ If the metric is a counter (`incremental` in Netdata lingo), `_total` is appended to the context.
- Unlike Prometheus, Netdata allows each dimension of a chart to have a different algorithm and conversion constants
- (`multiplier` and `divisor`). In this case, that the dimensions of a charts are heterogeneous, Netdata will use this
- format: `CONTEXT_DIMENSION{chart="CHART",family="FAMILY"}`
+ Unlike Prometheus, Netdata allows each dimension of a chart to have a different algorithm and conversion constants
+ (`multiplier` and `divisor`). In cases where the dimensions of a chart are heterogeneous, Netdata will use this
+ format: `CONTEXT_DIMENSION{chart="CHART",family="FAMILY"}`
-- `average` - this data source uses the Netdata database to send the metrics to Prometheus as they are presented on
- the Netdata dashboard. So, all the metrics are sent as gauges, at the units they are presented in the Netdata
- dashboard charts. This is the easiest to work with.
+- `average` - this data source uses the Netdata database to send the metrics to Prometheus as they are presented on
the Netdata dashboard. So, all the metrics are sent as gauges, in the units used by the Netdata
dashboard charts. This is the easiest data source to work with.
- The format of the metrics is: `CONTEXT_UNITS_average{chart="CHART",family="FAMILY",dimension="DIMENSION"}`.
+ The format of the metrics is: `CONTEXT_UNITS_average{chart="CHART",family="FAMILY",dimension="DIMENSION"}`.
- When this source is used, Netdata keeps track of the last access time for each Prometheus server fetching the
- metrics. This last access time is used at the subsequent queries of the same Prometheus server to identify the
- time-frame the `average` will be calculated.
+ When this source is used, Netdata keeps track of the last access time for each Prometheus server fetching the
metrics. This last access time is used on subsequent queries from the same Prometheus server to identify the
time frame over which the `average` will be calculated.
- So, no matter how frequently Prometheus scrapes Netdata, it will get all the database data.
- To identify each Prometheus server, Netdata uses by default the IP of the client fetching the metrics.
-
- If there are multiple Prometheus servers fetching data from the same Netdata, using the same IP, each Prometheus
- server can append `server=NAME` to the URL. Netdata will use this `NAME` to uniquely identify the Prometheus server.
+ So, no matter how frequently Prometheus scrapes Netdata, it will get all the database data.
+ To identify each Prometheus server, Netdata uses the IP of the client fetching the metrics by default.
-- `sum` or `volume`, is like `average` but instead of averaging the values, it sums them.
+ If there are multiple Prometheus servers fetching data from the same Netdata, using the same IP, each Prometheus
+ server can append `server=NAME` to the URL. Netdata will use this `NAME` to uniquely identify the Prometheus server.
- The format of the metrics is: `CONTEXT_UNITS_sum{chart="CHART",family="FAMILY",dimension="DIMENSION"}`. All the
- other operations are the same with `average`.
+- `sum` or `volume` - like `average`, but instead of averaging the values, it sums them.
- To change the data source to `sum` or `as-collected` you need to provide the `source` parameter in the request URL.
- e.g.: `http://your.netdata.ip:19999/api/v1/allmetrics?format=prometheus&help=yes&source=as-collected`
+ The format of the metrics is: `CONTEXT_UNITS_sum{chart="CHART",family="FAMILY",dimension="DIMENSION"}`. All the
+ other operations are the same as with `average`.
- Keep in mind that early versions of Netdata were sending the metrics as: `CHART_DIMENSION{}`.
+ To change the data source to `sum` or `as-collected`, you need to provide the `source` parameter in the request URL.
+ e.g.: `http://your.netdata.ip:19999/api/v1/allmetrics?format=prometheus&help=yes&source=as-collected`
+
+ Keep in mind that early versions of Netdata sent the metrics as `CHART_DIMENSION{}`.
### Querying Metrics
@@ -369,7 +370,7 @@ functionality of Netdata this ignores any upstream hosts - so you should conside
```yaml
metrics_path: '/api/v1/allmetrics'
params:
- format: [prometheus_all_hosts]
+ format: [ prometheus_all_hosts ]
honor_labels: true
```
@@ -394,7 +395,9 @@ To save bandwidth, and because Prometheus does not use them anyway, `# TYPE` and
wanted, they can be re-enabled via `types=yes` and `help=yes`, e.g.
`/api/v1/allmetrics?format=prometheus&types=yes&help=yes`
-Note that if enabled, the `# TYPE` and `# HELP` lines are repeated for every occurrence of a metric, which goes against the Prometheus documentation's [specification for these lines](https://github.com/prometheus/docs/blob/master/content/docs/instrumenting/exposition_formats.md#comments-help-text-and-type-information).
+Note that if enabled, the `# TYPE` and `# HELP` lines are repeated for every occurrence of a metric, which goes against
+the Prometheus
+documentation's [specification for these lines](https://github.com/prometheus/docs/blob/master/content/docs/instrumenting/exposition_formats.md#comments-help-text-and-type-information).
### Names and IDs
@@ -413,8 +416,8 @@ The default is controlled in `exporting.conf`:
You can override it from Prometheus by appending to the URL:
-- `&names=no` to get IDs (the old behaviour)
-- `&names=yes` to get names
+- `&names=no` to get IDs (the old behaviour)
+- `&names=yes` to get names
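As a hedged sketch, the `exporting.conf` toggle behind this default looks like the following (assuming the `[prometheus:exporter]` section name used by the exporting engine):

```conf
[prometheus:exporter]
    # yes sends names, no sends IDs (the old behaviour)
    send names instead of ids = yes
```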
### Filtering metrics sent to Prometheus
@@ -425,7 +428,8 @@ Netdata can filter the metrics it sends to Prometheus with this setting:
send charts matching = *
```
-This settings accepts a space separated list of [simple patterns](/libnetdata/simple_pattern/README.md) to match the
+This setting accepts a space-separated list
+of [simple patterns](https://github.com/netdata/netdata/blob/master/libnetdata/simple_pattern/README.md) to match the
**charts** to be sent to Prometheus. Each pattern can use `*` as a wildcard, any number of times (e.g. `*a*b*c*` is valid).
Patterns starting with `!` give a negative match (e.g. `!*.bad users.* groups.*` will send all the users and groups
except the `bad` user and `bad` group). The order is important: the first match (positive or negative) left to right, is
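As a hedged example of the order-sensitive matching described above (the chart names are illustrative, and the `[prometheus:exporter]` section name is again assumed):

```conf
[prometheus:exporter]
    # send every system chart except system.uptime;
    # the negative pattern comes first because the first match wins
    send charts matching = !system.uptime system.*
```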
diff --git a/exporting/prometheus/remote_write/README.md b/exporting/prometheus/remote_write/README.md
index 22f91237fc..9bda02d49c 100644
--- a/exporting/prometheus/remote_write/README.md
+++ b/exporting/prometheus/remote_write/README.md
@@ -18,7 +18,7 @@ than 20 external storage providers for long-term archiving and further analysis.
To use the Prometheus remote write API with [storage
providers](https://prometheus.io/docs/operating/integrations/#remote-endpoints-and-storage), install
the [protobuf](https://developers.google.com/protocol-buffers/) and [snappy](https://github.com/google/snappy) libraries.
-Next, [reinstall Netdata](/packaging/installer/REINSTALL.md), which detects that the required libraries and utilities
+Next, [reinstall Netdata](https://github.com/netdata/netdata/blob/master/packaging/installer/REINSTALL.md), which detects that the required libraries and utilities
are now available.
## Configuration
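A minimal sketch of what an instance section for this connector can look like in `exporting.conf` — the destination and URL path are placeholders, and the option names are assumed from the exporting engine's conventions:

```conf
[prometheus_remote_write:my_instance]
    enabled = yes
    destination = example.domain:example_port
    # path of the remote write endpoint on the storage provider
    remote write URL path = /receive
```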