-rw-r--r--  DOCUMENTATION.md                           |  5
-rw-r--r--  README.md                                  | 44
-rw-r--r--  daemon/README.md                           |  6
-rw-r--r--  daemon/config/README.md                    |  7
-rw-r--r--  database/README.md                         | 54
-rw-r--r--  docs/Performance.md                        | 37
-rw-r--r--  docs/configuration-guide.md                |  8
-rw-r--r--  docs/getting-started.md                    | 44
-rw-r--r--  docs/step-by-step/step-00.md               |  5
-rw-r--r--  docs/step-by-step/step-01.md               | 15
-rw-r--r--  docs/step-by-step/step-04.md               |  4
-rw-r--r--  docs/tutorials/longer-metrics-storage.md   |  7
12 files changed, 114 insertions, 122 deletions
diff --git a/DOCUMENTATION.md b/DOCUMENTATION.md
index 69b14271bf..198660c588 100644
--- a/DOCUMENTATION.md
+++ b/DOCUMENTATION.md
@@ -2,7 +2,8 @@
**Netdata is real-time health monitoring and performance troubleshooting for systems and applications.** It helps you
instantly diagnose slowdowns and anomalies in your infrastructure with thousands of metrics, interactive visualizations,
-and insightful health alarms.
+and insightful health alarms. Plus, long-term storage comes ready out-of-the-box, so you can collect, monitor, and maintain
+your metrics in one insightful place.
## Navigating the Netdata documentation
@@ -56,6 +57,8 @@ Add as many nodes as you'd like!
**Advanced users**: For those who already understand how to access a Netdata dashboard and perform basic configuration,
feel free to see what's behind any of these other doors.
+- [Tutorial: Change how long Netdata stores metrics](docs/tutorials/longer-metrics-storage.md): Extend Netdata's
+ long-term metrics storage database by allowing Netdata to use more of your system's RAM and disk.
- [Netdata Behind Nginx](docs/Running-behind-nginx.md): Use an Nginx web server instead of Netdata's built-in server
to enable TLS, HTTPS, and basic authentication.
- [Add More Charts](docs/Add-more-charts-to-netdata.md): Enable new internal or external plugins and understand when
diff --git a/README.md b/README.md
index 3f11414e01..a0c7c6b286 100644
--- a/README.md
+++ b/README.md
@@ -13,15 +13,17 @@ PYTHON](https://img.shields.io/lgtm/grade/python/g/netdata/netdata.svg?logo=lgtm
---
-**Netdata** is **distributed, real-time, performance and health monitoring for systems and applications**. It is a
+Netdata is **distributed, real-time performance and health monitoring** for systems and applications. It is a
highly-optimized monitoring agent you install on all your systems and containers.
-Netdata provides **unparalleled insights**, **in real-time**, of everything happening on the systems it runs (including
-web servers, databases, applications), using **highly interactive web dashboards**. It can run autonomously, without any
-third-party components, or it can be integrated to existing monitoring toolchains (Prometheus, Graphite, OpenTSDB,
-Kafka, Grafana, and more).
+Netdata provides **unparalleled insights**, in **real-time**, of everything happening on the systems it's running on
+(including web servers, databases, applications), using **highly interactive web dashboards**.
-Netdata is **fast** and **efficient**, designed to permanently run on all systems (**physical** & **virtual** servers,
+A highly-efficient database **stores long-term historical metrics for days, weeks, or months**, all at 1-second
+granularity. Run this long-term storage autonomously, or integrate Netdata with your existing monitoring toolchains
+(Prometheus, Graphite, OpenTSDB, Kafka, Grafana, and more).
+
+Netdata is **fast** and **efficient**, designed to permanently run on all systems (**physical** and **virtual** servers,
**containers**, **IoT** devices), without disrupting their core function.
Netdata is **free, open-source software** and it currently runs on **Linux**, **FreeBSD**, and **MacOS**, along with
@@ -171,25 +173,26 @@ Netdata is a monitoring agent you install on all your systems. It is:
- A **metrics collector** for system and application metrics (including web servers, databases, containers, and much
more),
-- A custom **database engine** to store recent metrics in memory and "spill" historical metrics to disk for efficient
- long-term storage,
+- A **long-term metrics database** that stores recent metrics in memory and "spills" historical metrics to disk for
+ efficient long-term storage,
- A super fast, interactive, and modern **metrics visualizer** optimized for anomaly detection,
-- And an **alarms notification engine** - an advanced watchdog for detecting performance and availability issues
+- And an **alarms notification engine** for detecting performance and availability issues.
All the above, are packaged together in a very flexible, extremely modular, distributed application.
This is how Netdata compares to other monitoring solutions:
-| Netdata | others (open-source and commercial) |
-| :-------------------------------------------------------------- | :----------------------------------------------------- |
-| **High resolution metrics** (1s granularity) | Low resolution metrics (10s granularity at best) |
-| Monitors everything, **thousands of metrics per node** | Monitor just a few metrics |
-| UI is super fast, optimized for **anomaly detection** | UI is good for just an abstract view |
-| **Meaningful presentation**, to help you understand the metrics | You have to know the metrics before you start |
-| Install and get results **immediately** | Long preparation is required to get any useful results |
-| Use it for **troubleshooting** performance problems | Use them to get _statistics of past performance_ |
-| **Kills the console** for tracing performance issues | The console is always required for troubleshooting |
-| Requires **zero dedicated resources** | Require large dedicated resources |
+| Netdata | others (open-source and commercial) |
+| :-------------------------------------------------------------- | :--------------------------------------------------------------- |
+| **High resolution metrics** (1s granularity) | Low resolution metrics (10s granularity at best) |
+| Monitors everything, **thousands of metrics per node** | Monitor just a few metrics |
+| UI is super fast, optimized for **anomaly detection** | UI is good for just an abstract view |
+| **Long-term, autonomous storage** at one-second granularity | Centralized metrics in an expensive data lake at 10s granularity |
+| **Meaningful presentation**, to help you understand the metrics | You have to know the metrics before you start |
+| Install and get results **immediately** | Long preparation is required to get any useful results |
+| Use it for **troubleshooting** performance problems | Use them to get _statistics of past performance_ |
+| **Kills the console** for tracing performance issues | The console is always required for troubleshooting |
+| Requires **zero dedicated resources** | Require large dedicated resources |
Netdata is **open-source**, **free**, super **fast**, very **easy**, completely **open**, extremely **efficient**,
**flexible** and integrate-able.
@@ -282,7 +285,8 @@ This is what you should expect from Netdata:
- **1s granularity** - The highest possible resolution for all metrics.
- **Unlimited metrics** - Netdata collects all the available metrics—the more, the better.
- **1% CPU utilization of a single core** - It's unbelievably optimized.
-- **A few MB of RAM** - The low-memory round-robin option uses 25MB RAM, and you can [resize it](database/).
+- **A few MB of RAM** - The highly-efficient database engine stores per-second metrics in RAM and then "spills"
+ historical metrics to disk for long-term storage.
- **Minimal disk I/O** - While running, Netdata only writes historical metrics and reads `error` and `access` logs.
- **Zero configuration** - Netdata auto-detects everything, and can collect up to 10,000 metrics per server out of the
box.
diff --git a/daemon/README.md b/daemon/README.md
index 482265548a..877efd18c2 100644
--- a/daemon/README.md
+++ b/daemon/README.md
@@ -4,9 +4,9 @@
- You can start Netdata by executing it with `/usr/sbin/netdata` (the installer will also start it).
-- You can stop Netdata by killing it with `killall netdata`. You can stop and start Netdata at any point. Netdata
- saves on exit its round robbin database to `/var/cache/netdata` so that it will continue from where it stopped the
- last time.
+- You can stop Netdata by killing it with `killall netdata`. You can stop and start Netdata at any point. When
+ exiting, the [database engine](../database/engine/README.md) saves metrics to `/var/cache/netdata/dbengine/` so that
+ it can continue when started again.
Access to the web site, for all graphs, is by default on port `19999`, so go to:
diff --git a/daemon/config/README.md b/daemon/config/README.md
index 720a58f001..4f63f31085 100644
--- a/daemon/config/README.md
+++ b/daemon/config/README.md
@@ -46,7 +46,7 @@ Please note that your data history will be lost if you have modified `history` p
| glibc malloc arena max for plugins|`1`|See [Virtual memory](../#virtual-memory).|||
| glibc malloc arena max for Netdata|`1`|See [Virtual memory](../#virtual-memory).|||
| hostname|auto-detected|The hostname of the computer running Netdata.|||
-| history|`3996`|The number of entries the `netdata` daemon will by default keep in memory for each chart dimension. This setting can also be configured per chart. Check [Memory Requirements](../../database/#database) for more information.|||
+| history|`3996`| Used with `memory mode = save/map/ram/alloc`, not the default `memory mode = dbengine`. Defines the number of entries the `netdata` daemon will, by default, keep in memory for each chart dimension. This setting can also be configured per chart. Check [Memory Requirements](../../database/README.md#database) for more information. |||
| update every|`1`|The frequency in seconds, for data collection. For more information see [Performance](../../docs/Performance.md#performance).|||
| config directory|`/etc/netdata`|The directory configuration files are kept.|||
| stock config directory|`/usr/lib/netdata/conf.d`||||
@@ -56,7 +56,10 @@ Please note that your data history will be lost if you have modified `history` p
| lib directory|`/var/lib/netdata`|Contains the alarm log and the Netdata instance guid.|||
| home directory|`/var/cache/netdata`|Contains the db files for the collected metrics|||
| plugins directory|`"/usr/libexec/netdata/plugins.d" "/etc/netdata/custom-plugins.d"`|The directory plugin programs are kept. This setting supports multiple directories, space separated. If any directory path contains spaces, enclose it in single or double quotes.|||
-| memory mode|`save`|When set to `save` Netdata will save its round robin database on exit and load it on startup. When set to `map` the cache files will be updated in real time (check `man mmap` - do not set this on systems with heavy load or slow disks - the disks will continuously sync the in-memory database of Netdata). When set to `dbengine` it behaves similarly to `map` but with much better disk and memory efficiency, however, with higher overhead. When set to `ram` the round robin database will be temporary and it will be lost when Netdata exits. `none` disables the database at this host. This also disables health monitoring (there cannot be health monitoring without a database). host access prefix||This is used in docker environments where /proc, /sys, etc have to be accessed via another path. You may also have to set SYS_PTRACE capability on the docker for this work. Check [issue 43](https://github.com/netdata/netdata/issues/43).|
+| memory mode | `dbengine` | `dbengine`: The default for long-term metrics storage with efficient RAM and disk usage. Can be extended with `page cache size` and `dbengine disk space`. <br />`save`: Netdata will save its round robin database on exit and load it on startup. <br />`map`: Cache files will be updated in real-time. Not ideal for systems with high load or slow disks (check `man mmap`). <br />`ram`: The round-robin database will be temporary and it will be lost when Netdata exits. <br />`none`: Disables the database at this host, and disables health monitoring entirely, as that requires a database of metrics. |||
+| page cache size | `32` | Determines the amount of RAM, in MiB, that is dedicated to caching Netdata metric values. |||
+| dbengine disk space | `256` | Determines the amount of disk space, in MiB, that is dedicated to storing Netdata metric values and all related metadata that describes them. |||
+| host access prefix||This is used in docker environments where /proc, /sys, etc have to be accessed via another path. You may also have to set SYS_PTRACE capability on the docker for this to work. Check [issue 43](https://github.com/netdata/netdata/issues/43).|||
| memory deduplication (ksm)|`yes`|When set to `yes`, Netdata will offer its in-memory round robin database to kernel same page merging (KSM) for deduplication. For more information check [Memory Deduplication - Kernel Same Page Merging - KSM](../../database/#ksm)|||
| TZ environment variable|`:/etc/localtime`|Where to find the timezone|||
| timezone|auto-detected|The timezone retrieved from the environment variable|||
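Taken together, the three `dbengine`-related settings documented in the table above might look like this in the `[global]` section of `netdata.conf` (a minimal sketch using the documented defaults):

```conf
[global]
    # default mode for long-term metrics storage
    memory mode = dbengine
    # RAM, in MiB, dedicated to caching metric values
    page cache size = 32
    # disk space, in MiB, dedicated to metric values and their metadata
    dbengine disk space = 256
```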
diff --git a/database/README.md b/database/README.md
index 91a09e9850..8a08ef2380 100644
--- a/database/README.md
+++ b/database/README.md
@@ -1,60 +1,40 @@
# Database
-Although `netdata` does all its calculations using `long double`, it stores all values using a [custom-made 32-bit
-number](../libnetdata/storage_number/).
+Netdata is fully capable of long-term metrics storage, at per-second granularity, via its default database engine
+(`dbengine`). But to remain as flexible as possible, Netdata supports several memory modes for metrics storage:
-So, for each dimension of a chart, Netdata will need: `4 bytes for the value * the entries of its history`. It will not
-store any other data for each value in the time series database. Since all its values are stored in a time series with
-fixed step, the time each value corresponds can be calculated at run time, using the position of a value in the round
-robin database.
-
-The default history is 3.600 entries, thus it will need 14.4KB for each chart dimension. If you need 1.000 dimensions,
-they will occupy just 14.4MB.
-
-Of course, 3.600 entries is a very short history, especially if data collection frequency is set to 1 second. You will
-have just one hour of data.
-
-For a day of data and 1.000 dimensions, you will need: `86.400 seconds * 4 bytes * 1.000 dimensions = 345MB of RAM`.
-
-One option you have to lower this number is to use **[Memory Deduplication - Kernel Same Page Merging - KSM](#ksm)**.
-Another possibility is to use the **[Database Engine](engine/)**.
-
-## Memory modes
-
-Currently Netdata supports 6 memory modes:
+1. `dbengine`, (the default) data are in database files. The [Database Engine](engine/) works like a traditional
+ database. There is some amount of RAM dedicated to data caching and indexing and the rest of the data reside
+ compressed on disk. The number of history entries is not fixed in this case, but depends on the configured disk
+ space and the effective compression ratio of the data stored. This is the **only mode** that supports changing the
+ data collection update frequency (`update_every`) **without losing** the previously stored metrics. For more details
+ see [here](engine/).
-1. `ram`, data are purely in memory. Data are never saved on disk. This mode uses `mmap()` and supports [KSM](#ksm).
+2. `ram`, data are purely in memory. Data are never saved on disk. This mode uses `mmap()` and supports [KSM](#ksm).
-2. `save`, data are only in RAM while Netdata runs and are saved to / loaded from disk on Netdata
+3. `save`, data are only in RAM while Netdata runs and are saved to / loaded from disk on Netdata
restart. It also uses `mmap()` and supports [KSM](#ksm).
-3. `map`, data are in memory mapped files. This works like the swap. Keep in mind though, this will have a constant
+4. `map`, data are in memory mapped files. This works like the swap. Keep in mind though, this will have a constant
write on your disk. When Netdata writes data on its memory, the Linux kernel marks the related memory pages as dirty
and automatically starts updating them on disk. Unfortunately we cannot control how frequently this works. The Linux
kernel uses exactly the same algorithm it uses for its swap memory. Check below for additional information on
running a dedicated central Netdata server. This mode uses `mmap()` but does not support [KSM](#ksm).
-4. `none`, without a database (collected metrics can only be streamed to another Netdata).
+5. `none`, without a database (collected metrics can only be streamed to another Netdata).
-5. `alloc`, like `ram` but it uses `calloc()` and does not support [KSM](#ksm). This mode is the fallback for all
+6. `alloc`, like `ram` but it uses `calloc()` and does not support [KSM](#ksm). This mode is the fallback for all
others except `none`.
-6. `dbengine`, (the default) data are in database files. The [Database Engine](engine/) works like a traditional
- database. There is some amount of RAM dedicated to data caching and indexing and the rest of the data reside
- compressed on disk. The number of history entries is not fixed in this case, but depends on the configured disk
- space and the effective compression ratio of the data stored. This is the **only mode** that supports changing the
- data collection update frequency (`update_every`) **without losing** the previously stored metrics. For more details
- see [here](engine/).
-
You can select the memory mode by editing `netdata.conf` and setting:
```conf
[global]
- # dbengine (default), ram, save (the default if dbengine not available), map (swap like), none, alloc
- memory mode = dbengine
+ # dbengine (default), ram, save (the default if dbengine not available), map (swap like), none, alloc
+ memory mode = dbengine
- # the directory where data are saved
- cache directory = /var/cache/netdata
+ # the directory where data are saved
+ cache directory = /var/cache/netdata
```
## Running Netdata in embedded devices
diff --git a/docs/Performance.md b/docs/Performance.md
index 8205c70ee0..435de0f820 100644
--- a/docs/Performance.md
+++ b/docs/Performance.md
@@ -45,11 +45,15 @@ Netdata runs with the lowest possible process priority, so even if 1000 users ar
To lower the CPU utilization of Netdata when clients are accessing the dashboard, set `web compression level = 1`, or disable web compression completely by setting `enable web responses gzip compression = no`. Both settings are in the `[web]` section.
-## Monitoring a heavy loaded system
+## Monitoring a heavily-loaded system
-Netdata, while running, does not depend on disk I/O (apart its log files and `access.log` is written with buffering enabled and can be disabled). Some plugins that need disk may stop and show gaps during heavy system load, but the Netdata daemon itself should be able to work and collect values from `/proc` and `/sys` and serve web clients accessing it.
+While running, Netdata does not depend much on disk I/O aside from writing to log files and the [database
+engine](../database/engine/README.md) "spilling" historical metrics to disk when it uses all its available RAM.
-Keep in mind that Netdata saves its database when it exits and loads it back when restarted. While it is running though, its DB is only stored in RAM and no I/O takes place for it.
+Under heavy system load, plugins that need disk may stop and show gaps, but the Netdata daemon itself should still
+be able to collect values from `/proc` and `/sys` and serve web clients accessing it.
+
+Keep in mind that Netdata saves its database when it exits, and loads it up again when started.
## Netdata process priority
@@ -173,29 +177,22 @@ Normally, you will not need them. To disable them, set:
access log = none
```
-### 5. Set memory mode to RAM
+### 5. Lower Netdata's memory usage
-Setting the memory mode to `ram` will disable loading and saving the round robin database. This will not affect anything while running Netdata, but it might be required if you have very limited storage available.
+You can change the amount of RAM and disk the database engine uses for all charts and their dimensions with the
+following settings in the `[global]` section of `netdata.conf`:
-```
+```conf
[global]
- memory mode = ram
+ # memory mode = dbengine
+ # page cache size = 32
+ # dbengine disk space = 256
```
-### 6. Lower memory requirements
-
-You can set the default size of the round robin database for all charts, using:
-
-```
-[global]
- history = 600
-```
-
-The units for history is `[global].update every` seconds. So if `[global].update every = 6` and `[global].history = 600`, you will have an hour of data ( 6 x 600 = 3.600 ), which will store 600 points per dimension, one every 6 seconds.
-
-Check also [Database](../database) for directions on calculating the size of the round robin database.
+See the [database engine documentation](../database/engine/README.md) or our [tutorial on metrics
+retention](tutorials/longer-metrics-storage.md) for more details on lowering the database engine's memory requirements.
-### 7. Disable gzip compression of responses
+### 6. Disable gzip compression of responses
Gzip compression of the web responses uses more CPU than the rest of Netdata. You can lower the compression level or disable gzip compression completely. You can disable it, like this:
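Both compression settings mentioned earlier on this page live in the `[web]` section; a sketch of lowering or disabling compression might look like this:

```conf
[web]
    # keep compression but at the cheapest level...
    web compression level = 1
    # ...or disable compression of web responses entirely
    enable web responses gzip compression = no
```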
diff --git a/docs/configuration-guide.md b/docs/configuration-guide.md
index 51361b6b76..0c6d934e49 100644
--- a/docs/configuration-guide.md
+++ b/docs/configuration-guide.md
@@ -53,10 +53,12 @@ it there.
### Change what I see
-#### Increase the metrics retention period
+#### Increase the long-term metrics retention period
-Increase `history` in [netdata.conf \[global\]](../daemon/config/README.md#global-section-options). Just ensure you
-understand [how much memory will be required](../database/).
+Increase the values for the `page cache size` and `dbengine disk space` settings in the [`[global]`
+section](../daemon/config/README.md#global-section-options) of `netdata.conf`. Read our tutorial on [increasing
+long-term metrics storage](tutorials/longer-metrics-storage.md) and the [memory requirements for the database
+engine](../database/engine/README.md#memory-requirements).
#### Reduce the data collection frequency
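As a sketch (the values below are illustrative, not recommendations), extending retention means raising both settings in the `[global]` section of `netdata.conf`:

```conf
[global]
    memory mode = dbengine
    # the defaults are 32 MiB of RAM cache and 256 MiB of disk space;
    # larger values retain more historical metrics
    page cache size = 64
    dbengine disk space = 1024
```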
diff --git a/docs/getting-started.md b/docs/getting-started.md
index 9c7f1353dd..2d3ce2b968 100644
--- a/docs/getting-started.md
+++ b/docs/getting-started.md
@@ -3,8 +3,8 @@
Thanks for trying Netdata! In this getting started guide, we'll quickly walk you through the first steps you should take
after getting Netdata installed.
-Netdata can collect thousands of metrics in real-time without any configuration, but there are some valuable things to
-know to get the most out of Netdata based on your needs.
+Netdata can collect thousands of metrics in real-time and use its database for long-term metrics storage without any
+configuration, but there are some valuable things to know to get the most out of Netdata based on your needs.
We'll skip right into some technical details, so if you're brand-new to monitoring the health and performance of systems
and applications, our [**step-by-step tutorial**](step-by-step/step-00.md) might be a better fit.
@@ -42,13 +42,30 @@ Once you save your changes, [restart Netdata](#start-stop-and-restart-netdata) t
**What's next?**:
-- [Change how long Netdata stores metrics](#change-how-long-netdata-stores-metrics) by either increasing the `history`
- option or switching to the database engine.
+- [Change how long Netdata stores metrics](#change-how-long-netdata-stores-metrics) by changing the `page cache size`
+ and `dbengine disk space` settings in `netdata.conf`.
- Move Netdata's dashboard to a [different port](https://docs.netdata.cloud/web/server/) or enable TLS/HTTPS
encryption.
- See all the `netdata.conf` options in our [daemon configuration documentation](../daemon/config/).
- Run your own [registry](../registry/README.md#run-your-own-registry).
+## Change how long Netdata stores metrics
+
+Netdata can store long-term, historical metrics out of the box. A custom database uses RAM to store recent metrics,
+ensuring dashboards and API queries are extremely responsive, while "spilling" historical metrics to disk. This
+configuration keeps RAM usage low while allowing for long-term, on-disk metrics storage.
+
+You can tweak this custom _database engine_ to store a much larger dataset than your system's available RAM,
+particularly if you allow Netdata to use slightly more RAM and disk space than the default configuration.
+
+Read our tutorial, [**Changing how long Netdata stores metrics**](../docs/tutorials/longer-metrics-storage.md), to learn
+more.
+
+**What's next?**:
+
+- Learn more about the [memory requirements for the database engine](../database/engine/README.md#memory-requirements)
+ to understand how much RAM/disk space you should commit to storing historical metrics.
+
## Collect data from more sources
When Netdata _starts_, it auto-detects dozens of **data sources**, such as database servers, web servers, and more. To
@@ -162,25 +179,6 @@ Find the `SEND_EMAIL="YES"` line and change it to `SEND_EMAIL="NO"`.
- See all the alarm options via the [health configuration reference](../health/REFERENCE.md).
- Add a new notification method, like [Slack](../health/notifications/slack/).
-## Change how long Netdata stores metrics
-
-By default, Netdata uses a custom database which uses both RAM and the disk to store metrics. Recent metrics are stored
-in the system's RAM to keep access fast, while historical metrics are "spilled" to disk to keep RAM usage low.
-
-This custom database, which we call the _database engine_, allows you to store a much larger dataset than your system's
-available RAM.
-
-If you're not sure whether you're using the database engine, or want to tweak the default settings to store even more
-historical metrics, check out our tutorial: [**Changing how long Netdata stores
-metrics**](../docs/tutorials/longer-metrics-storage.md).
-
-**What's next?**:
-
-- Learn more about the [memory requirements for the database engine](../database/engine/README.md#memory-requirements)
- to understand how much RAM/disk space you should commit to storing historical metrics.
-- Read up on the memory requirements of the [round-robin database](../database/), or figure out whether your system
- has KSM enabled, which can [reduce the default database's memory usage](../database/README.md#ksm) by about 60%.
-
## Monitoring multiple systems with Netdata
If you have Netdata installed on multiple systems, you can have them all appear in the **My nodes** menu at the top-left
diff --git a/docs/step-by-step/step-00.md b/docs/step-by-step/step-00.md
index f8f95aed09..f4959b8b0a 100644
--- a/docs/step-by-step/step-00.md
+++ b/docs/step-by-step/step-00.md
@@ -98,8 +98,9 @@ you choose. You can even monitor many systems from a single HTML file.
[Step 9. Long-term metrics storage](step-09.md)
-Want to store lots of real-time metrics from Netdata? Tweak our custom database to your heart's content. Want to take
-your Netdata metrics elsewhere? We're happy to help you archive data to Prometheus, MongoDB, TimescaleDB, and others.
+By default, Netdata can store lots of real-time metrics, but you can also tweak our custom database engine to your
+heart's content. Want to take your Netdata metrics elsewhere? We're happy to help you archive data to Prometheus,
+MongoDB, TimescaleDB, and others.
[Step 10. Set up a proxy](step-10.md)
diff --git a/docs/step-by-step/step-01.md b/docs/step-by-step/step-01.md
index fb959c07e8..91be268f73 100644
--- a/docs/step-by-step/step-01.md
+++ b/docs/step-by-step/step-01.md
@@ -39,9 +39,9 @@ we'll cover throughout this tutorial.
with hundreds of charts, is your main source of information about the health and performance of your systems/
applications. We designed the dashboard with anomaly detection and quick analysis in mind. We'll return to
dashboard-related topics in both [step 7](step-07.md) and [step 8](step-08.md).
-- **Netdata Cloud** is our SaaS toolkit that helps Netdata users monitor the health and performance of entire
- infrastructures, whether they are two or two thousand (or more!) systems. We'll cover Netdata Cloud in [step
- 3](step-03.md).
+- **Long-term metrics storage** by default. With our new database engine, you can store days, weeks, or months of
+ per-second historical metrics. Or you can archive metrics to another database, like MongoDB or Prometheus. We'll
+ cover all these options in [step 9](step-09.md).
- **No configuration necessary**. Without any configuration, you'll get thousands of real-time metrics and hundreds of
alarms designed by our community of sysadmin experts. But you _can_ configure Netdata in a lot of ways, some of
which we'll cover in [step 4](step-04.md).
@@ -53,9 +53,9 @@ we'll cover throughout this tutorial.
into how you can tune alarms, write your own alarm, and enable two types of notifications.
- **High-speed, low-resource collectors** that allow you to collect thousands of metrics every second while using only
a fraction of your system's CPU resources and a few MiB of RAM.
-- **Long-term metrics storage**. With our new database engine, you can store days, weeks, or months of per-second
- historical metrics. Or you can archive metrics to another database, like MongoDB or Prometheus. We'll cover all
- these options in [step 9](step-09.md).
+- **Netdata Cloud** is our SaaS toolkit that helps Netdata users monitor the health and performance of entire
+ infrastructures, whether they are two or two thousand (or more!) systems. We'll cover Netdata Cloud in [step
+ 3](step-03.md).
## Why you should use Netdata
@@ -82,7 +82,8 @@ For example, Netdata can [collect 100,000 metrics](https://github.com/netdata/ne
using only 9% of a single server-grade CPU core!
By decentralizing monitoring and emphasizing speed at every turn, Netdata helps you scale your health monitoring and
-performance troubleshooting to an infrastructure of every size. _And_ you get to keep per-second metrics.
+performance troubleshooting to an infrastructure of every size. _And_ you get to keep per-second metrics in long-term
+storage thanks to the database engine.
### Unlimited metrics
diff --git a/docs/step-by-step/step-04.md b/docs/step-by-step/step-04.md
index 529d31f3a0..31ab5c1706 100644
--- a/docs/step-by-step/step-04.md
+++ b/docs/step-by-step/step-04.md
@@ -117,7 +117,9 @@ Once you're done, restart Netdata and refresh the dashboard. Say hello to your r
![Animated GIF of editing the hostname option in
netdata.conf](https://user-images.githubusercontent.com/1153921/65470784-86e5b980-de21-11e9-87bf-fabec7989738.gif)
-Netdata has dozens upon dozens of options you can change. To see them all, read our [daemon configuration](../../daemon/config/).
+Netdata has dozens upon dozens of options you can change. To see them all, read our [daemon
+configuration](../../daemon/config/), or hop into our popular tutorial on [increasing long-term metrics
+storage](../tutorials/longer-metrics-storage.md).
## What's next?
diff --git a/docs/tutorials/longer-metrics-storage.md b/docs/tutorials/longer-metrics-storage.md
index fb64ca01ec..a511d74761 100644
--- a/docs/tutorials/longer-metrics-storage.md
+++ b/docs/tutorials/longer-metrics-storage.md
@@ -3,9 +3,10 @@
Netdata helps you collect thousands of system and application metrics every second, but what about storing them for the
long term?
-Many people think Netdata can only store about an hour's worth of real-time metrics, but that's just the default
-configuration today. With the right settings, Netdata is quite capable of efficiently storing hours or days worth of
-historical, per-second metrics without having to rely on a [backend](../../backends/).
+Many people think Netdata can only store about an hour's worth of real-time metrics, but that's simply not true any
+more. With the right settings, Netdata is quite capable of efficiently storing hours or days worth of historical,
+per-second metrics without having to rely on a [backend](../../backends/) or [exporting
+connector](../../exporting/README.md).
This tutorial gives two options for configuring Netdata to store more metrics. **We recommend the default [database
engine](#using-the-database-engine)**, but you can stick with or switch to the round-robin database if you prefer.
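For those who prefer the round-robin database over `dbengine`, a minimal sketch of that alternative (using the documented `history` default) would be:

```conf
[global]
    # fall back to the round-robin database instead of the database engine
    memory mode = save
    # entries kept in memory per chart dimension; 3996 is the documented default
    history = 3996
```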