author     Joel Hans <joel@netdata.cloud>    2020-02-11 06:24:28 -0700
committer  GitHub <noreply@github.com>       2020-02-11 05:24:28 -0800
commit     64dbeb929e2437ba65e628e81a6f546b42dcd4fa (patch)
tree       1e848e6d0d29d621ee15b2eccad03796c2f293c0 /docs
parent     0a644338ddc5564ed890b62b7f0d10e84a9283ea (diff)
Docs: Promote DB engine/long-term metrics storage more heavily (#8031)
* Fixes to DOCS home and README
* Edit conf-guide and getting-started
* Add dbengine settings to map
* Fix tutorial and step-by-step
* Fix artifacts of old memory mode types
* A few tweaks
* Push a little harder on README
* Fix for Markos
Diffstat (limited to 'docs')
-rw-r--r--  docs/Performance.md                        37
-rw-r--r--  docs/configuration-guide.md                 8
-rw-r--r--  docs/getting-started.md                    44
-rw-r--r--  docs/step-by-step/step-00.md                5
-rw-r--r--  docs/step-by-step/step-01.md               15
-rw-r--r--  docs/step-by-step/step-04.md                4
-rw-r--r--  docs/tutorials/longer-metrics-storage.md    7
7 files changed, 61 insertions, 59 deletions
diff --git a/docs/Performance.md b/docs/Performance.md
index 8205c70ee0..435de0f820 100644
--- a/docs/Performance.md
+++ b/docs/Performance.md
@@ -45,11 +45,15 @@ Netdata runs with the lowest possible process priority, so even if 1000 users ar
To lower the CPU utilization of Netdata when clients are accessing the dashboard, set `web compression level = 1`, or disable web compression completely by setting `enable web responses gzip compression = no`. Both settings are in the `[web]` section.
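For instance, a minimal sketch of the `[web]` section with the compression level lowered (the value `1` is the lowest level, as noted above):

```conf
[web]
    # lowest gzip compression level, to reduce CPU usage while clients browse the dashboard
    web compression level = 1
```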
-## Monitoring a heavy loaded system
+## Monitoring a heavily loaded system
-Netdata, while running, does not depend on disk I/O (apart its log files and `access.log` is written with buffering enabled and can be disabled). Some plugins that need disk may stop and show gaps during heavy system load, but the Netdata daemon itself should be able to work and collect values from `/proc` and `/sys` and serve web clients accessing it.
+While running, Netdata does not depend much on disk I/O aside from writing to log files and the [database
+engine](../database/engine/README.md) "spilling" historical metrics to disk when it uses all its available RAM.
-Keep in mind that Netdata saves its database when it exits and loads it back when restarted. While it is running though, its DB is only stored in RAM and no I/O takes place for it.
+Under heavy system load, plugins that need disk may stop and show gaps, but the Netdata daemon itself should be able
+to keep working, collecting values from `/proc` and `/sys` and serving web clients accessing it.
+
+Keep in mind that Netdata saves its database when it exits, and loads it up again when started.
## Netdata process priority
@@ -173,29 +177,22 @@ Normally, you will not need them. To disable them, set:
access log = none
```
-### 5. Set memory mode to RAM
+### 5. Lower Netdata's memory usage
-Setting the memory mode to `ram` will disable loading and saving the round robin database. This will not affect anything while running Netdata, but it might be required if you have very limited storage available.
+You can change the amount of RAM and disk the database engine uses for all charts and their dimensions with the
+following settings in the `[global]` section of `netdata.conf`:
-```
+```conf
[global]
- memory mode = ram
+ # memory mode = dbengine
+ # page cache size = 32
+ # dbengine disk space = 256
```
-### 6. Lower memory requirements
-
-You can set the default size of the round robin database for all charts, using:
-
-```
-[global]
- history = 600
-```
-
-The units for history is `[global].update every` seconds. So if `[global].update every = 6` and `[global].history = 600`, you will have an hour of data ( 6 x 600 = 3.600 ), which will store 600 points per dimension, one every 6 seconds.
-
-Check also [Database](../database) for directions on calculating the size of the round robin database.
+See the [database engine documentation](../database/engine/README.md) or our [tutorial on metrics
+retention](tutorials/longer-metrics-storage.md) for more details on lowering the database engine's memory requirements.
-### 7. Disable gzip compression of responses
+### 6. Disable gzip compression of responses
Gzip compression of the web responses uses more CPU than the rest of Netdata. You can lower the compression level or disable gzip compression completely. You can disable it, like this:
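As a sketch of what disabling it looks like, using the `enable web responses gzip compression` setting from the `[web]` section mentioned earlier:

```conf
[web]
    # turn off gzip compression of web responses entirely
    enable web responses gzip compression = no
```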
diff --git a/docs/configuration-guide.md b/docs/configuration-guide.md
index 51361b6b76..0c6d934e49 100644
--- a/docs/configuration-guide.md
+++ b/docs/configuration-guide.md
@@ -53,10 +53,12 @@ it there.
### Change what I see
-#### Increase the metrics retention period
+#### Increase the long-term metrics retention period
-Increase `history` in [netdata.conf \[global\]](../daemon/config/README.md#global-section-options). Just ensure you
-understand [how much memory will be required](../database/).
+Increase the values for the `page cache size` and `dbengine disk space` settings in the [`[global]`
+section](../daemon/config/README.md#global-section-options) of `netdata.conf`. Read our tutorial on [increasing
+long-term metrics storage](tutorials/longer-metrics-storage.md) and the [memory requirements for the database
+engine](../database/engine/README.md#memory-requirements).
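As a hedged illustration, the two settings might be raised like this — the values below are examples only, not recommendations; size them against the memory requirements linked above:

```conf
[global]
    memory mode = dbengine
    # RAM used to cache recently-collected metrics, in MiB (default 32)
    page cache size = 64
    # disk space used for long-term metric storage, in MiB (default 256)
    dbengine disk space = 1024
```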
#### Reduce the data collection frequency
diff --git a/docs/getting-started.md b/docs/getting-started.md
index 9c7f1353dd..2d3ce2b968 100644
--- a/docs/getting-started.md
+++ b/docs/getting-started.md
@@ -3,8 +3,8 @@
Thanks for trying Netdata! In this getting started guide, we'll quickly walk you through the first steps you should take
after getting Netdata installed.
-Netdata can collect thousands of metrics in real-time without any configuration, but there are some valuable things to
-know to get the most out of Netdata based on your needs.
+Netdata can collect thousands of metrics in real time and use its database for long-term metrics storage without any
+configuration, but there are some valuable things to know to get the most out of Netdata based on your needs.
We'll skip right into some technical details, so if you're brand-new to monitoring the health and performance of systems
and applications, our [**step-by-step tutorial**](step-by-step/step-00.md) might be a better fit.
@@ -42,13 +42,30 @@ Once you save your changes, [restart Netdata](#start-stop-and-restart-netdata) t
**What's next?**:
-- [Change how long Netdata stores metrics](#change-how-long-netdata-stores-metrics) by either increasing the `history`
- option or switching to the database engine.
+- [Change how long Netdata stores metrics](#change-how-long-netdata-stores-metrics) by changing the `page cache size`
+ and `dbengine disk space` settings in `netdata.conf`.
- Move Netdata's dashboard to a [different port](https://docs.netdata.cloud/web/server/) or enable TLS/HTTPS
encryption.
- See all the `netdata.conf` options in our [daemon configuration documentation](../daemon/config/).
- Run your own [registry](../registry/README.md#run-your-own-registry).
+## Change how long Netdata stores metrics
+
+Netdata can store long-term, historical metrics out of the box. A custom database uses RAM to store recent metrics,
+ensuring dashboards and API queries are extremely responsive, while "spilling" historical metrics to disk. This
+configuration keeps RAM usage low while allowing for long-term, on-disk metrics storage.
+
+You can tweak this custom _database engine_ to store a much larger dataset than your system's available RAM,
+particularly if you allow Netdata to use slightly more RAM and disk space than the default configuration.
+
+Read our tutorial, [**Changing how long Netdata stores metrics**](../docs/tutorials/longer-metrics-storage.md), to learn
+more.
+
+**What's next?**:
+
+- Learn more about the [memory requirements for the database engine](../database/engine/README.md#memory-requirements)
+ to understand how much RAM/disk space you should commit to storing historical metrics.
+
## Collect data from more sources
When Netdata _starts_, it auto-detects dozens of **data sources**, such as database servers, web servers, and more. To
@@ -162,25 +179,6 @@ Find the `SEND_EMAIL="YES"` line and change it to `SEND_EMAIL="NO"`.
- See all the alarm options via the [health configuration reference](../health/REFERENCE.md).
- Add a new notification method, like [Slack](../health/notifications/slack/).
-## Change how long Netdata stores metrics
-
-By default, Netdata uses a custom database which uses both RAM and the disk to store metrics. Recent metrics are stored
-in the system's RAM to keep access fast, while historical metrics are "spilled" to disk to keep RAM usage low.
-
-This custom database, which we call the _database engine_, allows you to store a much larger dataset than your system's
-available RAM.
-
-If you're not sure whether you're using the database engine, or want to tweak the default settings to store even more
-historical metrics, check out our tutorial: [**Changing how long Netdata stores
-metrics**](../docs/tutorials/longer-metrics-storage.md).
-
-**What's next?**:
-
-- Learn more about the [memory requirements for the database engine](../database/engine/README.md#memory-requirements)
- to understand how much RAM/disk space you should commit to storing historical metrics.
-- Read up on the memory requirements of the [round-robin database](../database/), or figure out whether your system
- has KSM enabled, which can [reduce the default database's memory usage](../database/README.md#ksm) by about 60%.
-
## Monitoring multiple systems with Netdata
If you have Netdata installed on multiple systems, you can have them all appear in the **My nodes** menu at the top-left
diff --git a/docs/step-by-step/step-00.md b/docs/step-by-step/step-00.md
index f8f95aed09..f4959b8b0a 100644
--- a/docs/step-by-step/step-00.md
+++ b/docs/step-by-step/step-00.md
@@ -98,8 +98,9 @@ you choose. You can even monitor many systems from a single HTML file.
[Step 9. Long-term metrics storage](step-09.md)
-Want to store lots of real-time metrics from Netdata? Tweak our custom database to your heart's content. Want to take
-your Netdata metrics elsewhere? We're happy to help you archive data to Prometheus, MongoDB, TimescaleDB, and others.
+By default, Netdata can store lots of real-time metrics, but you can also tweak our custom database engine to your
+heart's content. Want to take your Netdata metrics elsewhere? We're happy to help you archive data to Prometheus,
+MongoDB, TimescaleDB, and others.
[Step 10. Set up a proxy](step-10.md)
diff --git a/docs/step-by-step/step-01.md b/docs/step-by-step/step-01.md
index fb959c07e8..91be268f73 100644
--- a/docs/step-by-step/step-01.md
+++ b/docs/step-by-step/step-01.md
@@ -39,9 +39,9 @@ we'll cover throughout this tutorial.
with hundreds of charts, is your main source of information about the health and performance of your systems/
applications. We designed the dashboard with anomaly detection and quick analysis in mind. We'll return to
dashboard-related topics in both [step 7](step-07.md) and [step 8](step-08.md).
-- **Netdata Cloud** is our SaaS toolkit that helps Netdata users monitor the health and performance of entire
- infrastructures, whether they are two or two thousand (or more!) systems. We'll cover Netdata Cloud in [step
- 3](step-03.md).
+- **Long-term metrics storage** by default. With our new database engine, you can store days, weeks, or months of
+ per-second historical metrics. Or you can archive metrics to another database, like MongoDB or Prometheus. We'll
+ cover all these options in [step 9](step-09.md).
- **No configuration necessary**. Without any configuration, you'll get thousands of real-time metrics and hundreds of
alarms designed by our community of sysadmin experts. But you _can_ configure Netdata in a lot of ways, some of
which we'll cover in [step 4](step-04.md).
@@ -53,9 +53,9 @@ we'll cover throughout this tutorial.
into how you can tune alarms, write your own alarm, and enable two types of notifications.
- **High-speed, low-resource collectors** that allow you to collect thousands of metrics every second while using only
a fraction of your system's CPU resources and a few MiB of RAM.
-- **Long-term metrics storage**. With our new database engine, you can store days, weeks, or months of per-second
- historical metrics. Or you can archive metrics to another database, like MongoDB or Prometheus. We'll cover all
- these options in [step 9](step-09.md).
+- **Netdata Cloud** is our SaaS toolkit that helps Netdata users monitor the health and performance of entire
+ infrastructures, whether they are two or two thousand (or more!) systems. We'll cover Netdata Cloud in [step
+ 3](step-03.md).
## Why you should use Netdata
@@ -82,7 +82,8 @@ For example, Netdata can [collect 100,000 metrics](https://github.com/netdata/ne
using only 9% of a single server-grade CPU core!
By decentralizing monitoring and emphasizing speed at every turn, Netdata helps you scale your health monitoring and
-performance troubleshooting to an infrastructure of every size. _And_ you get to keep per-second metrics.
+performance troubleshooting to an infrastructure of any size. _And_ you get to keep per-second metrics in long-term
+storage thanks to the database engine.
### Unlimited metrics
diff --git a/docs/step-by-step/step-04.md b/docs/step-by-step/step-04.md
index 529d31f3a0..31ab5c1706 100644
--- a/docs/step-by-step/step-04.md
+++ b/docs/step-by-step/step-04.md
@@ -117,7 +117,9 @@ Once you're done, restart Netdata and refresh the dashboard. Say hello to your r
![Animated GIF of editing the hostname option in
netdata.conf](https://user-images.githubusercontent.com/1153921/65470784-86e5b980-de21-11e9-87bf-fabec7989738.gif)
-Netdata has dozens upon dozens of options you can change. To see them all, read our [daemon configuration](../../daemon/config/).
+Netdata has dozens upon dozens of options you can change. To see them all, read our [daemon
+configuration](../../daemon/config/), or hop into our popular tutorial on [increasing long-term metrics
+storage](../tutorials/longer-metrics-storage.md).
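For reference, a sketch of the kind of edit the animation above walks through — the hostname value here is purely hypothetical:

```conf
[global]
    # the name this node shows on its dashboard
    hostname = staging-webserver-01
```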
## What's next?
diff --git a/docs/tutorials/longer-metrics-storage.md b/docs/tutorials/longer-metrics-storage.md
index fb64ca01ec..a511d74761 100644
--- a/docs/tutorials/longer-metrics-storage.md
+++ b/docs/tutorials/longer-metrics-storage.md
@@ -3,9 +3,10 @@
Netdata helps you collect thousands of system and application metrics every second, but what about storing them for the
long term?
-Many people think Netdata can only store about an hour's worth of real-time metrics, but that's just the default
-configuration today. With the right settings, Netdata is quite capable of efficiently storing hours or days worth of
-historical, per-second metrics without having to rely on a [backend](../../backends/).
+Many people think Netdata can only store about an hour's worth of real-time metrics, but that's simply not true any
+more. With the right settings, Netdata is quite capable of efficiently storing hours' or days' worth of historical,
+per-second metrics without having to rely on a [backend](../../backends/) or [exporting
+connector](../../exporting/README.md).
This tutorial gives two options for configuring Netdata to store more metrics. **We recommend the default [database
engine](#using-the-database-engine)**, but you can stick with or switch to the round-robin database if you prefer.
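As a rough sketch of those two options side by side — the values are illustrative and mirror the defaults mentioned elsewhere in this changeset:

```conf
[global]
    # Option 1: the database engine (recommended), sized by its RAM cache and on-disk space in MiB
    memory mode = dbengine
    page cache size = 32
    dbengine disk space = 256

    # Option 2: the classic round-robin database, sized by points kept per dimension
    # memory mode = save
    # history = 3600    # roughly one hour at the default one-second collection interval
```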