 docs/netdata-agent/sizing-netdata-agents/README.md | 4 ++--
 docs/store/change-metrics-storage.md               | 6 +++---
 2 files changed, 5 insertions(+), 5 deletions(-)
diff --git a/docs/netdata-agent/sizing-netdata-agents/README.md b/docs/netdata-agent/sizing-netdata-agents/README.md
index b945dc56c6..12b276849c 100644
--- a/docs/netdata-agent/sizing-netdata-agents/README.md
+++ b/docs/netdata-agent/sizing-netdata-agents/README.md
@@ -20,11 +20,11 @@ This is a map of how Netdata **features impact resources utilization**:
Lowering the data collection frequency from every second to every 2 seconds will make Netdata use half the CPU. So, CPU utilization is proportional to the data collection frequency.
-3. **Database Mode and Tiers**: By default Netdata stores metrics in 3 database tiers: high-resolution, mid-resolution, low-resolution. All database tiers are updated in parallel during data collection, and depending on the query duration Netdata may consult one or more tiers to optimize the resources required to satisfy it.
+3. **Database Mode and Tiers**: By default Netdata stores metrics in 3 database tiers: high-resolution, mid-resolution, and low-resolution. All database tiers are updated in parallel during data collection, and depending on the query duration Netdata may consult one or more tiers to optimize the resources required to satisfy it.
The number of database tiers affects the memory requirements of Netdata. Going from 3 tiers to 1 tier will make Netdata use half the memory. Of course, metrics retention will also be limited to 1 tier.
-4. **Machine Learning**: Byt default Netdata trains multiple machine learning models for every metric collected, to learn its behavior and detect anomalies. Machine Learning is a CPU intensive process and affects the overall CPU utilization of Netdata.
+4. **Machine Learning**: By default Netdata trains multiple machine learning models for every metric collected, to learn its behavior and detect anomalies. Machine Learning is a CPU intensive process and affects the overall CPU utilization of Netdata.
5. **Streaming Compression**: When using Netdata in Parent-Child configurations to create Metrics Centralization Points, the compression algorithm used greatly affects CPU utilization and bandwidth consumption.
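The tier and machine-learning knobs above map to `netdata.conf` settings. A minimal sketch of the relevant options (section and option names follow Netdata's `netdata.conf` conventions; verify against the reference configuration of your installed version):

```ini
# netdata.conf — reduce resource usage per the trade-offs described above
[db]
    # collect every 2 seconds instead of every second (halves CPU for collection)
    update every = 2
    # keep only the high-resolution tier (halves memory, limits retention)
    storage tiers = 1

[ml]
    # disable machine learning to save CPU (loses anomaly detection)
    enabled = no
```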
diff --git a/docs/store/change-metrics-storage.md b/docs/store/change-metrics-storage.md
index 133d6ca260..0d8022596c 100644
--- a/docs/store/change-metrics-storage.md
+++ b/docs/store/change-metrics-storage.md
@@ -49,7 +49,7 @@ the `update every iterations` of the tiers, to stay under the limit.
The exact retention that can be achieved by each tier depends on the number of metrics collected. The more
metrics collected, the shorter the retention that fits in a given size. The general rule is that Netdata needs
-about **1 byte per data point on disk for tier 0**, and **4 bytes per data point on disk for tier 1 and above**.
+about **1 byte per data point on disk for tier 0**, and **6 bytes per data point on disk for tier 1** and **16 bytes per data point on disk for tier 2 and above**.
So, for 1000 metrics collected per second and 256 MB for tier 0, Netdata will store about:
@@ -60,13 +60,13 @@ So, for 1000 metrics collected per second and 256 MB for tier 0, Netdata will st
At tier 1 (per minute):
```
-128MB on disk / 4 bytes per point / 1000 metrics => 32k points per metric / (24 hr * 60 min) ~= 22 days
+128MB on disk / 6 bytes per point / 1000 metrics => 21k points per metric / (24 hr * 60 min) ~= 15 days
```
At tier 2 (per hour):
```
-64MB on disk / 4 bytes per point / 1000 metrics => 16k points per metric / 24 hr per day ~= 2 years
+64MB on disk / 16 bytes per point / 1000 metrics => 4k points per metric / 24 hr per day ~= 0.5 years
```
Of course, doubling the metrics halves the retention. There are more factors that affect retention. The number
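The retention arithmetic in the updated examples can be sketched as a small calculation, using the revised bytes-per-point figures (1 byte for tier 0, 6 bytes for tier 1, 16 bytes for tier 2) and 1000 metrics collected per second:

```python
# Rough retention estimate per tier, using the bytes-per-point figures
# from the updated docs: 1 byte (tier 0), 6 bytes (tier 1), 16 bytes (tier 2+).
def retention_days(disk_bytes, bytes_per_point, metrics, points_per_day):
    """Days of retention for one tier: points that fit per metric,
    divided by how many points that tier stores per day."""
    points_per_metric = disk_bytes / bytes_per_point / metrics
    return points_per_metric / points_per_day

MB = 1024 * 1024
metrics = 1000

tier0 = retention_days(256 * MB, 1, metrics, 24 * 60 * 60)  # per-second points
tier1 = retention_days(128 * MB, 6, metrics, 24 * 60)       # per-minute points
tier2 = retention_days(64 * MB, 16, metrics, 24)            # per-hour points

print(f"tier 0: {tier0:.1f} days")        # ~3 days
print(f"tier 1: {tier1:.1f} days")        # ~15 days
print(f"tier 2: {tier2 / 365:.1f} years") # ~0.5 years
```

This reproduces the figures in the diff: about 21k points per metric at tier 1 (~15 days) and about 4k points per metric at tier 2 (~0.5 years).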