author    Chris Akritidis <43294513+cakrit@users.noreply.github.com>  2018-11-12 21:34:59 +0100
committer Costa Tsaousis <costa@tsaousis.gr>  2018-11-12 22:34:59 +0200
commit    3aae8f6c2ceea7f74cde61a2044aa50c5f252fae (patch)
tree      781412f323c0183c520e4ff1c9fdac53a900b463 /collectors/proc.plugin
parent    a29b433526f56fe983d88dc4e86585bad65916c4 (diff)
Htmldoc (#4607)
* First html documentation debug set
* Test 2
* Relative path changed
* Updated comments
* Cleanup, installation draft added
* fixes
* test
* test
* test
* First html documentation debug set
* Test 2
* Relative path changed
* Updated comments
* Cleanup, installation draft added
* fixes
* test
* test
* test
* First set of major cleanup/deduplication
* 2nd major cleanup
* update getting started structure
* Cleanup in using netdata
* Final cleanup/deduplication
* Added initial CONTRIBUTING.md, updated some info related to contributing on the orchestrators
* Removed Why-Netdata (included in new README in master), added link to CONTRIBUTING.md
* First html documentation debug set
* Updated Makefile.am to ignore the new md and htmldoc generation files
* Removing files from rebase
* First html documentation debug set
* Test 2
* Relative path changed
* Updated comments
* Cleanup, installation draft added
* fixes
* test
* test
* test
* First html documentation debug set
* Test 2
* Relative path changed
* Updated comments
* Cleanup, installation draft added
* test
* test
* First set of major cleanup/deduplication
* 2nd major cleanup
* update getting started structure
* Cleanup in using netdata
* Final cleanup/deduplication
* Added initial CONTRIBUTING.md, updated some info related to contributing on the orchestrators
* Removed Why-Netdata (included in new README in master), added link to CONTRIBUTING.md
* First html documentation debug set
* Updated Makefile.am to ignore the new md and htmldoc generation files
* Removing files from rebase
* Fixed Makefile.am
* Same line header and badges
* Fixed broken link
* CPU monitoring is in apps plugin
* Removed obsolete files
* Remove obsolete files
* - Make the Health API part of health/README.md new file web/api/health/README.md
  - Make installer/LAUNCH.md part of deamon/README.md
  - Move installer/MAINTAINERS.md to packaging/maintainers/README.md
  - Move installer/DOCKER.md to docker/README.md
  - Move system/README.md to daemon/config/README.md
  - Move web/CUSTOM-DASHBOARDS.md to web/gui/custom/README.md
  - Move web/CONFLUENCE-DASHBOARDS.md to web/gui/confluence/README.md
* Resolve codacy issue $(..) syntax instead of `..`
* Fix following warnings and add svgs to the data_structures/README.md
  - CHANGELOG.md
  - CODE_OF_CONDUCT.md
  - CONTRIBUTORS.md
  - REDISTRIBUTED.md
  - diagrams/data_structures/README.md
  - docker/README.md
  WARNING - Documentation file 'README.md' contains a link to 'collectors/plugins.d' which does not exist in the documentation directory.
  WARNING - Documentation file 'README.md' contains a link to 'collectors/statsd.plugin' which does not exist in the documentation directory.
  WARNING - Documentation file 'CONTRIBUTING.md' contains a link to 'web/CUSTOM-DASHBOARDS.md' which does not exist in the documentation directory.
  WARNING - Documentation file 'CONTRIBUTING.md' contains a link to 'web/CONFLUENCE-DASHBOARDS.md' which does not exist in the documentation directory.
* Wrong urls in data_structures/README.md svgs
* Fix svg URLs number 2
* Modify the first line of the main README.md, to enable proper static html generation. Executed after copying the file to htmldoc/src
* Added back Why Netdata
* Fixed link to registry in Why-Netdata.md
* Added Why-Netdata to buildyaml and to Makefile.am
* Replaced http links causing mixed content warnings
* Made buildhtml ignore the directory node_modules created by Netlify
* Corrected CONTRIBUTING.MD to CONTRIBUTING.md
Diffstat (limited to 'collectors/proc.plugin')
-rw-r--r--  collectors/proc.plugin/README.md | 137
1 file changed, 69 insertions(+), 68 deletions(-)
diff --git a/collectors/proc.plugin/README.md b/collectors/proc.plugin/README.md
index 9d444f3d03..f3b2ecb4da 100644
--- a/collectors/proc.plugin/README.md
+++ b/collectors/proc.plugin/README.md
@@ -1,4 +1,3 @@
-
# proc.plugin
- `/proc/net/dev` (all network interfaces for all their values)
@@ -25,7 +24,7 @@
---
-# Monitoring Disks
+## Monitoring Disks
> Live demo of disk monitoring at: **[http://london.netdata.rocks](https://registry.my-netdata.io/#menu_disk)**
@@ -33,75 +32,45 @@ Performance monitoring for Linux disks is quite complicated. The main reason is
Fortunately, the Linux kernel provides many metrics that can give deep insight into what our disks are doing. The kernel measures all these metrics on all layers of storage: **virtual disks**, **physical disks** and **partitions of disks**.
-Let's see the list of metrics provided by netdata for each of the above:
-
-### I/O bandwidth/s (kb/s)
-
-The amount of data transferred from and to the disk.
-
-### I/O operations/s
-
-The number of I/O operations completed.
-
-### Queued I/O operations
-
-The number of currently queued I/O operations. For traditional disks that execute commands one after another, one of them is being run by the disk and the rest are just waiting in a queue.
-
-### Backlog size (time in ms)
-
-The expected duration of the currently queued I/O operations.
-
-### Utilization (time percentage)
-
-The percentage of time the disk was busy with something. This is a very interesting metric, since for most disks, that execute commands sequentially, **this is the key indication of congestion**. A sequential disk that is 100% of the available time busy, has no time to do anything more, so even if the bandwidth or the number of operations executed by the disk is low, its capacity has been reached.
-
-Of course, for newer disk technologies (like fusion cards) that are capable to execute multiple commands in parallel, this metric is just meaningless.
-
-### Average I/O operation time (ms)
-
-The average time for I/O requests issued to the device to be served. This includes the time spent by the requests in queue and the time spent servicing them.
-
-### Average I/O operation size (kb)
-
-The average amount of data of the completed I/O operations.
-
-### Average Service Time (ms)
-
-The average service time for completed I/O operations. This metric is calculated using the total busy time of the disk and the number of completed operations. If the disk is able to execute multiple parallel operations the reporting average service time will be misleading.
-
-### Merged I/O operations/s
-
-The Linux kernel is capable of merging I/O operations. So, if two requests to read data from the disk are adjacent, the Linux kernel may merge them to one before giving them to disk. This metric measures the number of operations that have been merged by the Linux kernel.
-
-### Total I/O time
-
-The sum of the duration of all completed I/O operations. This number can exceed the interval if the disk is able to execute multiple I/O operations in parallel.
-
-### Space usage
-
-For mounted disks, netdata will provide a chart for their space, with 3 dimensions:
-
-1. free
-2. used
-3. reserved for root
-
-### inode usage
-
-For mounted disks, netdata will provide a chart for their inodes (number of file and directories), with 3 dimensions:
-
-1. free
-2. used
-3. reserved for root
-
----
-
-## disk names
+### Monitored disk metrics
+
+- I/O bandwidth/s (kb/s)
+ The amount of data transferred from and to the disk.
+- I/O operations/s
+ The number of I/O operations completed.
+- Queued I/O operations
+ The number of currently queued I/O operations. For traditional disks that execute commands one after another, one of them is being run by the disk and the rest are just waiting in a queue.
+- Backlog size (time in ms)
+ The expected duration of the currently queued I/O operations.
+- Utilization (time percentage)
+  The percentage of time the disk was busy with something. This is a very interesting metric, since for most disks that execute commands sequentially, **it is the key indication of congestion**. A sequential disk that is busy 100% of the available time has no room to do anything more, so even if the bandwidth or the number of operations executed by the disk is low, its capacity has been reached (see the sketch after this list).
+  Of course, for newer disk technologies (like fusion cards) that are capable of executing multiple commands in parallel, this metric is meaningless.
+- Average I/O operation time (ms)
+ The average time for I/O requests issued to the device to be served. This includes the time spent by the requests in queue and the time spent servicing them.
+- Average I/O operation size (kb)
+ The average amount of data of the completed I/O operations.
+- Average Service Time (ms)
+  The average service time for completed I/O operations. This metric is calculated using the total busy time of the disk and the number of completed operations. If the disk is able to execute multiple operations in parallel, the reported average service time will be misleading.
+- Merged I/O operations/s
+  The Linux kernel is capable of merging I/O operations. So, if two requests to read data from the disk are adjacent, the Linux kernel may merge them into one before handing them to the disk. This metric measures the number of operations that have been merged by the Linux kernel.
+- Total I/O time
+ The sum of the duration of all completed I/O operations. This number can exceed the interval if the disk is able to execute multiple I/O operations in parallel.
+- Space usage
+ For mounted disks, netdata will provide a chart for their space, with 3 dimensions:
+ 1. free
+ 2. used
+ 3. reserved for root
+- inode usage
+  For mounted disks, netdata will provide a chart for their inodes (number of files and directories), with 3 dimensions:
+ 1. free
+ 2. used
+ 3. reserved for root
+
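Most of the performance metrics above are derived from `/proc/diskstats`. As a rough, illustrative sketch only (netdata's real collector is the C code in this plugin), here is how a few of them can be computed from two samples of that file. The field positions follow the kernel's `/proc/diskstats` documentation; the device name `sda`, the helper names and the one-second interval are assumptions made for this example.

```
#!/usr/bin/env python3
# Illustrative sketch only -- not netdata's implementation (which is C, in this
# plugin). Field positions follow the kernel's /proc/diskstats documentation.
import time

SECTOR_BYTES = 512  # /proc/diskstats counts 512-byte sectors

def read_diskstats():
    stats = {}
    with open("/proc/diskstats") as f:
        for line in f:
            fields = line.split()
            if len(fields) < 14:
                continue  # skip devices without the full set of counters
            v = list(map(int, fields[3:14]))
            stats[fields[2]] = {
                "reads": v[0], "reads_merged": v[1], "read_sectors": v[2],
                "writes": v[4], "writes_merged": v[5], "write_sectors": v[6],
                "in_progress": v[8],   # queued I/O operations (instantaneous)
                "busy_ms": v[9],       # time the device was busy doing I/O
                "backlog_ms": v[10],   # weighted time: the "backlog" metric
            }
    return stats

def disk_rates(device="sda", interval=1.0):  # device name is just an example
    a = read_diskstats()[device]
    time.sleep(interval)
    b = read_diskstats()[device]
    ops = (b["reads"] - a["reads"]) + (b["writes"] - a["writes"])
    sectors = (b["read_sectors"] - a["read_sectors"]) + (b["write_sectors"] - a["write_sectors"])
    busy_ms = b["busy_ms"] - a["busy_ms"]
    return {
        "io_bandwidth_kb_s": sectors * SECTOR_BYTES / 1024.0 / interval,
        "io_operations_s": ops / interval,
        "utilization_pct": 100.0 * busy_ms / (interval * 1000.0),
        "avg_service_time_ms": busy_ms / ops if ops else 0.0,
        "queued_operations": b["in_progress"],
    }

if __name__ == "__main__":
    print(disk_rates("sda"))
```

A disk whose `utilization_pct` stays at 100% while `io_operations_s` looks low is exactly the congested sequential disk described above.
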
+### disk names
netdata will automatically set the name of disks on the dashboard from the mount point where they are mounted (of course, only while they are mounted). Changes in mount points are not currently detected; you will have to restart netdata for the name of a disk to change.
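As a minimal illustration of that idea (not netdata's code; the function name and the example device are hypothetical), a block device can be resolved to the mount point it would be named after by scanning `/proc/self/mounts`:

```
# Illustrative only: map a block device to its mount point via /proc/self/mounts.
def mount_point_of(device):
    with open("/proc/self/mounts") as f:
        for line in f:
            dev, mount_point = line.split()[:2]
            if dev == device:
                # mount points containing spaces are octal-escaped (e.g. "\040")
                return mount_point.encode().decode("unicode_escape")
    return None  # not mounted, so no mount-point-based name is possible

print(mount_point_of("/dev/sda1"))  # e.g. "/boot", or None if not mounted
```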
----
-
-## performance metrics
+### performance metrics
By default netdata will enable monitoring metrics only when they are not zero. If they are constantly zero they are ignored. Metrics that start having values after netdata is started will be detected and their charts will be automatically added to the dashboard (a refresh of the dashboard is needed for them to appear, though).
@@ -198,3 +167,35 @@ So, to disable performance metrics for all loop devices you could add `performan
performance metrics for disks with major 7 = no
```
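The hunk above only shows the tail of that example. For context, such a line is expected to sit under the `/proc/diskstats` section of `netdata.conf`; a sketch of how the full snippet might look (section name assumed from the proc plugin's configuration layout):

```
[plugin:proc:/proc/diskstats]
    # loop devices use major number 7
    performance metrics for disks with major 7 = no
```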
+## Linux Anti-DDoS
+
+![image6](https://cloud.githubusercontent.com/assets/2662304/14253733/53550b16-fa95-11e5-8d9d-4ed171df4735.gif)
+
+---
+SYNPROXY is a TCP SYN packet proxy. It can be used to protect any TCP server (like a web server) from SYN floods and similar DDoS attacks.
+
+SYNPROXY is a netfilter module in the Linux kernel (since version 3.12). It is optimized to handle millions of packets per second, utilizing all available CPUs without any concurrency locking between connections.
+
+The net effect of this is that the real servers will not notice any change during the attack. Valid TCP connections will pass through and be served, while the attack will be stopped at the firewall.
+
+To use SYNPROXY on your firewall, please follow our setup guides:
+
+ - **[Working with SYNPROXY](https://github.com/firehol/firehol/wiki/Working-with-SYNPROXY)**
+ - **[Working with SYNPROXY and traps](https://github.com/firehol/firehol/wiki/Working-with-SYNPROXY-and-traps)**
+
+### Real-time monitoring of Linux Anti-DDoS
+
+netdata is able to monitor in real-time (per second updates) the operation of the Linux Anti-DDoS protection.
+
+It visualizes 4 charts (the underlying counters are sketched after this list):
+
+1. TCP SYN Packets received on ports operated by SYNPROXY
+2. TCP Cookies (valid, invalid, retransmits)
+3. Connections Reopened
+4. Entries used
+
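These charts are built from the per-CPU SYNPROXY counters the kernel exposes in `/proc/net/stat/synproxy` (hexadecimal values, one row per CPU; the file exists only when the SYNPROXY module is in use). The sketch below is only an illustration of reading that file, not netdata's collector; the column names are taken from the file's own header line rather than hard-coded, since they depend on the kernel.

```
#!/usr/bin/env python3
# Illustrative sketch: sum the per-CPU SYNPROXY counters. Per-second rates for
# the charts above would come from the deltas of these totals between samples.
def read_synproxy(path="/proc/net/stat/synproxy"):
    with open(path) as f:
        header = f.readline().split()          # column names from the kernel
        totals = dict.fromkeys(header, 0)
        for line in f:                         # one line of hex counters per CPU
            for name, value in zip(header, line.split()):
                totals[name] += int(value, 16)
    return totals

if __name__ == "__main__":
    # expected columns (per the file's header): entries, syn_received,
    # cookie_invalid, cookie_valid, cookie_retrans, conn_reopened
    print(read_synproxy())
```
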
+Example image:
+
+![ddos](https://cloud.githubusercontent.com/assets/2662304/14398891/6016e3fc-fdf0-11e5-942b-55de6a52cb66.gif)
+
+See Linux Anti-DDoS in action at: **[netdata demo site (with SYNPROXY enabled)](http://london.my-netdata.io/#netfilter_synproxy)**