author    Promise Akpan <akpanpromise@hotmail.com>  2019-08-15 12:06:39 +0100
committer Chris Akritidis <43294513+cakrit@users.noreply.github.com>  2019-08-15 13:06:39 +0200
commit    f5006d51e8caf9148d393eb68d53dc9fcd28b7b6 (patch)
tree      03b757236d6b45e46813a4a875c77dc775e5f896 /libnetdata/adaptive_resortable_list
parent    69172fd57472df75d877f43de2dcc693c48ab5c0 (diff)

Fix Markdown Lint warnings (#6664)

* make remark access all directories
* detailed fix after autofix by remark lint
* cross check autofix for this set of files
* crosscheck more files
* crosschecking and small fixes
* crosscheck autofixed md files
Diffstat (limited to 'libnetdata/adaptive_resortable_list')
 libnetdata/adaptive_resortable_list/README.md | 43
 1 file changed, 21 insertions(+), 22 deletions(-)
diff --git a/libnetdata/adaptive_resortable_list/README.md b/libnetdata/adaptive_resortable_list/README.md
index 23c036bb55..0bb99b81cb 100644
--- a/libnetdata/adaptive_resortable_list/README.md
+++ b/libnetdata/adaptive_resortable_list/README.md
@@ -1,4 +1,3 @@
-
# Adaptive Re-sortable List (ARL)
This library allows Netdata to read a series of `name - value` pairs
@@ -19,9 +18,9 @@ linked list will adapt at the next iteration.
During initialization (just once), the caller:
-- calls `arl_create()` to create the ARL
+- calls `arl_create()` to create the ARL
-- calls `arl_expect()` multiple times to register the expected keywords
+- calls `arl_expect()` multiple times to register the expected keywords
The library will call the `processor()` function (given to
`arl_create()`), for each expected keyword found.
@@ -34,16 +33,16 @@ Each `name` keyword may have a different `processor()` (by calling
For each iteration through the data source, the caller:
-- calls `arl_begin()` to initiate a data collection iteration.
- This is to be called just ONCE every time the source is re-evaluated.
+- calls `arl_begin()` to initiate a data collection iteration.
+ This is to be called just ONCE every time the source is re-evaluated.
-- calls `arl_check()` for each entry read from the file.
+- calls `arl_check()` for each entry read from the file.
### Cleanup
When the caller exits:
-- calls `arl_free()` to destroy this and free all memory.
+- calls `arl_free()` to destroy this and free all memory.
### Performance
@@ -68,22 +67,22 @@ In the following test we used alternative methods to process, **1M times**,
a data source like `/proc/meminfo`, already tokenized, in memory,
to extract the same number of expected metrics:
-test|code|string comparison|number parsing|duration
-:---:|:---:|:---:|:---:|:---:|
-1|if-else-if-else-if|`strcmp()`|`strtoull()`|4630.337 ms
-2|nested loops|inline `simple_hash()` and `strcmp()`|`strtoull()`|1597.481 ms
-3|nested loops|inline `simple_hash()` and `strcmp()`|`str2ull()`|923.523 ms
-4|if-else-if-else-if|inline `simple_hash()` and `strcmp()`|`strtoull()`| 854.574 ms
-5|if-else-if-else-if|statement expression `simple_hash()` and `strcmp()`|`strtoull()`|912.013 ms
-6|if-continue|inline `simple_hash()` and `strcmp()`|`strtoull()`|842.279 ms
-7|if-else-if-else-if|inline `simple_hash()` and `strcmp()`|`str2ull()`|602.837 ms
-8|ARL|ARL|`strtoull()`|350.360 ms
-9|ARL|ARL|`str2ull()`|157.237 ms
+|test|code|string comparison|number parsing|duration|
+|:--:|:--:|:---------------:|:------------:|:------:|
+|1|if-else-if-else-if|`strcmp()`|`strtoull()`|4630.337 ms|
+|2|nested loops|inline `simple_hash()` and `strcmp()`|`strtoull()`|1597.481 ms|
+|3|nested loops|inline `simple_hash()` and `strcmp()`|`str2ull()`|923.523 ms|
+|4|if-else-if-else-if|inline `simple_hash()` and `strcmp()`|`strtoull()`|854.574 ms|
+|5|if-else-if-else-if|statement expression `simple_hash()` and `strcmp()`|`strtoull()`|912.013 ms|
+|6|if-continue|inline `simple_hash()` and `strcmp()`|`strtoull()`|842.279 ms|
+|7|if-else-if-else-if|inline `simple_hash()` and `strcmp()`|`str2ull()`|602.837 ms|
+|8|ARL|ARL|`strtoull()`|350.360 ms|
+|9|ARL|ARL|`str2ull()`|157.237 ms|
Compared to unoptimized code (test No 1: 4.6sec):
-
- - before ARL Netdata was using test No **7** with hashing and a custom `str2ull()` to achieve 602ms.
- - the current ARL implementation is test No **9** that needs only 157ms (29 times faster vs unoptimized code, about 4 times faster vs optimized code).
+
+- before ARL Netdata was using test No **7** with hashing and a custom `str2ull()` to achieve 602ms.
+- the current ARL implementation is test No **9** that needs only 157ms (29 times faster vs unoptimized code, about 4 times faster vs optimized code).
[Check the source code of this test](../../tests/profile/benchmark-value-pairs.c).
@@ -92,4 +91,4 @@ Compared to unoptimized code (test No 1: 4.6sec):
Do not use ARL if a name/keyword may appear more than once in the
source data.
-[![analytics](https://www.google-analytics.com/collect?v=1&aip=1&t=pageview&_s=1&ds=github&dr=https%3A%2F%2Fgithub.com%2Fnetdata%2Fnetdata&dl=https%3A%2F%2Fmy-netdata.io%2Fgithub%2Flibnetdata%2Fadaptive_resortable_list%2FREADME&_u=MAC~&cid=5792dfd7-8dc4-476b-af31-da2fdb9f93d2&tid=UA-64295674-3)]()
+[![analytics](https://www.google-analytics.com/collect?v=1&aip=1&t=pageview&_s=1&ds=github&dr=https%3A%2F%2Fgithub.com%2Fnetdata%2Fnetdata&dl=https%3A%2F%2Fmy-netdata.io%2Fgithub%2Flibnetdata%2Fadaptive_resortable_list%2FREADME&_u=MAC~&cid=5792dfd7-8dc4-476b-af31-da2fdb9f93d2&tid=UA-64295674-3)](<>)