diff options

| author | Costa Tsaousis <costa@netdata.cloud> | 2023-02-15 21:16:29 +0200 |
|---|---|---|
| committer | GitHub <noreply@github.com> | 2023-02-15 21:16:29 +0200 |
| commit | d2daa19bf53c9a8cb781c8e50a86b9961b0503a9 (patch) | |
| tree | 8d8b744138c28e010a24456aee55447d31a719bd /collectors/slabinfo.plugin | |
| parent | 37a918ae2bc996fc881ab60042ae5a8f434f4c52 (diff) | |
JSON internal API, IEEE754 base64/hex streaming, weights endpoint optimization (#14493)
* first work on standardizing json formatting
* renamed old grouping to time_grouping and added group_by
* add dummy functions to enable compilation
* buffer json api work
* jsonwrap opening with buffer_json_X() functions
* cleanup
* storage for quotes
* optimize buffer printing for both numbers and strings
* removed ; from define
* contexts json generation using the new json functions
* fix buffer overflow at unit test
* weights endpoint using new json api
* fixes to weights endpoint
* check buffer overflow on all buffer functions
* do synchronous queries for weights
* buffer_flush() now resets json state too
* content type typedef
* print double values that are above the max 64-bit value
* str2ndd() can now parse values above UINT64_MAX
* faster number parsing by avoiding double calculations as much as possible
* faster number parsing
* faster hex parsing
* accurate printing and parsing of double values, even for very large numbers that cannot fit in 64-bit integers
* full printing and parsing without using library functions - and related unit tests
* added IEEE754 streaming capability to enable streaming of double values in hex
* streaming and replication to transfer all values in hex
* use our own str2ndd for set2
* remove subnormal check from ieee
* base64 encoding for numbers, instead of hex
* when increasing double precision, also make sure the fractional number printed is aligned to the wanted precision
* str2ndd_encoded() parses all encoding formats, including integers
* prevent uninitialized use
* /api/v1/info using the new json API
* Fix error when compiling with --disable-ml
* Remove redundant 'buffer_unittest' declaration
* Fix formatting
* Fix formatting
* Fix formatting
* fix buffer unit test
* apps.plugin using the new JSON API
* make sure the metrics registry does not accept negative timestamps
* do not allow pages with negative timestamps to be loaded from db files; do not accept pages with negative timestamps in the cache
* Fix more formatting
---------
Co-authored-by: Stelios Fragkakis <52996999+stelfrag@users.noreply.github.com>
Diffstat (limited to 'collectors/slabinfo.plugin')
| -rw-r--r-- | collectors/slabinfo.plugin/slabinfo.c | 26 |

1 file changed, 13 insertions, 13 deletions
```diff
diff --git a/collectors/slabinfo.plugin/slabinfo.c b/collectors/slabinfo.plugin/slabinfo.c
index 52b53cd20d..25b96e386e 100644
--- a/collectors/slabinfo.plugin/slabinfo.c
+++ b/collectors/slabinfo.plugin/slabinfo.c
@@ -171,19 +171,19 @@ struct slabinfo *read_file_slabinfo() {
         char *name = procfile_lineword(ff, l, 0);
         struct slabinfo *s = get_slabstruct(name);

-        s->active_objs        = str2uint64_t(procfile_lineword(ff, l, 1));
-        s->num_objs           = str2uint64_t(procfile_lineword(ff, l, 2));
-        s->obj_size           = str2uint64_t(procfile_lineword(ff, l, 3));
-        s->obj_per_slab       = str2uint64_t(procfile_lineword(ff, l, 4));
-        s->pages_per_slab     = str2uint64_t(procfile_lineword(ff, l, 5));
-
-        s->tune_limit         = str2uint64_t(procfile_lineword(ff, l, 7));
-        s->tune_batchcnt      = str2uint64_t(procfile_lineword(ff, l, 8));
-        s->tune_shared_factor = str2uint64_t(procfile_lineword(ff, l, 9));
-
-        s->data_active_slabs  = str2uint64_t(procfile_lineword(ff, l, 11));
-        s->data_num_slabs     = str2uint64_t(procfile_lineword(ff, l, 12));
-        s->data_shared_avail  = str2uint64_t(procfile_lineword(ff, l, 13));
+        s->active_objs        = str2uint64_t(procfile_lineword(ff, l, 1), NULL);
+        s->num_objs           = str2uint64_t(procfile_lineword(ff, l, 2), NULL);
+        s->obj_size           = str2uint64_t(procfile_lineword(ff, l, 3), NULL);
+        s->obj_per_slab       = str2uint64_t(procfile_lineword(ff, l, 4), NULL);
+        s->pages_per_slab     = str2uint64_t(procfile_lineword(ff, l, 5), NULL);
+
+        s->tune_limit         = str2uint64_t(procfile_lineword(ff, l, 7), NULL);
+        s->tune_batchcnt      = str2uint64_t(procfile_lineword(ff, l, 8), NULL);
+        s->tune_shared_factor = str2uint64_t(procfile_lineword(ff, l, 9), NULL);
+
+        s->data_active_slabs  = str2uint64_t(procfile_lineword(ff, l, 11), NULL);
+        s->data_num_slabs     = str2uint64_t(procfile_lineword(ff, l, 12), NULL);
+        s->data_shared_avail  = str2uint64_t(procfile_lineword(ff, l, 13), NULL);

         uint32_t memperslab = s->pages_per_slab * slab_pagesize;
         // Internal fragmentation: loss per slab, due to objects not being a multiple of pagesize
```