Age | Commit message | Author |
|
* Add library to encode/decode Gorilla compressed buffers.
* Code cleanup + fix high-level API for 64 bits.
* Add scripts to build benchmarks and fuzzer.
* Fix CMake builds
* Add license note.
* Return 0 instead of false literal.
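The Gorilla scheme this library implements can be sketched roughly as follows: each sample is XORed with its predecessor, so identical values compress to a single bit and nearby values share leading/trailing zero bits. This toy helper (the name `gorilla_delta` is illustrative, not this library's API) derives only the XOR delta and its zero-bit counts, leaving out the actual bit-packing layer:

```c
#include <stdint.h>
#include <string.h>

// Sketch only: derive the XOR delta between consecutive doubles and
// count its leading/trailing zero bits, as Gorilla-style encoders do.
static uint64_t gorilla_delta(double prev, double cur,
                              int *leading, int *trailing) {
    uint64_t a, b;
    memcpy(&a, &prev, sizeof(a));               // safe type-punning
    memcpy(&b, &cur, sizeof(b));
    uint64_t x = a ^ b;
    *leading = *trailing = 0;
    if (!x)
        return 0;                               // identical sample: 1 bit
    for (uint64_t t = x; !(t & 1ull); t >>= 1)
        (*trailing)++;
    for (uint64_t t = x; !(t & (1ull << 63)); t <<= 1)
        (*leading)++;
    return x;                                   // meaningful bits inside
}
```

A real encoder would then store only the meaningful bits between the leading and trailing zero runs.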
|
stale plugins; streaming improvements (#15113)
* add information about streaming connections to /api/v2/nodes; reset defer time when sender or receivers connect or disconnect
* make each streaming destination respect its SSL settings
* do not send SSL traffic over a non-SSL connection
* keep track of outgoing streaming connection attempts
* retry SSL reads when SSL_read() returns SSL_ERROR_WANT_READ
* Revert "retry SSL reads when SSL_read() returns SSL_ERROR_WANT_READ"
This reverts commit 14c858677c6f2d3b08c94f298e2f45ecdb74c801.
* cleanup SSL connections properly
* initialize SSL in rpt before takeover
* sender should free SSL when talking to a non-SSL destination
* do not shutdown SSL when receiver exits
* restore operation of SIGCHLD when the reaper is not enabled
* create an fgets function that checks for data and times out
* work on error handling of plugins exiting
* remove newlines from logs
* global call to waitid(), caching the result for netdata_pclose() to process
* receiver tid
* parser timeouts in 2 minutes instead of 10
* fix crash when UUID is NULL in SQLite
* abstract sqlite3 parsing for uuid and text
* write proper ssl errors on read and write
* fix for SSL_ERROR_WANT_RETRY_VERIFY
* SSL WANT per function
* unified SSL error logging
* fix compilation warning
* additional logging about parser cleanup
* streaming parser should call the pluginsd parser cleanup
* SSL error handling work
* SSL initialization unification
* check for pending data when receiving SSL response with timeout
* macro to check if an SSL connection has been established
* remove SSL_pending()
* check for SSL macros
* use SSL_peek() to find if there is a response
* SSL renames
* more SSL renames & cleanup
* rrdpush ssl connection function
* abstract all SSL functions into security.c
* keep track of SSL connections and always attempt to use SSL read/write when on SSL connection
* signal openssl to skip certificate validation when configured to do so
* better SSL error handling and logging
* SSL code cleanup
* SSL retry on SSL_connect and SSL_accept
* SSL provide default return value for old compilers
* SSL read/write functions emulate system read/write functions
* fix receive/send timeout and switch from SSL_peek() to SSL_pending()
* remove SSL_pending()
* removed sender auto-retry and debug info for initial receiver response
* ssl skip certificate verification config for web server
* ssl errors log ip and port of the peer
* keep ssl with web_client for its whole lifetime
* thread safe socket peers to text
* use error_limit() for common ssl errors
* cleanup
* more cleanup
* coverity fixes
* ssl error logs include both local and remote ip/port info
* remove obsolete code
|
* Free active_alerts and claim_id
* Free digest context
* Release memory unconditionally
|
JudyLTablesGen could be replaced with a pregenerated file
with 32-bit / 64-bit specific macros. Fixes #14185
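Selecting a pregenerated table by word width can be sketched like this; the macro name is illustrative, not the one used in the fix:

```c
#include <stdint.h>

// Pick the 32-bit or 64-bit pregenerated table variant from the pointer
// width at compile time, replacing the JudyLTablesGen build step.
#if UINTPTR_MAX == 0xffffffffffffffffull
#define JUDY_TABLE_BITS 64
#else
#define JUDY_TABLE_BITS 32
#endif
```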
|
zlib compulsory
|
Introduces h2o based web server as an alternative
|
* Dump config
* Add charcat and rawcat
* Build incoming response in a buffer
* Allocate a buffer to hold the command response so that we don't have a 4K char limit
* Add a dumpconfig command to output the current netdata.conf
* Remove -W dumpconfig for now
* Fix typo
* Improve help message
|
* pull aclk schemas
* resolve capas
* handle checkpoints and removed from health
* build with disable-cloud
* codacy 1
* misc changes
* one more char in hash
* free buffer
* change topic
* misc fixes
* skip removed alert variables
* change hash functions
* use create and destroy for compatibility with older openssl
|
* initial webrtc setup
* missing files
* rewrite of webrtc integration
* initialization and cleanup of webrtc connections
* make it compile without libdatachannel
* add missing webrtc_initialize() function when webrtc is not enabled
* make c++17 optional
* add build/m4/ax_compiler_vendor.m4
* add ax_cxx_compile_stdcxx.m4
* added new m4 files to makefile.am
* id all webrtc connections
* show warning when webrtc is disabled
* fixed message
* moved all webrtc error checking inside webrtc.cpp
* working webrtc connection establishment and cleanup
* remove obsolete code
* rewrote webrtc code in C to remove dependency for c++17
* fixed left-over reference
* detect binary and text messages
* minor fix
* naming of webrtc threads
* added webrtc configuration
* fix for thread_get_name_np()
* smaller web_client memory footprint
* universal web clients cache
* free web clients every 100 uses
* webrtc is now enabled by default only when compiled with internal checks
* webrtc responses to /api/ requests, including LZ4 compression
* fix for binary and text messages
* web_client_cache is now global
* unification of the internal web server API, for web requests, aclk request, webrtc requests
* more cleanup and unification of web client timings
* fixed compiler warnings
* update sent and received bytes
* eliminated of almost all big buffers in web client
* registry now uses the new json generation
* cookies are now an array; fixed redirects
* fix redirects, again
* write cookies directly to the header buffer, eliminating the need for cookie structures in web client
* reset the has_cookies flag
* gathered all web client cleanup to one function
* fixes redirects
* added summary.globals in /api/v2/data response
* ars to arc in /api/v2/data
* properly handle host impersonation
* set the context of mem.numa_nodes
|
/api/v2/weights nonzero output
|
* /api/v2/weights, points key renamed to result
* /api/v2/weights, add node ids in response
* /api/v2/data remove NONZERO flag when all dimensions are zero and fix MIN/MAX grouping and statistics
* /api/v2/data expose view.dimensions.sts{}
* /api/v2 endpoints expose agents and additional info per node, that is needed to unify cloud responses
* /api/v2 nodes output now includes the duration of time spent per node
* jsonwrap view object renames and cleanup
* rework of the statistics returned by the query engine
* swagger work
* swagger work
* more swagger work
* updated swagger json
* added the remaining of the /api/v2 endpoints to swagger
* point.ar has been renamed point.arp
* updated weights endpoint
* fix compilation warnings
|
replace uuid_compare() with uuid_memcmp() everywhere where the order is not important but equality is
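The reasoning behind this swap: uuid_compare() (libuuid) establishes an ordering, but when only equality matters a plain memcmp() over the 16 raw bytes is enough. A standalone sketch (a local typedef avoids the libuuid dependency here; netdata's helper is uuid_memcmp()):

```c
#include <stdbool.h>
#include <string.h>

// UUIDs are 16 raw bytes; equality needs no ordering semantics.
typedef unsigned char raw_uuid_t[16];

static inline bool uuid_eq(const raw_uuid_t a, const raw_uuid_t b) {
    return memcmp(a, b, sizeof(raw_uuid_t)) == 0;
}
```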
|
* query timestamps are now pre-determined and alignment on timestamps is guaranteed
* turn internal_fatal() to internal_error() to investigate the issue
* handle query when no data exist in the db
* check for non NULL dict when running dictionary garbage collect
* support API v2 requests via ACLK
* add nodes detailed information to /api/v2/nodes
* fixed keys and added dummy nodes for completeness
* added nodes_hard_hash, alerts_hard_hash, alerts_soft_hash; started building a nodes status object to reflect the current status of a node
* make sure replication does not double count charts that are already being replicated
* expose min and max in sts structures
* added view_minimum_value and view_maximum_value; percentage calculation is now an additional pass on the data, removed from formatters; absolute value calculation is now done at the query level, removed from formatters
* respect trimming in percentage calculation; updated swagger
* api/v2/weights preparative work to support multi-node queries - still single node though
* multi-node /api/v2/weights endpoint, supporting all the filtering parameters of /api/v2/data
* when passing the raw option, the query exposes the hidden dimensions
* fix compilation issues on older systems
* the query engine now calculates per dimension min, max, sum, count, anomaly count
* use the macro to calculate storage point anomaly rate
* weights endpoint exposing version hashes
* weights method=value shows min, max, average, sum, count, anomaly count, anomaly rate
* query: expose RESET flag; do not add the same point multiple times to the aggregated point
* weights: more compact output
* weights requests can be interrupted
* all /api/v2 requests can be interrupted and timeout
* allow relative timestamps in weights
* fix macos compilation warnings
* Revert "fix macos compilation warnings"
This reverts commit 8a1d24e41e9b58de566ac59f0c4b1c465bcc0592.
* /api/v2/data group-by now works on dimension names, not ids
* /api/v2/weights does not query metrics without retention and new output format
* /api/v2/weights value and anomaly queries do context queries when contexts are filtered; query timeout is now always in ms
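The timestamp pre-determination in the first bullet can be sketched as snapping the query window to multiples of the collection interval up front, so every output point lands on a predictable boundary instead of deriving timestamps while walking the data. The function name is illustrative:

```c
#include <time.h>

// Snap a timestamp down to the nearest multiple of update_every.
static time_t align_to_update_every(time_t t, time_t update_every) {
    return t - (t % update_every);
}
```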
|
* bundle libyaml
* remove comment
* .github/workflows/coverity.yml
* add to coverity and tests
* add deps
* add to netdata.spec.in
* add to contrib/debian/control
* remove extra gentoo libyaml
|
* expose the order of group by
* key renames in json wrapper v2
* added group by context and group by units
* added view_average_values
* fix for view_average_values when percentage is specified
* option group-by-labels is enabling the exposure of all the labels that are used for each of the final grouped dimensions
* when executing group by queries, allocate one dimension data at a time - not all of them
* respect hidden dimensions
* cancel running data query on socket error
* use poll to detect socket errors
* use POLLRDHUP to detect half closed connections
* make sure POLLRDHUP is available
* do not destroy aral-by-size arals
* completed documentation of /api/v2/data.
* moved min, max back to view; updated swagger yaml and json
* default format for /api/v2/data is json2
|
* max web request size to 64KB
* fix the request too big message
* increase max request reading tries to 100
* support for bigger web requests
* add "avg" as a shortcut for "average" to both group by aggregation and time aggregation; discard the last partial points of a query in play mode, up to max update every; group by hidden dimensions too
* better implementation for partial data trimming
* added group_by=selected to return only one dimension for all selected metrics
* fix acceptance of group_by=selected
* passing option "raw" disables partial data trimming
* remove obsolete option "plan"; use "debug"
* fix view.min and view.max calculation - there were 2 bugs: a) min and max were reset for every row and b) min and max were corrupted by GBC and AR printing
* per row annotations
* added time column to point annotations
* disable caching for /api/v2/contexts responses
* added api format json2 that returns an array for each points, having all the point values and annotations in them
* work on swagger about /api/v2
* prevent infinite loop
* cleanup and swagger work
* allow negative simple pattern expressions to work as expected
* do not lookup in the dictionary empty names
* garbage collect dictionaries
* make query_target allocate less aggressively; queries fill the remaining points with nulls
* reusable query ops to save memory on huge queries
* move parts of query plans into query ops to save query target memory
* remove storage engine from query metric tiers, to save memory, and recalculate it when it is needed
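The negative simple-pattern semantics from "allow negative simple pattern expressions to work as expected" can be sketched as: patterns are checked in order, '!' negates, and the first match decides. Real simple patterns also support '*' wildcards; exact string equality keeps this sketch short, and the function name is illustrative:

```c
#include <stdbool.h>
#include <stddef.h>
#include <string.h>

// First matching pattern wins; a '!'-prefixed match excludes the name.
static bool simple_pattern_matches(const char **patterns, size_t n,
                                   const char *name) {
    for (size_t i = 0; i < n; i++) {
        const char *p = patterns[i];
        bool negative = (*p == '!');
        if (negative)
            p++;                     // skip the '!' prefix
        if (strcmp(p, name) == 0)
            return !negative;        // first match decides
    }
    return false;                    // nothing matched: excluded
}
```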
|
* preparation for /api/v2/contexts
* working /api/v2/contexts
* add anomaly rate information in all statistics; when sum-count is requested, return sums and counts instead of averages
* minor fix
* query target now accurately counts hosts, contexts, instances, dimensions, metrics
* cleanup /api/v2/contexts
* full text search with /api/v2/contexts
* simple patterns now support the option to search ignoring case
* full text search API with /api/v2/q
* simple pattern execution optimization
* do not show q when not given
* full text search accounting
* separated /api/v2/nodes from /api/v2/contexts
* fix ssv queries for group_by
* count query instances queried and failed per context and host
* split rrdcontext.c to multiple files
* add query totals
* fix anomaly rate calculation; provide "ni" for indexing hosts
* do not generate zero valued members
* faster calculation of anomaly rate; by just summing integers for each db point and doing math once for every generated point
* fix typo when printing dimensions totals
* added option minify to remove spaces and newlines from JSON output
* send instance ids and names when they differ
* do not add in query target dimensions, instances, contexts and hosts for which there is no retention in the current timeframe
* fix for the previous + renames and code cleanup
* when a dimension is filtered, include in the response all the other dimensions that are selectable
* do not add nodes that do not have retention in the current window
* move selection of dimensions to query_dimension_add(), instead of query_metric_add()
* increase the pre-processing capacity of queries
* generate instance fqdn ids and names only when they are needed
* provide detailed statistics about tiers retention, queries, points, update_every
* late allocation of query dimensions
* cleanup
* more cleanup
* support for annotations per displayed point, RESET and PARTIAL
* new type annotations
* if a chart is not linked to contexts and it is collected, link it when it is collected
* make ML run reentrant
* make ML rrdr query synchronous
* optimize replication memory allocation of replication_sort_entry
* change units to percentage, when requesting a coefficient of variation, or a percentage query
* initialize replication before starting main threads
* properly decrement no room requests counter
* propagate the non-zero flag to group-by
* do the same by avoiding the extra loop
* respect non-zero in all dimension arrays
* remove dictionary garbage collection from dictionary_entries() and dictionary_version()
* be more verbose when jv2 indexing is postponed
* prevent infinite loop
* use hidden dimensions even when dimensions pattern is unset
* traverse hosts using dictionaries
* fix dictionary unittests
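The anomaly-rate optimization above (sum integers per db point, do the math once per generated point) can be sketched like this; struct and function names are illustrative, not netdata's actual API:

```c
#include <stddef.h>

// Per db point only an integer counter is bumped; the floating-point
// division runs once per generated output point.
typedef struct {
    size_t anomalous;                // db points flagged anomalous
    size_t total;                    // db points aggregated so far
} anomaly_acc_t;

static inline void anomaly_acc_add(anomaly_acc_t *a, int is_anomalous) {
    a->anomalous += is_anomalous ? 1 : 0;       // integer work only
    a->total++;
}

static inline double anomaly_acc_rate(const anomaly_acc_t *a) {
    // called once per generated point, not once per db point
    return a->total ? (double)a->anomalous * 100.0 / (double)a->total : 0.0;
}
```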
|
* make the title metadata the H1
* Update collectors/python.d.plugin/zscores/README.md
* Update libnetdata/ebpf/README.md
* Update ml/README.md
* Update libnetdata/string/README.md
---------
Co-authored-by: Chris Akritidis <43294513+cakrit@users.noreply.github.com>
|
* set to wait receive/send when ssl returns wait read/write
* compare the bytes
* set to normal to prevent going into stream mode with incomplete request
* disable wait send
|
* Fix coverity 383236: Resource leak
* Fix coverity 382915 : Logically dead code
* Fix coverity 379133 : Division or modulo by float zero
* Fix coverity 382783 : Copy into fixed size buffer
* Fix coverity 381151 : Missing unlock
* Fix coverity 381903 : Dereference after null check
|
optimization (#14493)
* first work on standardizing json formatting
* renamed old grouping to time_grouping and added group_by
* add dummy functions to enable compilation
* buffer json api work
* jsonwrap opening with buffer_json_X() functions
* cleanup
* storage for quotes
* optimize buffer printing for both numbers and strings
* removed ; from define
* contexts json generation using the new json functions
* fix buffer overflow at unit test
* weights endpoint using new json api
* fixes to weights endpoint
* check buffer overflow on all buffer functions
* do synchronous queries for weights
* buffer_flush() now resets json state too
* content type typedef
* print double values that are above the max 64-bit value
* str2ndd() can now parse values above UINT64_MAX
* faster number parsing by avoiding double calculations as much as possible
* faster number parsing
* faster hex parsing
* accurate printing and parsing of double values, even for very large numbers that cannot fit in 64bit integers
* full printing and parsing without using library functions - and related unit tests
* added IEEE754 streaming capability to enable streaming of double values in hex
* streaming and replication to transfer all values in hex
* use our own str2ndd for set2
* remove subnormal check from ieee
* base64 encoding for numbers, instead of hex
* when increasing double precision, also make sure the fractional number printed is aligned to the wanted precision
* str2ndd_encoded() parses all encoding formats, including integers
* prevent uninitialized use
* /api/v1/info using the new json API
* Fix error when compiling with --disable-ml
* Remove redundant 'buffer_unittest' declaration
* Fix formatting
* Fix formatting
* Fix formatting
* fix buffer unit test
* apps.plugin using the new JSON API
* make sure the metrics registry does not accept negative timestamps
* do not allow pages with negative timestamps to be loaded from db files; do not accept pages with negative timestamps in the cache
* Fix more formatting
---------
Co-authored-by: Stelios Fragkakis <52996999+stelfrag@users.noreply.github.com>
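The IEEE754 transfer idea above (stream double values in hex so nothing is lost to decimal rounding) can be sketched as shipping the exact 64 bits of a double as text; a later bullet switches the encoding to base64. Function names are illustrative, not netdata's actual API:

```c
#include <stdint.h>
#include <stdio.h>
#include <stdlib.h>
#include <string.h>

// Serialize a double's exact bit pattern as 16 hex digits.
static void double_to_hex(double v, char *out, size_t out_size) {
    uint64_t bits;
    memcpy(&bits, &v, sizeof(bits));            // safe type-punning
    snprintf(out, out_size, "%016llx", (unsigned long long)bits);
}

// Reconstruct the double with zero precision loss on the receiving side.
static double hex_to_double(const char *s) {
    uint64_t bits = (uint64_t)strtoull(s, NULL, 16);
    double v;
    memcpy(&v, &bits, sizeof(v));
    return v;
}
```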
|
Add parser in CMakeLists.txt
Keep minimal documentation (overview) and refer to the actual code
|
* Move installation before concepts
* Reorder installation links
* Remove single node monitoring
* Reorg libnetdata content
|
* Fix coverity issues
382921
382924
382927
382928
382932
382933
382950
382990
383123
382952
382906
382908
382912
382914
382917
382918
382919
* 381508 Unchecked return value
* 382965 Dereference after null check
|
* support multiple hosts at pluginsd structures
* cleanup obsolete code
* use a lookup hashtable to quickly find the keyword to execute, without traversing the whole linked list of keywords
* more cleanup
* move new hash function to inlined.h
* minimize comparisons, eliminate a pre-parsing of the first keyword for each line
* cleanup parser from old code
* move parser into libnetdata
* unique entries in parser keywords hashtable
* move all hashing functions to inlined.h, name their sources, simple_hash() now defaults to FNV1a, it was FNV1
* small_hash() for parser
* plugins.d now can switch hosts, and also create/update them
* update hash function and hashtable size
* updated message
* unittest all hashing functions
* reset the chart when setting a new host
* remove host tags
* enable archived hosts when a collector pushes host info
* do not need localhost to switch to localhost
* disable ARAL and OWA with -DFSANITIZE_ADDRESS=1
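FNV-1a, which the commit above makes the default for simple_hash() (previously FNV-1), differs only in operation order: XOR the byte in first, then multiply by the FNV prime. A minimal 32-bit version using the standard constants:

```c
#include <stdint.h>

// FNV-1a 32-bit: xor first, then multiply (FNV-1 multiplies first).
static inline uint32_t fnv1a_hash(const char *name) {
    uint32_t hash = 0x811c9dc5u;                // FNV offset basis
    for (const unsigned char *s = (const unsigned char *)name; *s; s++) {
        hash ^= (uint32_t)*s;
        hash *= 0x01000193u;                    // FNV prime
    }
    return hash;
}
```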
|
Fixes the issue introduced by #14213, where the agent fails to build on FreeBSD < 13.1 and in environments with a Linux kernel older than 5.11, due to the missing 'CLOSE_RANGE_CLOEXEC'.
|
add proper metadata to the file
|
* add metadata for learn
* first batch of adding metadata to md files
* second batch of adding metadata to md files
* third batch of adding metadata to md files
* test one sidebar_label
* add missing sidebar_labels
* add missing sidebar_labels to files left behind
* test, ansible doc is stubborn
* fix
* fix
* fix
* don't use question marks in the sidebar label
* don't use exclamation marks and symbols in the sidebar label
* fix style guide
* fixes
* fixes
|
* first commit - untested
* fix wrong begin command
* added set v2 too
* debug to log stream buffer
* debug to log stream buffer
* faster streaming printing
* mark charts and dimensions as collected
* use stream points even if sender is not enabled
* comment out stream debug log
* parse null as nan
* custom begin v2
* custom set v2; replication now copies the anomalous flag too
* custom end v2
* enabled stream log test
* renamed to BEGIN2, SET2, END2
* don't mix up replay and v2 members in user object
* fix typo
* cleanup
* support to v2 to v1 proxying
* mark updated dimensions as such
* do not log unknown flags
* comment out stream debug log
* send also the chart id on BEGIN2, v2 to v2
* update the data collections counter
* v2 values are transferred in hex
* faster hex parsing
* a little more generic hex and dec printing and parsing
* fix hex parsing
* minor optimization in dbengine api
* turn debugging into info message
* generalized the timings tracking, so that it can be used in more places
* commented out debug info
* renamed conflicting variable with macro
* remove wrong edits
* integrated ML and added cleanup in case parsing is interrupted
* disable data collection locking during v2
* cleanup stale ML locks; send updated chart variables during v2; add info to find stale locks
* inject an END2 between repeated BEGIN2 from rrdset_done()
* test: remove lockless single-threaded logic from dictionary and aral and apply the right acquire/release memory order to reference counters
* more fine grained dictionary atomics
* remove unecessary return values
* pointer validation under NETDATA_DICTIONARY_VALIDATE_POINTERS
* Revert "pointer validation under NETDATA_DICTIONARY_VALIDATE_POINTERS"
This reverts commit 846cdf2713e2a7ee2ff797f38db11714228800e9.
* Revert "remove unecessary return values"
This reverts commit 8c87d30f4d86f0f5d6b4562cf74fe7447138bbff.
* Revert "more fine grained dictionary atomics"
This reverts commit 984aec4234a340d197d45239ff9a10fd479fcf3c.
* Revert "test: remove lockless single-threaded logic from dictionary and aral and apply the right acquire/release memory order to reference counters"
This reverts commit c460b3d0ad497d2641bd0ea1d63cec7c052e74e4.
* Apply again "pointer validation under NETDATA_DICTIONARY_VALIDATE_POINTERS" while keeping the improved atomic operations.
This reverts commit f158d009
* fix last commit
* fix last commit again
* optimizations in dbengine
* do not send anomaly bit on non-supporting agents (send it when the INTERPOLATED capability is available)
* break long empty-points-loops in rrdset_done()
* decide page alignment on new page allocation, not on every point collected
* create max size pages but no smaller than 1/3
* Fix compilation when --disable-ml is specified
* Return false
* fixes for NETDATA_LOG_REPLICATION_REQUESTS
* added compile option NETDATA_WITHOUT_WORKERS_LATENCY
* put timings in BEGIN2, SET2, END2
* isolate begin2 ml
* revert repositioning data collection lock
* fixed multi-threading of statistics
* do not lookup dimensions all the time if they come in the same order
* update used on iteration, not on every points; also do better error handling
---------
Co-authored-by: Stelios Fragkakis <52996999+stelfrag@users.noreply.github.com>
|
* Add remaining libnetdata readmes to learn
* Move all libnetdata readmes in learn under Developer/libnetdata libraries except for the main README
* Add the moved array allocator
|
Signed-off-by: Tasos Katsoulas <tasos@netdata.cloud>
|
* parallel initialization of tiers
* do not spawn multiple dbengine event loops
* user configurable dbengine parallel initialization
* size netdata based on the real cpu cores available on the system netdata runs on, not on the system monitored
* user configurable system cpus
* move cpuset parsing to os.c/.h
* fix replication of misaligned chart dimensions
* give a different path to each tier thread
* statically allocate the path into the initialization structure
* use aral for reusing dbengine pages
* dictionaries uses ARAL for fixed sized values
* fix compilation without internal checks
* journal v2 index uses aral
* test to see judy allocations
* judy allocations using aral
* Add config option to select if dbengine will use direct I/O (default is yes)
* V1 journal files will use uv_fs_read instead of mmap (respect the direct I/O setting)
* Remove sqlite3IsMemdb as it is unused
* Fix compilation error when --disable-dbengine is used
* use aral for dbengine work_cmds
* changed aral API to support new features
* pgc and mrg aral overheads
* rrdeng opcodes using aral
* better structuring and naming
* dbengine query handles using aral
* page descriptors using aral
* remove obsolete linking
* extent io descriptors using aral
* aral keeps one last page alive
* add missing return value
* added judy aral overhead
* pdc now uses aral
* page_details now use aral
* epdl and deol using aral - make sure ARALs are initialized before spawning the event loop
* remove unused linking
* pgc now uses one aral per partition
* aral measure maximum allocation queue
* aral to allocate pages in parallel
* aral parallel pages allocation when needed
* aral cleanup
* track page allocation and page population separately
---------
Co-authored-by: Stelios Fragkakis <52996999+stelfrag@users.noreply.github.com>
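The "aral" (array allocator) adopted for nearly every hot structure in this commit can be sketched as a free-list recycler: fixed-size elements are reused instead of going through malloc/free churn. Netdata's real ARAL adds locking, page-sized carving and statistics; this toy version (names modeled on aral_mallocz/aral_freez) shows only the recycling idea and assumes element_size >= sizeof(void *):

```c
#include <stdlib.h>

// A freed element doubles as a free-list node.
typedef struct aral_free { struct aral_free *next; } aral_free_t;

typedef struct aral {
    size_t element_size;
    aral_free_t *free_list;
} aral_t;

static void *aral_mallocz(aral_t *ar) {
    if (ar->free_list) {                    // reuse a freed element
        void *p = ar->free_list;
        ar->free_list = ar->free_list->next;
        return p;
    }
    return malloc(ar->element_size);        // grow only when needed
}

static void aral_freez(aral_t *ar, void *p) {
    aral_free_t *f = p;                     // element becomes a free node
    f->next = ar->free_list;
    ar->free_list = f;
}
```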
|
* acquiring / releasing interface for metrics
* metrics registry statistics
* cleanup metrics registry by deleting metrics when they don't have retention anymore; do not double copy the data of pages to be flushed
* print the tier in retention summary
* Open files with buffered instead of direct I/O (test)
* added more metrics stats and fixed unittest
* rename writer functions to avoid confusion with refcounting
* do not release a metric that is not acquired
* Revert to use direct I/O on write -- use direct I/O on read as well
* keep track of ARAL overhead and add it to the memory chart
* aral full check via api
* Cleanup
* give names to ARALs and PGCs
* aral improvements
* restore query expansion to the future
* prefer higher resolution tier when switching plans
* added extent read statistics
* smoother joining of tiers at query engine
* fine tune aral max allocation size
* aral restructuring to hide its internals from the rest of netdata
* aral restructuring; addition of defrag option to aral to keep the linked list sorted - enabled by default to test it
* fully async aral
* some statistics and cleanup
* fix infinite loop while calculating retention
* aral docs and defragmenting disabled by default
* fix bug and add optimization when defragmenter is not enabled
* aral stress test
* aral speed report and documentation
* added internal checks that all pages are full
* improve internal log about metrics deletion
* metrics registry uses one aral per partition
* metrics registry aral max size to 512 elements per page
* remove data_structures/README.md dependency
---------
Co-authored-by: Stelios Fragkakis <52996999+stelfrag@users.noreply.github.com>
|
Revert "Delete libnetdata readme (#14357)"
This reverts commit 6ecfb2892ab373fde4dabb51103b22d9fbaaaa25.
|
Add simple patterns
|
Update README.md
|