path: root/daemon/common.h
Age | Commit message | Author
2024-02-06 | Move daemon/ under src/ (#16933) | vkalintiris
2024-01-23 | DYNCFG: dynamically configured alerts (#16779) | Costa Tsaousis
* cleanup alerts * fix references * fix references * fix references * load alerts once and apply them to each node * simplify health_create_alarm_entry() * Compile without warnings with compiler flags: -Wall -Wextra -Wformat=2 -Wshadow -Wno-format-nonliteral -Winit-self * code re-organization and cleanup * generate patterns when applying prototypes; give unique dyncfg names to all alerts * eval expressions keep the source and the parsed_as as STRING pointers * renamed host to node in dyncfg ids * renamed host to node in dyncfg ids * add all cloud roles to the list of parsed X-Netdata-Role header and also default to member access level * working functionality * code re-organization: moved health event-loop to a new file, moved health globals to health.c * rrdcalctemplate is removed; alert_cfg is removed; foreach dimension is removed; RRDCALCs are now instanciated only when they are linked to RRDSETs * dyncfg alert prototypes initialization for alerts * health dyncfg split to separate file * cleanup not-needed code * normalize matches between parsing and json * also detect !* for disabled alerts * dyncfg capability disabled * Store alert config part1 * Add rrdlabels_common_count * wip health variables lookup without indexes * Improve rrdlabels_common_count by reusing rrdlabels_find_label_with_key_unsafe with an additional parameter * working variables with runtime lookup * working variables with runtime lookup * delete rrddimvar and rrdfamily index * remove rrdsetvar; now all variables are in RRDVARs inside hosts and charts * added /api/v1/variable that resolves a variable the same way alerts do * remove rrdcalc from eval * remove debug code * remove duplicate assignment * Fix memory leak * all alert variables are now handled by alert_variable_lookup() and EVAL is now independent of alerts * hide all internal structures of EVAL * Enable -Wformat flag Signed-off-by: Tasos Katsoulas <tasos@netdata.cloud> * Adjust binding for calculation, warning, critical * Remove unused macro * Update config hash id * use the right info and summary in alerts log * use synchronous queries for alerts * Handle cases when config_hash_id is missing from health_log * remove deadlock from health worker * parsing to json payload for health alert prototypes * cleaner parsing and avoiding memory leaks in case of duplicate members in json * fix left-over rename of function * Keep original lookup field to send to the cloud Cleanup / rename function to store config Remove unused DEFINEs, functions * Use ac->lookup * link jobs to the host when the template is registered; do not accept running a function without a host * full dyncfg support for health alerts, except action TEST * working dyncfg additions, updates, removals * fixed missing source, wrong status updates * add alerts by type, component, classification, recipient and module at the /api/v2/alerts endpoint * fix dyncfg unittest * rename functions * generalize the json-c parser macros and move them to libnetdata * report progress when enabling and disabling dyncfg templates * moved rrdcalc and rrdvar to health * update alarms * added schema for alerts; separated alert_action_options from rrdr_options; restructured the json payload for alerts * enable parsed json alerts; allow sending back accepted but disabled * added format_version for alerts payload; enables/disables status now is also inheritted by the status of the rules; fixed variable names in json output * remove the RRDHOST pointer from DYNCFG * Fix command field submitted to the cloud * do not send updates 
to creation requests, for DYNCFG jobs --------- Signed-off-by: Tasos Katsoulas <tasos@netdata.cloud> Co-authored-by: Stelios Fragkakis <52996999+stelfrag@users.noreply.github.com> Co-authored-by: Tasos Katsoulas <tasos@netdata.cloud> Co-authored-by: ilyam8 <ilya@netdata.cloud>
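Since this entry mentions generalizing the json-c parser macros (moved to libnetdata) and parsing alert prototypes from a JSON payload, here is a minimal sketch of that kind of payload parsing with json-c. The field names ("name", "enabled", "warning") and the helper are illustrative assumptions, not Netdata's actual schema or macros.

```c
/* Minimal sketch of parsing an alert prototype field set from a JSON payload
 * with json-c. Field names are illustrative, not Netdata's schema. */
#include <stdio.h>
#include <json-c/json.h>

static void parse_alert_prototype(const char *payload) {
    struct json_object *root = json_tokener_parse(payload);
    if (!root) {
        fprintf(stderr, "invalid JSON payload\n");
        return;
    }

    struct json_object *jname = NULL, *jenabled = NULL, *jwarn = NULL;

    if (json_object_object_get_ex(root, "name", &jname))
        printf("alert name: %s\n", json_object_get_string(jname));

    if (json_object_object_get_ex(root, "enabled", &jenabled))
        printf("enabled: %s\n", json_object_get_boolean(jenabled) ? "yes" : "no");

    if (json_object_object_get_ex(root, "warning", &jwarn))
        printf("warning expression: %s\n", json_object_get_string(jwarn));

    json_object_put(root); /* release the reference, freeing the parsed tree */
}

int main(void) {
    parse_alert_prototype("{\"name\":\"cpu_usage\",\"enabled\":true,\"warning\":\"$this > 80\"}");
    return 0;
}
```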
2024-01-11 | dyncfg v2 (#16702) | Costa Tsaousis
* split rrdfunctions streaming and progress * simplified internal inline functions API * split rrdfunctions inflight management * split rrd functions exporters * renames * base dyncfg structure * config pluginsd * intercept dyncfg function calls * loading and saving of dyncfg metadata and data * save metadata and payload to a single file; added code to update the plugins with jobs and saved configs * basic working unit test * added payload to functions execution * removed old dyncfg code that is not needed any more * more cleanup * cleanup sender for functions with payload * dyncfg functions are not exposed as functions * remaining work to avoid indexing the \0 terminating character in dictionary keys * added back old dyncfg plugins.d commands as noop, to allow plugins continue working * working api; working streaming; * updated plugins.d documentation * aclk and http api requests share the same header parsing logic * added source type internal * fixed crashes * added god mode for tests * fixes * fixed messages * save host machine guids to configs * cleaner manipulation of supported commands * the functions event loop for external plugins can now process dyncfg requests * unified internal and external plugins dyncfg API * Netdata serves schema requests from /etc/netdata/schema.d and /var/lib/netdata/conf.d/schema.d * cleanup and various fixes; fixed bug in previous dyncfg implementation on streaming that was sending the paylod in a way that allowed other streaming commands to be multiplexed * internals go to a separate header file * fix duplicate ACLK requests sent by aclk queue mechanism * use fstat instead of stat * working api * plugin actions renamed to create and delete; dyncfg files are removed only from user actions * prevent deadlock by using the react callback * fix for string_strndupz() * better dyncfg unittests * more tests at the unittests * properly detect dyncfg functions * hide config functions from the UI * tree response improvements * send the initial update with payload * determine tty using stdout, not stderr * changes to statuses, cleanup and the code to bring all business logic into interception * do not crash when the status is empty * functions now propagate the source of the requests to plugins * avoid warning about unused functions * in the count at items for attention, do not count the orphan entries * save source into dyncfg * make the list null terminated * fixed invalid comparison * prevent memory leak on duplicated headers; log x-forwarded-for * more unit tests * added dyncfg unittests into the default unittests * more unit tests and fixes * more unit tests and fixes * fix dictionary unittests * config functions require admin access
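This entry notes that schema requests are served from /etc/netdata/schema.d with /var/lib/netdata/conf.d/schema.d as the stock location. Below is a minimal sketch of that user-directory-then-stock-directory lookup; the helper name, the schema id and the ".json" file naming are assumptions for illustration, not the actual implementation.

```c
/* Sketch: resolve a schema file by checking the user directory first,
 * then the stock directory, as described in the entry above. */
#include <stdio.h>
#include <unistd.h>

#define USER_SCHEMA_DIR  "/etc/netdata/schema.d"
#define STOCK_SCHEMA_DIR "/var/lib/netdata/conf.d/schema.d"

static int resolve_schema_path(const char *schema_name, char *out, size_t out_len) {
    snprintf(out, out_len, "%s/%s.json", USER_SCHEMA_DIR, schema_name);
    if (access(out, R_OK) == 0)
        return 0;                       /* user override found */

    snprintf(out, out_len, "%s/%s.json", STOCK_SCHEMA_DIR, schema_name);
    if (access(out, R_OK) == 0)
        return 0;                       /* fall back to the stock schema */

    return -1;                          /* no schema available */
}

int main(void) {
    char path[512];
    if (resolve_schema_path("health:alert:prototype", path, sizeof(path)) == 0)
        printf("serving schema from %s\n", path);
    else
        printf("schema not found\n");
    return 0;
}
```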
2023-11-28 | proc_net_dev: remove device config section (#16492) | Ilya Mashchenko
2023-07-06 | Code reorg and cleanup - enrichment of /api/v2 (#15294) | Costa Tsaousis
* claim script now accepts the same params as the kickstart * rewrote buildinfo to unify all methods * added cloud unavailable in cloud status * added all exporters * renamed httpd to h2o * rename ENABLE_COMPRESSION to ENABLE_LZ4 * rename global variable * rename ENABLE_HTTPS to ENABLE_OPENSSL * fix coverity-scan for openssl * add lz4 to coverity-scan * added all plugins and most of the features * added all plugins and most of the features * generalize bitmap code so that we can have any size of bitmaps * cleanup * fix compilation without protobuf * fix compilation with others allocators * fix bitmap * comprehensive bitmaps unit test * bitmap as macros * added developer mode * added system info to build info * cloud available/unavailable * added /api/v2/info * added units and ni to transitions * when showing instances and transitions, show only the instances that have transitions * cleanup * add missing quotes * add anchor to transitions * added more to build info * calculate retention per tier and expose it to /api/v2/info * added currently collected metrics * do not show space and retention when no numbers are available * fix impossible overflow * Add function for transitions and execute callback * In case of error, reset and try next dictionary entry * Fix error message * simpler logic to maintain retention per tier * /api/v2/alert_transitions * Handle case of recipient null Convert after and before to usec * Add classification, type and component * working /api/v2/alert_transitions * Fix query to properly handle context and alert name * cleanup * Add search with transition * accept transition in /api/v2/alert_transitions * totaly dynamic facets * fixed debug info * restructured facets * cleanup; removal of options=transitions * updated alert entries flags * method to exec * Return also exec run timestamp Temp table cleanup only when we don't execute with a transition * cleanup obsolete anchor parameter * Add sql_get_alert_configuration function * added options=config to alert_transitions * added /api/v2/alert_config * preliminary work for /api/v2/claim * initialize variables; do not expose expected retention if no disk space info is available; do not report aclk as initializing when not claimed * fix claim session key filename * put a newline into the session key file * more progress on claiming * final /api/v2/claim endpoint * after claiming, refresh our state at the output * Fix query to fetch config * Remove debug log * add configuration objects * add configuration objects - fixed * respect the NETDATA_DISABLE_CLOUD env variable * NETDATA_DISABLE_CLOUD env variable sets the default, but the config sets the final value * use a new claimed_id on every claiming * regenerate random key on claiming and wait for online status * ignore write() return value when writing a newline * dont show cloud status disabled when claimed_id is missing * added ctx to alert instances * cleanup config and transitions from /api/v2/alerts * fix unused variable * in /api/v2/alert_config show 1 config without an array * show alert values conditionally, by appending options=values * When storing host info if the key value is empty, store unknown * added options=summary to control when the alerts summary is shown * increased http_api_v2 to version 5 * claming random key file is now not world readable * added local-listeners binary that detects all the listening ports, their IPs and their command lines --------- Co-authored-by: Stelios Fragkakis <52996999+stelfrag@users.noreply.github.com>
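One item above generalizes the bitmap code so bitmaps of any size are supported. The sketch below shows the underlying idea only, a bit array backed by 64-bit words with set/get/clear helpers; it is not Netdata's bitmap API and the names are invented for illustration.

```c
/* Sketch of an arbitrary-size bitmap backed by an array of 64-bit words. */
#include <stdio.h>
#include <stdint.h>
#include <stdlib.h>

typedef struct {
    size_t bits;
    uint64_t *words;     /* ceil(bits / 64) words */
} bitmap_t;

static bitmap_t bitmap_create(size_t bits) {
    bitmap_t bm = { .bits = bits };
    bm.words = calloc((bits + 63) / 64, sizeof(uint64_t));
    return bm;
}

static void bitmap_set(bitmap_t *bm, size_t idx)       { bm->words[idx / 64] |=  (UINT64_C(1) << (idx % 64)); }
static void bitmap_clear(bitmap_t *bm, size_t idx)     { bm->words[idx / 64] &= ~(UINT64_C(1) << (idx % 64)); }
static int  bitmap_get(const bitmap_t *bm, size_t idx) { return (bm->words[idx / 64] >> (idx % 64)) & 1; }
static void bitmap_free(bitmap_t *bm)                  { free(bm->words); bm->words = NULL; bm->bits = 0; }

int main(void) {
    bitmap_t bm = bitmap_create(1000);   /* any size, not limited to one machine word */
    bitmap_set(&bm, 3);
    bitmap_set(&bm, 999);
    printf("bit 3=%d bit 4=%d bit 999=%d\n",
           bitmap_get(&bm, 3), bitmap_get(&bm, 4), bitmap_get(&bm, 999));
    bitmap_clear(&bm, 3);
    bitmap_free(&bm);
    return 0;
}
```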
2023-06-21 | Allow overriding pipename from env (#15215) | vkalintiris
This in turn allows targeting a specific agent process when running commands with the CLI.
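A minimal sketch of the environment override described above, assuming an environment variable named NETDATA_PIPENAME and an illustrative default path; both names are assumptions, not confirmed identifiers.

```c
/* Sketch: override the command pipe name from the environment so the CLI
 * can target a specific agent process. Names below are assumptions. */
#include <stdio.h>
#include <stdlib.h>

#define DEFAULT_PIPENAME "/tmp/netdata-ipc"

static const char *daemon_pipename(void) {
    const char *pipename = getenv("NETDATA_PIPENAME");
    if (pipename && *pipename)
        return pipename;        /* per-process override from the environment */
    return DEFAULT_PIPENAME;    /* fall back to the compiled-in default */
}

int main(void) {
    printf("using command pipe: %s\n", daemon_pipename());
    return 0;
}
```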
2023-06-19 | /api/v2/nodes and streaming function (#15168) | Costa Tsaousis
* dummy streaming function * expose global functions upstream * separate function for pushing global functions * add missing conditions * allow streaming function to run async * started internal API for functions * cache host retention and expose it to /api/v2/nodes * internal API for function table fields; more progress on streaming status * abstracted and unified rrdhost status * port old coverity warning fix - although it is not needed * add ML information to rrdhost status * add ML capability to streaming to signal the transmission of ML information; added ML information to host status * protect host->receiver * count metrics and instances per host * exposed all inbound and outbound streaming * fix for ML status and dependency of DATA_WITH_ML to INTERPOLATED, not IEEE754 * update ML dummy * added all fields * added streaming group by and cleaned up accepted values by cloud * removed type * Revert "removed type" This reverts commit faae4177e603d4f85b7433f33f92ef3ccd23976e. * added context to db summary * new /api/v2/nodes schema * added ML type * change default function charts * log to trace new capa * add more debug * removed debugging code * retry on receive interrupted read; respect sender reconnect delay in all cases * set disconnected host flag and manipulate localhost child count atomically, inside set/clear receiver * fix infinite loop * send_to_plugin() now has a spinlock to ensure that only 1 thread is writing to the plugin/child at the same time * global cloud_status() call * cloud should be a section, since it will contain error information * put cloud capabilities into cloud * aclk status in /api/v2 agents sections * keep aclk_connection_counter * updates on /api/v2/nodes * final /api/v2/nodes and addition of /api/v2/nodes_instances * parametrize all /api/v2/xxx output to control which info is outputed per endpoint * always accept nodes selector * st needs to be per instance, not per node * fix merging of contexts; fix cups plugin priorities * add after and before parameters to /api/v2/contexts/nodes/nodes_instances/q * give each libuv worker a unique id * aclk http_api_v2 version 4
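This entry mentions that send_to_plugin() gained a spinlock so only one thread writes to a plugin/child at a time. Below is a hedged sketch of that pattern using a POSIX spinlock; the structure, function body and payload are illustrative, not the actual Netdata code.

```c
/* Sketch of serializing writes to a plugin/child with a spinlock. */
#include <pthread.h>
#include <string.h>
#include <unistd.h>

struct plugin_channel {
    int fd;                      /* pipe/socket to the plugin or child */
    pthread_spinlock_t lock;     /* only one writer at a time */
};

static ssize_t send_to_plugin(struct plugin_channel *ch, const char *payload) {
    pthread_spin_lock(&ch->lock);
    ssize_t written = write(ch->fd, payload, strlen(payload));
    pthread_spin_unlock(&ch->lock);
    return written;
}

int main(void) {
    struct plugin_channel ch = { .fd = STDOUT_FILENO };
    pthread_spin_init(&ch.lock, PTHREAD_PROCESS_PRIVATE);
    send_to_plugin(&ch, "FUNCTION_RESULT_BEGIN ...\n");   /* illustrative payload */
    pthread_spin_destroy(&ch.lock);
    return 0;
}
```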
2023-05-10 | initial minimal h2o webserver integration (#14585) | Timotej S
Introduces an h2o-based web server as an alternative.
2023-04-20 | WEBRTC for communication between agents and browsers (#14874) | Costa Tsaousis
* initial webrtc setup * missing files * rewrite of webrtc integration * initialization and cleanup of webrtc connections * make it compile without libdatachannel * add missing webrtc_initialize() function when webrtc is not enabled * make c++17 optional * add build/m4/ax_compiler_vendor.m4 * add ax_cxx_compile_stdcxx.m4 * added new m4 files to makefile.am * id all webrtc connections * show warning when webrtc is disabled * fixed message * moved all webrtc error checking inside webrtc.cpp * working webrtc connection establishment and cleanup * remove obsolete code * rewrote webrtc code in C to remove dependency for c++17 * fixed left-over reference * detect binary and text messages * minor fix * naming of webrtc threads * added webrtc configuration * fix for thread_get_name_np() * smaller web_client memory footprint * universal web clients cache * free web clients every 100 uses * webrtc is now enabled by default only when compiled with internal checks * webrtc responses to /api/ requests, including LZ4 compression * fix for binary and text messages * web_client_cache is now global * unification of the internal web server API, for web requests, aclk request, webrtc requests * more cleanup and unification of web client timings * fixed compiler warnings * update sent and received bytes * eliminated of almost all big buffers in web client * registry now uses the new json generation * cookies are now an array; fixed redirects * fix redirects, again * write cookies directly to the header buffer, eliminating the need for cookie structures in web client * reset the has_cookies flag * gathered all web client cleanup to one function * fixes redirects * added summary.globals in /api/v2/data response * ars to arc in /api/v2/data * properly handle host impersonation * set the context of mem.numa_nodes
2023-02-22 | /api/v2/data - multi-host/context/instance/dimension/label queries (#14564) | Costa Tsaousis
* fundamentals for having /api/v2/ working * use an atomic to prevent writing to internal pipe too much * first attempt of multi-node, multi-context, multi-chart, multi-dimension queries * v2 jsonwrap * first attempt for group by * cleaned up RRDR and fixed group by * improvements to /api/v2/api * query instance may be realloced, so pointers to it get invalid; solved memory leaks * count of quried metrics in summary information * provide detailed information about selected, excluded, queried and failed metrics for each entity * select instances by fqdn too * add timing information to json output * link charts to rrdcontexts, if a query comes in and it is found unlinked * calculate min, max, sum, average, volume, count per metric * api v2 parameters naming * renders alerts and units * render machine_guid and node_id in all sections it is relevant * unified keys * group by now takes into account units and when there are multiple units involved, it creates a dimension per unit * request and detailed are hidden behind an option * summary includes only a flattened list of alerts * alert counts per host and instance * count of grouped metrics per dimension * added contexts to summary * added chart title * added dimension priorities and chart type * support for multiple group by at the same time * minor fixes * labels are now a tree * keys uniformity * filtering by alerts, both having a specific alert and having a specific alert in a specific status * added scope of hosts and contexts * count of instances on contexts and hosts * make the api return valid responses even when the response contains no data * calculate average and contribution % for every item in the summary * fix compilation warnings * fix compilation warnings - again
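Among the items above is calculating min, max, sum, average, volume and count per metric for the summary. A small sketch of such a per-metric accumulator follows; the struct and function names are illustrative, not the jsonwrap implementation.

```c
/* Sketch of per-metric summary statistics (min, max, sum, average, count). */
#include <stdio.h>
#include <math.h>

typedef struct {
    double min, max, sum;
    size_t count;
} metric_summary_t;

static void summary_init(metric_summary_t *s) {
    s->min = INFINITY; s->max = -INFINITY; s->sum = 0.0; s->count = 0;
}

static void summary_add(metric_summary_t *s, double value) {
    if (value < s->min) s->min = value;
    if (value > s->max) s->max = value;
    s->sum += value;
    s->count++;
}

static double summary_average(const metric_summary_t *s) {
    return s->count ? s->sum / (double)s->count : NAN;
}

int main(void) {
    metric_summary_t s;
    summary_init(&s);
    double points[] = { 1.0, 4.0, 2.5, 7.0 };
    for (size_t i = 0; i < sizeof(points) / sizeof(points[0]); i++)
        summary_add(&s, points[i]);
    printf("min=%.1f max=%.1f sum=%.1f avg=%.2f count=%zu\n",
           s.min, s.max, s.sum, summary_average(&s), s.count);
    return 0;
}
```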
2023-02-02 | DBENGINE v2 - improvements part 12 (#14379) | Costa Tsaousis
* parallel initialization of tiers * do not spawn multiple dbengine event loops * user configurable dbengine parallel initialization * size netdata based on the real cpu cores available on the system netdata runs, not on the system monitored * user configurable system cpus * move cpuset parsing to os.c/.h * fix replication of misaligned chart dimensions * give a different path to each tier thread * statically allocate the path into the initialization structure * use aral for reusing dbengine pages * dictionaries uses ARAL for fixed sized values * fix compilation without internal checks * journal v2 index uses aral * test to see judy allocations * judy allocations using aral * Add config option to select if dbengine will use direct I/O (default is yes) * V1 journafiles will use uv_fs_read instead of mmap (respect the direct I/O setting) * Remove sqlite3IsMemdb as it is unused * Fix compilation error when --disable-dbengine is used * use aral for dbengine work_cmds * changed aral API to support new features * pgc and mrg aral overheads * rrdeng opcodes using aral * better structuring and naming * dbegnine query handles using aral * page descriptors using aral * remove obsolete linking * extent io descriptors using aral * aral keeps one last page alive * add missing return value * added judy aral overhead * pdc now uses aral * page_details now use aral * epdl and deol using aral - make sure ARALs are initialized before spawning the event loop * remove unused linking * pgc now uses one aral per partition * aral measure maximum allocation queue * aral to allocate pages in parallel * aral parallel pages allocation when needed * aral cleanup * track page allocation and page population separately --------- Co-authored-by: Stelios Fragkakis <52996999+stelfrag@users.noreply.github.com>
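This entry moves cpuset parsing into os.c/.h and sizes Netdata by the CPUs actually available to it. The sketch below counts CPUs from a cpuset list string such as "0-3,8,10-11"; it is only an illustration, not the real parser, and reading the list from the cgroup files is deliberately left out.

```c
/* Sketch: count usable CPUs from a cpuset list string like "0-3,8,10-11". */
#include <stdio.h>
#include <stdlib.h>

static long cpuset_count(const char *list) {
    long count = 0;
    const char *p = list;
    while (*p) {
        char *end;
        long start = strtol(p, &end, 10);
        if (end == p) break;                 /* no number found, stop */
        long stop = start;
        if (*end == '-') {                   /* a range like "0-3" */
            p = end + 1;
            stop = strtol(p, &end, 10);
        }
        count += stop - start + 1;
        p = (*end == ',') ? end + 1 : end;   /* skip the separator, if any */
    }
    return count;
}

int main(void) {
    printf("cpus in \"0-3,8,10-11\": %ld\n", cpuset_count("0-3,8,10-11"));   /* prints 7 */
    return 0;
}
```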
2023-01-10 | DBENGINE v2 (#14125) | Costa Tsaousis
* count open cache pages refering to datafile * eliminate waste flush attempts * remove eliminated variable * journal v2 scanning split functions * avoid locking open cache for a long time while migrating to journal v2 * dont acquire datafile for the loop; disable thread cancelability while a query is running * work on datafile acquiring * work on datafile deletion * work on datafile deletion again * logs of dbengine should start with DBENGINE * thread specific key for queries to check if a query finishes without a finalize * page_uuid is not used anymore * Cleanup judy traversal when building new v2 Remove not needed calls to metric registry * metric is 8 bytes smaller; timestamps are protected with a spinlock; timestamps in metric are now always coherent * disable checks for invalid time-ranges * Remove type from page details * report scanning time * remove infinite loop from datafile acquire for deletion * remove infinite loop from datafile acquire for deletion again * trace query handles * properly allocate array of dimensions in replication * metrics cleanup * metrics registry uses arrayalloc * arrayalloc free should be protected by lock * use array alloc in page cache * journal v2 scanning fix * datafile reference leaking hunding * do not load metrics of future timestamps * initialize reasons * fix datafile reference leak * do not load pages that are entirely overlapped by others * expand metric retention atomically * split replication logic in initialization and execution * replication prepare ahead queries * replication prepare ahead queries fixed * fix replication workers accounting * add router active queries chart * restore accounting of pages metadata sources; cleanup replication * dont count skipped pages as unroutable * notes on services shutdown * do not migrate to journal v2 too early, while it has pending dirty pages in the main cache for the specific journal file * do not add pages we dont need to pdc * time in range re-work to provide info about past and future matches * finner control on the pages selected for processing; accounting of page related issues * fix invalid reference to handle->page * eliminate data collection handle of pg_lookup_next * accounting for queries with gaps * query preprocessing the same way the processing is done; cache now supports all operations on Judy * dynamic libuv workers based on number of processors; minimum libuv workers 8; replication query init ahead uses libuv workers - reserved ones (3) * get into pdc all matching pages from main cache and open cache; do not do v2 scan if main cache and open cache can satisfy the query * finner gaps calculation; accounting of overlapping pages in queries * fix gaps accounting * move datafile deletion to worker thread * tune libuv workers and thread stack size * stop netdata threads gradually * run indexing together with cache flush/evict * more work on clean shutdown * limit the number of pages to evict per run * do not lock the clean queue for accesses if it is not possible at that time - the page will be moved to the back of the list during eviction * economies on flags for smaller page footprint; cleanup and renames * eviction moves referenced pages to the end of the queue * use murmur hash for indexing partition * murmur should be static * use more indexing partitions * revert number of partitions to number of cpus * cancel threads first, then stop services * revert default thread stack size * dont execute replication requests of disconnected senders * wait more time for services that are 
exiting gradually * fixed last commit * finer control on page selection algorithm * default stacksize of 1MB * fix formatting * fix worker utilization going crazy when the number is rotating * avoid buffer full due to replication preprocessing of requests * support query priorities * add count of spins in spinlock when compiled with netdata internal checks * remove prioritization from dbengine queries; cache now uses mutexes for the queues * hot pages are now in sections judy arrays, like dirty * align replication queries to optimal page size * during flushing add to clean and evict in batches * Revert "during flushing add to clean and evict in batches" This reverts commit 8fb2b69d068499eacea6de8291c336e5e9f197c7. * dont lock clean while evicting pages during flushing * Revert "dont lock clean while evicting pages during flushing" This reverts commit d6c82b5f40aeba86fc7aead062fab1b819ba58b3. * Revert "Revert "during flushing add to clean and evict in batches"" This reverts commit ca7a187537fb8f743992700427e13042561211ec. * dont cross locks during flushing, for the fastest flushes possible * low-priority queries load pages synchronously * Revert "low-priority queries load pages synchronously" This reverts commit 1ef2662ddcd20fe5842b856c716df134c42d1dc7. * cache uses spinlock again * during flushing, dont lock the clean queue at all; each item is added atomically * do smaller eviction runs * evict one page at a time to minimize lock contention on the clean queue * fix eviction statistics * fix last commit * plain should be main cache * event loop cleanup; evictions and flushes can now happen concurrently * run flush and evictions from tier0 only * remove not needed variables * flushing open cache is not needed; flushing protection is irrelevant since flushing is global for all tiers; added protection to datafiles so that only one flusher can run per datafile at any given time * added worker jobs in timer to find the slow part of it * support fast eviction of pages when all_of_them is set * revert default thread stack size * bypass event loop for dispatching read extent commands to workers - send them directly * Revert "bypass event loop for dispatching read extent commands to workers - send them directly" This reverts commit 2c08bc5bab12881ae33bc73ce5dea03dfc4e1fce. 
* cache work requests * minimize memory operations during flushing; caching of extent_io_descriptors and page_descriptors * publish flushed pages to open cache in the thread pool * prevent eventloop requests from getting stacked in the event loop * single threaded dbengine controller; support priorities for all queries; major cleanup and restructuring of rrdengine.c * more rrdengine.c cleanup * enable db rotation * do not log when there is a filter * do not run multiple migration to journal v2 * load all extents async * fix wrong paste * report opcodes waiting, works dispatched, works executing * cleanup event loop memory every 10 minutes * dont dispatch more work requests than the number of threads available * use the dispatched counter instead of the executing counter to check if the worker thread pool is full * remove UV_RUN_NOWAIT * replication to fill the queues * caching of extent buffers; code cleanup * caching of pdc and pd; rework on journal v2 indexing, datafile creation, database rotation * single transaction wal * synchronous flushing * first cancel the threads, then signal them to exit * caching of rrdeng query handles; added priority to query target; health is now low prio * add priority to the missing points; do not allow critical priority in queries * offload query preparation and routing to libuv thread pool * updated timing charts for the offloaded query preparation * caching of WALs * accounting for struct caches (buffers); do not load extents with invalid sizes * protection against memory booming during replication due to the optimal alignment of pages; sender thread buffer is now also reset when the circular buffer is reset * also check if the expanded before is not the chart later updated time * also check if the expanded before is not after the wall clock time of when the query started * Remove unused variable * replication to queue less queries; cleanup of internal fatals * Mark dimension to be updated async * caching of extent_page_details_list (epdl) and datafile_extent_offset_list (deol) * disable pgc stress test, under an ifdef * disable mrg stress test under an ifdef * Mark chart and host labels, host info for async check and store in the database * dictionary items use arrayalloc * cache section pages structure is allocated with arrayalloc * Add function to wakeup the aclk query threads and check for exit Register function to be called during shutdown after signaling the service to exit * parallel preparation of all dimensions of queries * be more sensitive to enable streaming after replication * atomically finish chart replication * fix last commit * fix last commit again * fix last commit again again * fix last commit again again again * unify the normalization of retention calculation for collected charts; do not enable streaming if more than 60 points are to be transferred; eliminate an allocation during replication * do not cancel start streaming; use high priority queries when we have locked chart data collection * prevent starvation on opcodes execution, by allowing 2% of the requests to be re-ordered * opcode now uses 2 spinlocks one for the caching of allocations and one for the waiting queue * Remove check locks and NETDATA_VERIFY_LOCKS as it is not needed anymore * Fix bad memory allocation / cleanup * Cleanup ACLK sync initialization (part 1) * Don't update metric registry during shutdown (part 1) * Prevent crash when dashboard is refreshed and host goes away * Mark ctx that is shutting down. 
Test not adding flushed pages to open cache as hot if we are shutting down * make ML work * Fix compile without NETDATA_INTERNAL_CHECKS * shutdown each ctx independently * fix completion of quiesce * do not update shared ML charts * Create ML charts on child hosts. When a parent runs a ML for a child, the relevant-ML charts should be created on the child host. These charts should use the parent's hostname to differentiate multiple parents that might run ML for a child. The only exception to this rule is the training/prediction resource usage charts. These are created on the localhost of the parent host, because they provide information specific to said host. * check new ml code * first save the database, then free all memory * dbengine prep exit before freeing all memory; fixed deadlock in cache hot to dirty; added missing check to query engine about metrics without any data in the db * Cleanup metadata thread (part 2) * increase refcount before dispatching prep command * Do not try to stop anomaly detection threads twice. A separate function call has been added to stop anomaly detection threads. This commit removes the left over function calls that were made internally when a host was being created/destroyed. * Remove allocations when smoothing samples buffer The number of dims per sample is always 1, ie. we are training and predicting only individual dimensions. * set the orphan flag when loading archived hosts * track worker dispatch callbacks and threadpool worker init * make ML threads joinable; mark ctx having flushing in progress as early as possible * fix allocation counter * Cleanup metadata thread (part 3) * Cleanup metadata thread (part 4) * Skip metadata host scan when running unittest * unittest support during init * dont use all the libuv threads for queries * break an infinite loop when sleep_usec() is interrupted * ml prediction is a collector for several charts * sleep_usec() now makes sure it will never loop if it passes the time expected; sleep_usec() now uses nanosleep() because clock_nanosleep() misses signals on netdata exit * worker_unregister() in netdata threads cleanup * moved pdc/epdl/deol/extent_buffer related code to pdc.c and pdc.h * fixed ML issues * removed engine2 directory * added dbengine2 files in CMakeLists.txt * move query plan data to query target, so that they can be exposed by in jsonwrap * uniform definition of query plan according to the other query target members * event_loop should be in daemon, not libnetdata * metric_retention_by_uuid() is now part of the storage engine abstraction * unify time_t variables to have the suffix _s (meaning: seconds) * old dbengine statistics become "dbengine io" * do not enable ML resource usage charts by default * unify ml chart families, plugins and modules * cleanup query plans from query target * cleanup all extent buffers * added debug info for rrddim slot to time * rrddim now does proper gap management * full rewrite of the mem modes * use library functions for madvise * use CHECKSUM_SZ for the checksum size * fix coverity warning about the impossible case of returning a page that is entirely in the past of the query * fix dbengine shutdown * keep the old datafile lock until a new datafile has been created, to avoid creating multiple datafiles concurrently * fine tune cache evictions * dont initialize health if the health service is not running - prevent crash on shutdown while children get connected * rename AS threads to ACLK[hostname] * prevent re-use of uninitialized memory in queries * use JulyL 
instead of JudyL for PDC operations - to test it first * add also JulyL files * fix July memory accounting * disable July for PDC (use Judy) * use the function to remove datafiles from linked list * fix july and event_loop * add july to libnetdata subdirs * rename time_t variables that end in _t to end in _s * replicate when there is a gap at the beginning of the replication period * reset postponing of sender connections when a receiver is connected * Adjust update every properly * fix replication infinite loop due to last change * packed enums in rrd.h and cleanup of obsolete rrd structure members * prevent deadlock in replication: replication_recalculate_buffer_used_ratio_unsafe() deadlocking with replication_sender_delete_pending_requests() * void unused variable * void unused variables * fix indentation * entries_by_time calculation in VD was wrong; restored internal checks for checking future timestamps * macros to caclulate page entries by time and size * prevent statsd cleanup crash on exit * cleanup health thread related variables Co-authored-by: Stelios Fragkakis <52996999+stelfrag@users.noreply.github.com> Co-authored-by: vkalintiris <vasilis@netdata.cloud>
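ARAL appears throughout this entry as the allocator for pages, descriptors, dictionaries and more. The sketch below shows only the core idea, reusing freed fixed-size objects through a free list; the real allocator works on pre-allocated arrays and is lock-protected, and the API shown here is invented for illustration, not Netdata's ARAL interface.

```c
/* Sketch of the reuse idea behind a fixed-size object pool: keep freed
 * elements on a free list and hand them back on the next allocation
 * instead of calling malloc()/free() each time. */
#include <stdio.h>
#include <stdlib.h>

typedef struct free_slot {
    struct free_slot *next;
} free_slot_t;

typedef struct {
    size_t element_size;        /* must be >= sizeof(free_slot_t) */
    free_slot_t *free_list;     /* reusable, previously freed elements */
} fixed_pool_t;

static void *pool_get(fixed_pool_t *p) {
    if (p->free_list) {
        free_slot_t *slot = p->free_list;
        p->free_list = slot->next;
        return slot;                        /* reuse a freed element */
    }
    return malloc(p->element_size);         /* otherwise allocate a new one */
}

static void pool_put(fixed_pool_t *p, void *element) {
    free_slot_t *slot = element;
    slot->next = p->free_list;              /* push back for reuse */
    p->free_list = slot;
}

int main(void) {
    fixed_pool_t pages = { .element_size = 4096, .free_list = NULL };
    void *a = pool_get(&pages);
    pool_put(&pages, a);
    void *b = pool_get(&pages);             /* same memory as 'a', reused */
    printf("reused: %s\n", a == b ? "yes" : "no");
    free(b);
    return 0;
}
```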
2022-08-24 | Remove aclk_api.[ch] (#13540) | Timotej S
* get rid of aclk_starter middleman * get rid of aclk_api.[ch]
2022-08-01 | /api/v1/weights endpoint (#13449) | Costa Tsaousis
* /api/v1/weights endpoints * high resolution anomaly rate in parallel with queries; points and options in /api/v1/weights reflect the truth * context printing * merged metric_correlations with weights API; added parameter tier to select the tier to run the query; weight api now returns points per tier; added swagger info about weights api * moved metric_correlations files to web/api/queries as weights * added contexts filtering; renamed correlated_dimensions; weights API is always enabled; code cleanup * allow returning zero results
2022-05-04 | Metric correlations (#12582) | Emmanuel Vasilakis
* initial attempt at metric correlations * fix loop * simplify struct * change json * get points from query * comment * dont lock the host as much * add a configuration option to enable/disable metric correlations * remove KSfbar from header file * lock charts * add timeout * cast multiplication * add licensing info * better licensing * use onewayalloc * destroy owa
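Metric correlations compare a highlighted window against a baseline window per dimension (the KSfbar reference above points at a Kolmogorov-Smirnov statistic). The sketch below substitutes a simple mean-shift score purely to illustrate the two-window idea; it is not the KS-based implementation and all names are invented.

```c
/* Simplified stand-in for two-window correlation scoring: higher score means
 * the highlight window deviates more from the baseline window. */
#include <stdio.h>
#include <math.h>

static double window_mean(const double *v, size_t n) {
    double sum = 0.0;
    for (size_t i = 0; i < n; i++) sum += v[i];
    return n ? sum / (double)n : 0.0;
}

static double correlation_score(const double *baseline, size_t nb,
                                const double *highlight, size_t nh) {
    double mb = window_mean(baseline, nb);
    double mh = window_mean(highlight, nh);
    double denom = fabs(mb) > 1e-9 ? fabs(mb) : 1.0;   /* avoid division by zero */
    return fabs(mh - mb) / denom;
}

int main(void) {
    double baseline[]  = { 10, 11, 9, 10, 10, 12 };
    double highlight[] = { 25, 30, 28 };
    printf("score: %.2f\n", correlation_score(baseline, 6, highlight, 3));
    return 0;
}
```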
2022-04-25 | fix(cgroups.plugin): remove "enable cgroup X" config option on cgroup deletion (#12746) | Ilya Mashchenko
2022-03-15 | Remove backends subsystem (#12146) | Vladimir Kobal
2022-01-19 | Compute platform-specific list of static_threads at runtime. (#11955) | vkalintiris
Compute array of static threads at runtime.
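A hedged sketch of what computing the static_threads array at runtime can look like: concatenate the common list with a platform-specific list and terminate it. The struct layout and thread names are assumptions for illustration, not the actual definitions.

```c
/* Sketch: build the static_threads array at runtime from a common list
 * plus a platform-specific list, terminated by a NULL-named entry. */
#include <stdio.h>
#include <stdlib.h>
#include <string.h>

struct static_thread {
    const char *name;
    void *(*start_routine)(void *);
};

static struct static_thread *static_threads_concat(const struct static_thread *common, size_t n_common,
                                                   const struct static_thread *platform, size_t n_platform) {
    struct static_thread *all = malloc((n_common + n_platform + 1) * sizeof(*all));
    memcpy(all, common, n_common * sizeof(*all));
    memcpy(all + n_common, platform, n_platform * sizeof(*all));
    all[n_common + n_platform] = (struct static_thread){ NULL, NULL };   /* terminator */
    return all;
}

int main(void) {
    const struct static_thread common[]   = { { "HEALTH", NULL }, { "ANALYTICS", NULL } };
    const struct static_thread platform[] = { { "PLUGIN[proc]", NULL } };   /* e.g. Linux-only */

    struct static_thread *all = static_threads_concat(common, 2, platform, 1);
    for (size_t i = 0; all[i].name; i++)
        printf("static thread: %s\n", all[i].name);
    free(all);
    return 0;
}
```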
2021-10-27 | Anomaly Detection MVP (#11548) | vkalintiris
* Add support for feature extraction and K-Means clustering. This patch adds support for performing feature extraction and running the K-Means clustering algorithm on the extracted features. We use the open-source dlib library to compute the K-Means clustering centers, which has been added as a new git submodule. The build system has been updated to recognize two new options: 1) --enable-ml: build an agent with ml functionality, and 2) --enable-ml-tests: support running tests with the `-W mltest` option in netdata. The second flag is meant only for internal use. To build tests successfully, you need to install the GoogleTest framework on your machine. * Boilerplate code to track hosts/dims and init ML config options. A new opaque pointer field is added to the database's host and dimension data structures. The fields point to C++ wrapper classes that will be used to store ML-related information in follow-up patches. The ML functionality needs to iterate all tracked dimensions twice per second. To avoid locking the entire DB multiple times, we use a separate dictionary to add/remove dimensions as they are created/deleted by the database. A global configuration object is initialized during the startup of the agent. It will allow our users to specify ML-related configuration options, eg. hosts/charts to skip from training, etc. * Add support for training and prediction of dimensions. Every new host spawns a training thread which is used to train the model of each dimension. Training of dimensions is done in a non-batching mode in order to avoid impacting the generated ML model by the CPU, RAM and disk utilization of the training code itself. For performance reasons, prediction is done at the time a new value is pushed in the database. The alternative option, ie. maintaining a separate thread for prediction, would be ~3-4x times slower and would increase locking contention considerably. For similar reasons, we use a custom function to unpack storage_numbers into doubles, instead of long doubles. * Add data structures required by the anomaly detector. This patch adds two data structures that will be used by the anomaly detector in follow-up patches. The first data structure is a circular bit buffer which is being used to count the number of set bits over time. The second data structure represents an expandable, rolling window that tracks set/unset bits. It is explicitly modeled as a finite-state machine in order to make the anomaly detector's behaviour easier to test and reason about. * Add anomaly detection thread. This patch creates a new anomaly detection thread per host. Each thread maintains a BitRateWindow which is updated every second based on the anomaly status of the correspondent host. Based on the updated status of the anomaly window, we can identify the existence/absence of an anomaly event, it's start/end time and the dimensions that participate in it. * Create/insert/query anomaly events from Sqlite DB. * Create anomaly event endpoints. This patch adds two endpoints to expose information about anomaly events. The first endpoint returns the list of anomalous events within a specified time range. The second endpoint provides detailed information about a single anomaly event, ie. the list of anomalous dimensions in that event along with their anomaly rate. The `anomaly-bit` option has been added to the `/data` endpoint in order to allow users to get the anomaly status of individual dimensions per second. * Fix build failures on Ubuntu 16.04 & CentOS 7. 
These distros do not have toolchains with C++11 enabled by default. Replacing nullptr with NULL should be fix the build problems on these platforms when the ML feature is not enabled. * Fix `make dist` to include ML makefiles and dlib sources. Currently, we add ml/kmeans/dlib to EXTRA_DIST. We might want to generate an explicit list of source files in the future, in order to bring down the generated archive's file size. * Small changes to make the LGTM & Codacy bots happy. - Cast unused result of function calls to void. - Pass a const-ref string to Database's constructor. - Reduce the scope of a local variable in the anomaly detector. * Add user configuration option to enable/disable anomaly detection. * Do not log dimension-specific operations. Training and prediction operations happen every second for each dimension. In prep for making this PR easier to run anomaly detection for many charts & dimensions, I've removed logs that would cause log flooding. * Reset dimensions' bit counter when not above anomaly rate threshold. * Update the default config options with real values. With this patch the default configuration options will match the ones we want our users to use by default. * Update conditions for creating new ML dimensions. 1. Skip dimensions with update_every != 1, 2. Skip dimensions that come from the ML charts. With this filtering in place, any configuration value for the relevant simple_pattern expressions will work correctly. * Teach buildinfo{,json} about the ML feature. * Set --enable-ml by default in the configuration options. This patch is only meant for testing the building of the ML functionality on Github. It will be reverted once tests pass successfully. * Minor build system fixes. - Add path to json header - Enable C++ linker when ML functionality is enabled - Rename ml/ml-dummy.cc to ml/ml-dummy.c * Revert "Set --enable-ml by default in the configuration options." This reverts commit 28206952a59a577675c86194f2590ec63b60506c. We pass all Github checks when building the ML functionality, except for those that run on CentOS 7 due to not having a C++11 toolchain. * Check for missing dlib and nlohmann files. We simply check the single-source files upon which our build system depends. If they are missing, an error message notifies the user about missing git submodules which are required for the ML functionality. * Allow users to specify the maximum number of KMeans iterations. * Use dlib v19.10 v19.22 broke compatibility with CentOS 7's g++. Development of the anomaly detection used v19.10, which is the version used by most Debian and Ubuntu distribution versions that are not past EOL. No observable performance improvements/regressions specific to the K-Means algorithm occur between the two versions. * Detect and use the -std=c++11 flag when building anomaly detection. This patch automatically adds the -std=c++11 when building netdata with the ML functionality, if it's supported by the user's toolchain. With this change we are able to build the agent correctly on CentOS 7. * Restructure configuration options. - update default values, - clamp values to min/max defaults, - validate and identify conflicting values. * Add update_every configuration option. Considerring that the MVP does not support per host configuration options, the update_every option will be used to filter hosts to train. 
With this change anomaly detection will be supported on: - Single nodes with update_every != 1, and - Children nodes with a common update_every value that might differ from the value of the parent node. * Reorganize anomaly detection charts. This follows Andrew's suggestion to have four charts to show the number of anomalous/normal dimensions, the anomaly rate, the detector's window length, and the events that occur in the prediction step. Context and family values, along with the necessary information in the dashboard_info.js file, will be updated in a follow-up commit. * Do not dump anomaly event info in logs. * Automatically handle low "train every secs" configuration values. If a user specifies a very low value for the "train every secs", then it is possible that the time it takes to train a dimension is higher than the its allotted time. In that case, we want the training thread to: - Reduce it's CPU usage per second, and - Allow the prediction thread to proceed. We achieve this by limiting the training time of a single dimension to be equal to half the time allotted to it. This means, that the training thread will never consume more than 50% of a single core. * Automatically detect if ML functionality should be enabled. With these changes, we enable ML if: - The user has not explicitly specified --disable-ml, and - Git submodules have been checked out properly, and - The toolchain supports C++11. If the user has explicitly specified --enable-ml, the build fails if git submodules are missing, or the toolchain does not support C++11. * Disable anomaly detection by default. * Do not update charts in locked region. * Cleanup code reading configuration options. * Enable C++ linker when building ML. * Disable ML functionality for CMake builds. * Skip LGTM for dlib and nlohmann libraries. * Do not build ML if libuuid is missing. * Fix dlib path in LGTM's yaml config file. * Add chart to track duration of prediction step. * Add chart to track duration of training step. * Limit the number dimensions in an anomaly event. This will ensure our JSON results won't grow without any limit. The default ML configuration options, train approximately ~1700 dimensions in a newly-installed Netdata agent. The hard-limit is set to 2000 dimensions which: - Is well above the default number of dimensions we train, - If it is ever reached it means that the user had accidentaly a very low anomaly rate threshold, and - Considering that we sort the result by anomaly score, the cutoff dimensions will be the less anomalous, ie. the least important to investigate. * Add information about the ML charts. * Update family value in ML charts. This fix will allow us to show the individual charts in the RHS Anomaly Detection submenu. * Rename chart type s/anomalydetection/anomaly_detection/g * Expose ML feat in /info endpoint. * Export ML config through /info endpoint. * Fix CentOS 7 build. * Reduce the critical region of a host's lock. Before this change, each host had a single, dedicated lock to protect its map of dimensions from adding/deleting new dimensions while training and detecting anomalies. This was problematic because training of a single dimension can take several seconds in nodes that are under heavy load. After this change, the host's lock protects only the insertion/deletion of new dimensions, and the prediction step. For the training of dimensions we use a dedicated lock per dimension, which is responsible for protecting the dimension from deletion while training. 
Prediction is fast enough, even on slow machines or under heavy load, which allows us to use the host's main lock and avoid increasing the complexity of our implementation in the anomaly detector. * Improve the way we are tracking anomaly detector's performance. This change allows us to: - track the total training time per update_every period, - track the maximum training time of a single dimension per update_every period, and - export the current number of total, anomalous, normal dimensions to the /info endpoint. Also, now that we use dedicated locks per dimensions, we can train under heavy load continuously without having to sleep in order to yield the training thread and allow the prediction thread to progress. * Use samples instead of seconds in ML configuration. This commit changes the way we are handling input ML configuration options from the user. Instead of treating values as seconds, we interpret all inputs as number of update_every periods. This allows us to enable anomaly detection on hosts that have update_every != 1 second, and still produce a model for training/prediction & detection that behaves in an expected way. Tested by running anomaly detection on an agent with update_every = [1, 2, 4] seconds. * Remove unecessary log message in detection thread * Move ML configuration to global section. * Update web/gui/dashboard_info.js Co-authored-by: Andrew Maguire <andrewm4894@gmail.com> * Fix typo Co-authored-by: Andrew Maguire <andrewm4894@gmail.com> * Rebase. * Use negative logic for anomaly bit. * Add info for prediction_stats and training_stats charts. * Disable ML on PPC64EL. The CI test fails with -std=c++11 and requires -std=gnu++11 instead. However, it's not easy to quickly append the required flag to CXXFLAGS. For the time being, simply disable ML on PPC64EL and if any users require this functionality we can fix it in the future. * Add comment on why we disable ML on PPC64EL. Co-authored-by: Andrew Maguire <andrewm4894@gmail.com>
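This entry describes a circular bit buffer that counts set anomaly bits over time and a rolling window that drives the detector. The sketch below shows a fixed 60-second rolling window of per-second anomaly bits and the anomaly rate derived from it; the window size and names are illustrative, not the actual C++ classes.

```c
/* Sketch: rolling window of anomaly bits and the anomaly rate over it. */
#include <stdio.h>
#include <stdbool.h>

#define WINDOW_SECONDS 60

typedef struct {
    bool bits[WINDOW_SECONDS];   /* one anomaly bit per second */
    size_t head;                 /* next slot to overwrite */
    size_t set_count;            /* how many bits in the window are 1 */
    size_t filled;               /* how many slots carry data so far */
} bit_window_t;

static void bit_window_push(bit_window_t *w, bool anomalous) {
    if (w->filled == WINDOW_SECONDS && w->bits[w->head])
        w->set_count--;                       /* the bit being evicted was set */
    w->bits[w->head] = anomalous;
    if (anomalous) w->set_count++;
    w->head = (w->head + 1) % WINDOW_SECONDS;
    if (w->filled < WINDOW_SECONDS) w->filled++;
}

static double bit_window_anomaly_rate(const bit_window_t *w) {
    return w->filled ? 100.0 * (double)w->set_count / (double)w->filled : 0.0;
}

int main(void) {
    bit_window_t w = { 0 };
    for (int t = 0; t < 90; t++)
        bit_window_push(&w, (t % 10) == 0);   /* pretend every 10th sample is anomalous */
    printf("anomaly rate over the last %d seconds: %.1f%%\n",
           WINDOW_SECONDS, bit_window_anomaly_rate(&w));
    return 0;
}
```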
2021-07-19 | Move cleanup of obsolete charts to a separate thread (#11222) | Vladimir Kobal
2021-06-14 | Fixes error on --disable-cloud (#11244) | Timotej S
always include aclk_api.h
2021-06-14 | Allows ACLK NG and Legacy to coexist (#11225) | Timotej S
2021-05-31 | Provide UTC offset in seconds and edit health config command (#11051) | Emmanuel Vasilakis
* add abbreviated timezone, utc offset in seconds, and edit health alarm command rebased * formatting * use str2i instead of atoi
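A small sketch of obtaining the abbreviated timezone and the UTC offset in seconds, as exposed by this change, using the tm_gmtoff and tm_zone fields (glibc/BSD extensions); this is an illustration under those assumptions, not the agent's code.

```c
/* Sketch: UTC offset in seconds and abbreviated timezone via struct tm. */
#define _GNU_SOURCE
#include <stdio.h>
#include <time.h>

int main(void) {
    time_t now = time(NULL);
    struct tm local;
    localtime_r(&now, &local);

    long utc_offset_seconds = local.tm_gmtoff;   /* e.g. 7200 for UTC+02:00 */
    printf("abbreviated timezone: %s, utc offset: %ld seconds\n",
           local.tm_zone, utc_offset_seconds);
    return 0;
}
```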
2021-05-24 | Remove unnecessary relative paths when including headers. (#11124) | vkalintiris
Currently, we add the repository's top-level dir in the compiler's header search path. This means that code in every top-level directory within the repo can include headers sibling top-level directories. This patch makes header inclusion consistent when it comes to files that are included from sibling top-level directories within the repo.
2021-04-27 | Provide more agent analytics to posthog (#11020) | Emmanuel Vasilakis
* Move statistics related functions to analytics.c * error message change, space added after if * start an analytics thread * use heartbeat instead of sleep * add late environment (after rrdinit) pick of some attributes * change loop * re-enable info messages * remove possible new line * log and report hits on allmetrics pages. detect if exporting engines are enabled/in use, and report them * use lowercase for analytics variables * add collectors * add buildinfo * more attributes from late environment * add new attributes to v1/info * re-gather meta data before exit. update allmetrics counters to be available in v1/info * log hits to dashboard * add mirrored hosts * added notification methods * fix spaces, proper JSON naming * add alerts, charts and metrics count * more attributes * keep the thread up, and report a meta event every 2 hours * small formatting changes. Disable analytics_log_prometheus for unit testing. Add the new attributes to the anonymous-statistics.sh.in script * applied clang-format * dont gather data again on exit * safe buffer length in snprintfz * add rrdset lock * remove show_archived * remove setenv * calculate lengths during sets
2021-04-21 | Revert "Provide more agent analytics to posthog (#10887)" (#11011) | Emmanuel Vasilakis
This reverts commit a1ce482f3e336dbabe1b12b92f6339af6a2bbbf8.
2021-04-21 | Provide more agent analytics to posthog (#10887) | Emmanuel Vasilakis
* Move statistics related functions to analytics.c * error message change, space added after if * start an analytics thread * use heartbeat instead of sleep * add late environment (after rrdinit) pick of some attributes * change loop *