* modularized all external plugins
* added README.md in plugins
* fixed title
* fixed typo
* relative link to external plugins
* external plugins configuration README
* added plugins link
* remove plugins link
* plugin names are links
* added links to external plugins
* removed unnecessary spacing
* list to table
* added language
* fixed typo
* list to table on internal plugins
* added more documentation to internal plugins
* moved python, node, and bash code and configs into the external plugins
* added statsd README
* fix bug with corrupting config.h every 2nd compilation
* moved all config files together with their code
* more documentation
* diskspace info
* fixed broken links in apps.plugin
* added backends docs
* updated plugins readme
* move nc-backend.sh to backends
* created daemon directory
* moved all code outside src/
* fixed readme indentation
* renamed plugins.d.plugin to plugins.d
* updated readme
* removed linux- from linux plugins
* updated readme
* updated readme
* updated readme
* updated readme
* updated readme
* updated readme
* fixed README.md links
* fixed netdata tree links
* updated codacy, codeclimate and lgtm excluded paths
* update CMakeLists.txt
* updated automake options at top directory
* libnetdata split into directories
* updated READMEs
* updated READMEs
* updated ARL docs
* updated ARL docs
* moved /plugins to /collectors
* moved all external plugins outside plugins.d
* updated codacy, codeclimate, lgtm
* updated README
* updated url
* updated readme
* updated readme
* updated readme
* updated readme
* moved api and web into webserver
* web/api web/gui web/server
* modularized webserver
* removed web/gui/version.txt
|
* There is no "cached" dimension on FreeBSD, use "cache" instead
* Account "buffers" as used memory, since buffers by default don't shrink
much under pressure.
* Account "inactive" as free memory, since pages from the inactive list can be
cleared and become free as soon as somebody requests more memory from the
kernel.
* Sign CLA
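The accounting rules above amount to a small regrouping of FreeBSD's page counters. A minimal sketch of that grouping (only the buffers/inactive/cache treatment comes from the commits above; folding active and wired pages into "used", and the function name itself, are our assumptions):

```python
def freebsd_meminfo(active, wired, buffers, inactive, cache, free):
    """Group FreeBSD memory page counters for charting.

    Per the commits above:
      - "buffers" is accounted as used, since buffers by default
        don't shrink much under memory pressure;
      - "inactive" is accounted as free, since inactive pages are
        reclaimed as soon as more memory is requested;
      - the FreeBSD dimension is named "cache", not Linux's "cached".
    Counting active + wired as used is an assumption of this sketch.
    """
    return {
        "used": active + wired + buffers,
        "cache": cache,
        "free": free + inactive,
    }
```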
|
* source is . in /bin/sh
* code cleanup and documentation
|
* Add a python plugin for monitoring power supplies on Linux.
This adds a python-based module for tracking statistics relating to
Linux kernel power_supply class devices. This allows tracking battery
statistics on Linux systems, as well as (in theory) other energy storage
devices that utilize the kernel's power_supply class.
The primary purpose of this module is twofold:
- To provide a way for battery powered IoT devices to easily alert about a
low battery.
- To provide a way for all battery powered devices to alert on some easy
to monitor battery health conditions.
It provides up to four charts, one which provides the remaining capacity
as a percentage, and three others which report info about charge (in
amp-hours), energy (in watt-hours), and voltage, each providing info
about the current values, and possibly minimal and maximal values that
can be used for computing battery life.
Exact support provided by each individual device varies. Almost all
provide the percentage capacity, but beyond that they may or may not
support any or all of the attributes needed for the other three charts
(ACPI compliant systems for example support most of the charge related
ones, and two of the voltage related values, but none of the energy
related ones).
Data collection is done by scanning entries in /sys/class/power_supply.
One job must be created for each power supply to be monitored, and there
is no autodetection (though the config includes an example that should
work to monitor the main battery on most laptops).
* Fix the build.
* Fix one bug and various style issues.
* Add a check to make sure it only runs on Linux.
* Fixed formatting issues reported by flake8.
* Updated to only collect capacity by default.
* Add an alarm to alert on low battery.
* Update function names to not be sunder style.
* Split chart generation to a separate function.
* Remove get_sysfs_value_or_zero.
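The sysfs scan described above can be sketched as follows (a minimal standalone sketch: the directory layout and attribute names follow the kernel's power_supply class ABI, but the helper name and the exact attribute list are our assumptions; real devices expose only a subset):

```python
import os

def read_power_supply(name, base="/sys/class/power_supply"):
    """Read the attributes one power_supply device exposes under sysfs.

    Returns a dict of attribute -> int. sysfs reports micro-units
    (uAh for charge, uWh for energy, uV for voltage); "capacity" is
    a plain percentage. Attributes a device does not provide are
    simply skipped, since support varies widely per device.
    """
    attrs = ("capacity", "charge_now", "charge_full", "charge_full_design",
             "energy_now", "energy_full", "voltage_now",
             "voltage_min_design", "voltage_max_design")
    data = {}
    for attr in attrs:
        path = os.path.join(base, name, attr)
        try:
            with open(path) as f:
                data[attr] = int(f.read().strip())
        except (OSError, ValueError):
            continue  # device does not expose this attribute
    return data
```

As the commits note, there is no autodetection: one job per power supply, with the device name (e.g. `BAT0`) supplied in the config.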
|
* log flood should not be disabled; #4312
* use 10x the logs with the minimum of 10000
|
IPv6 literals in URLs must be enclosed in square brackets
|
IPv6 literals in URLs must be enclosed in square brackets
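The bracketing rule can be sketched as a tiny URL builder (per RFC 3986; the function name is ours, and netdata's default port 19999 is used only as an illustrative default):

```python
def netdata_url(host, port=19999, path="/"):
    """Build a URL for a host that may be an IPv6 literal.

    Per RFC 3986, an IPv6 address in the authority component must be
    wrapped in square brackets so its colons are not confused with
    the host:port separator.
    """
    if ":" in host and not host.startswith("["):
        host = "[" + host + "]"
    return "http://%s:%d%s" % (host, port, path)
```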
|
* makefiles install configs in /usr/lib/netdata/conf.d; #4182
* stock health config in /usr/lib/netdata/conf.d/health.d
* unit test path concatenation
* simplified health file management
* use stream.conf from stock config if it does not exist in /etc/netdata
* indicate loading of user config in function call
* load netdata.conf from stock dir if not found in /etc/netdata
* added NETDATA_USER_CONFIG_DIR
* provide defaults before loading config
* charts.d uses stock files
* fping now uses the stock config files
* tc-qos-helper.sh now uses stock configs
* cgroup-name.sh now uses stock configs too
* simplified cgroup-name.sh for user and stock config
* alarm-notify.sh uses stock configs too
* simplified fping plugin configs loading
* simplified tc-qos-helper.sh configs loading
* added error handling to charts.d.plugin
* apps.plugin uses stock configs
* generalized recursive double-directory configs loading
* statsd uses stock configs
* node.d.plugin uses stock configs
* compile-time decision of netdata default paths for all files
* makeself cleans up old stock config files from user configuration directories
* fixed makeself typo
* netdata-installer.sh removes stock files from user configuration directories
* python.d.plugin user/stock configs update
* cleanup stock config files from /etc/netdata, only once
* python.d.plugin log loaded files
* fix permissions of stock config files and provide an "orig" link for quick access
* create help link on stock configs migration for static installations
* create user config directories
* example statsd synthetic charts now state they are examples
* updated configs.signatures
* spec file
* fixes in spec file
* fix typo
* install netdata after cleaning up stock configs from /etc/netdata
* python.d: add cpuidle stock conf
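The user-then-stock lookup that the commits above apply everywhere can be sketched as (the two directory paths come from the commit messages; the function name is ours):

```python
import os

USER_CONFIG_DIR = "/etc/netdata"
STOCK_CONFIG_DIR = "/usr/lib/netdata/conf.d"

def find_config(filename, user_dir=USER_CONFIG_DIR, stock_dir=STOCK_CONFIG_DIR):
    """Resolve a config file: prefer the user's edited copy under
    /etc/netdata, fall back to the shipped stock copy under
    /usr/lib/netdata/conf.d, and return None if neither exists."""
    for directory in (user_dir, stock_dir):
        path = os.path.join(directory, filename)
        if os.path.isfile(path):
            return path
    return None
```

This is why the installer can safely clean stock files out of /etc/netdata: anything the user has not edited is still found in the stock directory.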
|
* tcp syn and accept queue charts and alarms; fixes #3234
* tcp syn and accept queue converted to auto
* updated configs.signatures
* enable 1m_ipv4_tcp_accept_queue_drops alarm
* /proc/net/netstat refers to the whole networking stack
* updated configs.signatures
|
* Make method in url service configurable
#4127
* Add documentation for http method
* wake up WIP
|
* replaced references to the firehol github org with the netdata github org
* increased versions of js files
* added new docker hub badge netdata/netdata and restored firehol/netdata
|
* Disable Pin API
Disabling the pin api due to ipfs performance issues with larger object storage.
Probable fix for #3156 while ipfs has not solved this issue (go-ipfs issue #3874)
* Disable IPFS Pin API (#1)
* Disable IPFS pin api by default
Disabling the pin api by default due to ipfs performance issues with larger object storage.
Probable fix for #3156 while ipfs has not solved this issue (go-ipfs issue #3874)
* Disable IPFS pin api by default
Disabling the pin api by default due to ipfs performance issues with larger object storage.
Probable fix for #3156 while ipfs has not solved this issue (go-ipfs issue #3874)
* Added Bug Link
* changed pinapi Type to bool
* changed pinapi Type to bool
* changed pinapi Type to bool
* load pinapi value correctly from conf file
* move pinapi setting to correct location
* modified bool style
|
* python.d.plugin: respect update_every in jobs
* python.d.plugin: run gc.collect every 300 secs in main thread
* python.d.plugin: gc collect workaround comment
* python.d.plugin: do not run all modules debug
* python.d.plugin: make gc.collect runs optional, add options to python.d.conf
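The periodic, optional garbage collection described above can be sketched as (the 300-second interval comes from the commit message; the helper name and signature are our assumptions):

```python
import gc
import time

GC_INTERVAL = 300  # seconds, per the commit message above

def maybe_collect(last_run, now=None, interval=GC_INTERVAL, enabled=True):
    """Run gc.collect() from the main thread at most once per
    `interval` seconds. The `enabled` flag reflects the commits
    above making collection runs optional via python.d.conf.
    Returns the timestamp of the most recent collection."""
    if now is None:
        now = time.monotonic()
    if enabled and now - last_run >= interval:
        gc.collect()
        return now
    return last_run
```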
|
* send host variables to prometheus; fixes #4035
* make sure labels do not start with comma
* embed the custom variables scopes into the structures; expose chart variables and the host level
* check the type of variable when streaming host variables
* added URL option variables=yes to enable sending host variables to prometheus; code cleanup to backends
* removed _host_var from variables
* added flags to tag alarm variables; set the last updated time of alarm variables
|
* Added graphs in web_log to track the virtual hosts and the port number (http vs https)
* Corrections following 1st PR
* Fixed contributors
* Removed initialization of port_80 & port_443
|
* rethinkdb python module init version
* python readme: add rethinkdb
|
* updated configs.signatures
* fix for load average alarms; fixes #4175
* updated configs.signatures
|
* Add alarms for abnormally high load averages.
This adds reasonably conservative alarms to send alerts on abnormally
high load averages. Such a situation may be indicative of a DoS attack,
runaway processes, or simply use of underpowered hardware.
This intentionally does not compute averages, as doing so would be
redundant (we are dealing with load _averages_ after all), which makes
the lookup lines look a bit odd in comparison to most other alarms.
The actual alarm calculation is as follows:
* Compute the baseline trigger threshold. This is either 2 or the
maximum number of CPUs that were present in the system over the last
minute, whichever is higher. This special-cases single-CPU systems to
be a bit less aggressive, as they are more often over-committed than
systems with multiple cores.
* For the 15 minute load average, if the maximum value over the last
minute is greater than twice the trigger threshold, issue a warning.
* For the 5 minute load average, if the maximum value over the last
minute is greater than four times the trigger value, issue a warning.
* For the 1 minute load average, if the maximum value over the last
minute is greater than eight times the trigger value, issue a warning.
* For all the load averages, if the value is greater than twice the
warning requirement, issue a critical alert.
* Downward hysteresis is provided so that each alarm only resets when the
value goes below 7/8 of the value for that alarm status.
* Each alarm is evaluated once per minute.
This behavior should be suitable for most server type systems and many
workstations, but may be a bit overaggressive for certain types of system
(build systems for example).
* Fixed calculations of the base trigger value.
Credit goes to @ktsaou for pointing out how the original implementation
was incorrect.
* Update alarms with correct OS information.
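The threshold arithmetic described above can be sketched as follows (a sketch of the arithmetic only, not netdata's actual health-config syntax; the function and parameter names are ours):

```python
def load_average_status(load_max, cpus, multiplier, prev_status="clear"):
    """Evaluate one load-average alarm per the rules above.

    base       : max(cpus, 2), the single-CPU special case
    multiplier : 2 for the 15 min average, 4 for 5 min, 8 for 1 min
    warning    : load above multiplier * base
    critical   : load above twice the warning threshold
    """
    base = max(cpus, 2)
    warn = multiplier * base
    crit = 2 * warn
    # downward hysteresis: an alarm only resets once the value
    # drops below 7/8 of the threshold for its current status
    if load_max > crit or (prev_status == "critical" and load_max > crit * 7 / 8):
        return "critical"
    if load_max > warn or (prev_status == "warning" and load_max > warn * 7 / 8):
        return "warning"
    return "clear"
```

For example, on a 2-CPU box the 15-minute alarm (multiplier 2) warns above a load of 4 and goes critical above 8, and a warning raised at 4 does not clear until the load drops below 3.5.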
|
Add node to apps_groups
|
Add support for float values for time_multiplier in web_log
|
Add Amazon SNS notification support.
|
The Simple Notification Service (SNS) is a reasonably simple message
broker service provided by Amazon as part of its AWS offerings.
SNS utilizes the concept of 'topics' (similar to Netdata's concept of
'roles' for notifications) to control message routing. Any given topic
may have any number of subscribers of any of the following types:
* Email addresses.
* Phone numbers for SMS.
* HTTP or HTTPS web hooks.
* AWS Lambda endpoints.
* AWS SQS endpoints.
* Mobile applications (via various native push notification services).
Topics are represented by Amazon Resource Names (ARN), which are a special
type of URI.
Unfortunately, properly signing requests to AWS endpoints is a serious
pain in the arse, so we pretty much have to use the CLI interface.
Because of the inflexibility of this tool, setup is somewhat painful.
This uses topic ARNs directly as recipients. SNS does not support
delivery to multiple topics in bulk, so a separate call is required for
each topic ARN. Users are provided with the ability to customize the
message format used for the notifications.
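The per-topic delivery loop described above could look like this (a hedged sketch: `aws sns publish` with `--topic-arn` and `--message` is the real AWS CLI invocation, but the helper name and return convention are ours, and the real notification script is shell, not Python):

```python
import subprocess

def send_sns(topic_arns, message, aws_cli="aws"):
    """Publish one message to each SNS topic ARN.

    SNS has no bulk publish-to-many-topics call, so one
    `aws sns publish` invocation is made per recipient topic.
    Returns the list of ARNs that were delivered successfully.
    """
    sent = []
    for arn in topic_arns:
        result = subprocess.run(
            [aws_cli, "sns", "publish",
             "--topic-arn", arn,
             "--message", message],
            capture_output=True)
        if result.returncode == 0:
            sent.append(arn)
    return sent
```

Shelling out to the CLI sidesteps AWS request signing at the cost of requiring the `aws` tool to be installed and configured with credentials.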
|
https://github.com/tioumen/netdata into ms_team_notification_support
Merging with @ktsaou one
|
python megacli plugin
|
in alarm-notify.sh
|