author     Netdata bot <43409846+netdatabot@users.noreply.github.com>  2024-05-27 14:18:55 -0400
committer  GitHub <noreply@github.com>  2024-05-27 21:18:55 +0300
commit     cdcbe3a5fc540301a117cecfbc606c35870f4e13 (patch)
tree       f12a086a4e760c8650206c88b192584021728934 /integrations
parent     a89905ad3acf80d8c2c441f9fff338e34b47695b (diff)
Regenerate integrations.js (#17761)
Co-authored-by: ilyam8 <22274335+ilyam8@users.noreply.github.com>
Diffstat (limited to 'integrations')
-rw-r--r--  integrations/integrations.js    4
-rw-r--r--  integrations/integrations.json  4
2 files changed, 4 insertions, 4 deletions
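Both regenerated files carry the same ClickHouse collector entry; the hunks below replace its "alerts" string (previously "no alerts configured by default") with a table of ten health alerts and add the clickhouse.longest_running_query_time metric to its "metrics" string. As a minimal sketch of how the regenerated entry can be inspected, assuming a Node.js ESM script sitting next to the integrations directory (the path and script context are assumptions, not part of this change):

```js
// Illustrative sketch only (not part of the regenerated output): load the
// ESM module and look up the ClickHouse collector entry by its id.
import { integrations } from "./integrations/integrations.js";

const clickhouse = integrations.find(
  (entry) => entry.id === "go.d.plugin-clickhouse-ClickHouse"
);

// "alerts" and "metrics" are Markdown strings embedded in the generated file.
console.log(clickhouse.alerts.includes("clickhouse_replication_lag"));             // true after this change
console.log(clickhouse.metrics.includes("clickhouse.longest_running_query_time")); // true after this change
```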
diff --git a/integrations/integrations.js b/integrations/integrations.js
index dd1320aaca..857f14f1d5 100644
--- a/integrations/integrations.js
+++ b/integrations/integrations.js
@@ -3434,8 +3434,8 @@ export const integrations = [
"overview": "# ClickHouse\n\nPlugin: go.d.plugin\nModule: clickhouse\n\n## Overview\n\nThis collector retrieves performance data from ClickHouse for connections, queries, resources, replication, IO, and data operations (inserts, selects, merges) using HTTP requests and ClickHouse system tables. It monitors your ClickHouse server's health and activity.\n\n\nIt sends HTTP requests to the ClickHouse [HTTP interface](https://clickhouse.com/docs/en/interfaces/http), executing SELECT queries to retrieve data from various system tables.\nSpecifically, it collects metrics from the following tables:\n\n- system.metrics\n- system.async_metrics\n- system.events\n- system.disks\n- system.parts\n- system.processes\n\n\nThis collector is supported on all platforms.\n\nThis collector supports collecting metrics from multiple instances of this integration, including remote instances.\n\n\n### Default Behavior\n\n#### Auto-Detection\n\nBy default, it detects ClickHouse instances running on localhost that are listening on port 8123.\nOn startup, it tries to collect metrics from:\n\n- http://127.0.0.1:8123\n\n\n#### Limits\n\nThe default configuration for this integration does not impose any limits on data collection.\n\n#### Performance Impact\n\nThe default configuration for this integration is not expected to impose a significant performance impact on the system.\n",
"setup": "## Setup\n\n### Prerequisites\n\nNo action required.\n\n### Configuration\n\n#### File\n\nThe configuration file name for this integration is `go.d/clickhouse.conf`.\n\n\nYou can edit the configuration file using the `edit-config` script from the\nNetdata [config directory](/docs/netdata-agent/configuration/README.md#the-netdata-config-directory).\n\n```bash\ncd /etc/netdata 2>/dev/null || cd /opt/netdata/etc/netdata\nsudo ./edit-config go.d/clickhouse.conf\n```\n#### Options\n\nThe following options can be defined globally: update_every, autodetection_retry.\n\n\n{% details summary=\"Config options\" %}\n| Name | Description | Default | Required |\n|:----|:-----------|:-------|:--------:|\n| update_every | Data collection frequency. | 1 | no |\n| autodetection_retry | Recheck interval in seconds. Zero means no recheck will be scheduled. | 0 | no |\n| url | Server URL. | http://127.0.0.1:8123 | yes |\n| timeout | HTTP request timeout. | 1 | no |\n| username | Username for basic HTTP authentication. | | no |\n| password | Password for basic HTTP authentication. | | no |\n| proxy_url | Proxy URL. | | no |\n| proxy_username | Username for proxy basic HTTP authentication. | | no |\n| proxy_password | Password for proxy basic HTTP authentication. | | no |\n| method | HTTP request method. | GET | no |\n| body | HTTP request body. | | no |\n| headers | HTTP request headers. | | no |\n| not_follow_redirects | Redirect handling policy. Controls whether the client follows redirects. | no | no |\n| tls_skip_verify | Server certificate chain and hostname validation policy. Controls whether the client performs this check. | no | no |\n| tls_ca | Certification authority that the client uses when verifying the server's certificates. | | no |\n| tls_cert | Client TLS certificate. | | no |\n| tls_key | Client TLS key. | | no |\n\n{% /details %}\n#### Examples\n\n##### Basic\n\nA basic example configuration.\n\n```yaml\njobs:\n - name: local\n url: http://127.0.0.1:8123\n\n```\n##### HTTP authentication\n\nBasic HTTP authentication.\n\n{% details summary=\"Config\" %}\n```yaml\njobs:\n - name: local\n url: http://127.0.0.1:8123\n username: username\n password: password\n\n```\n{% /details %}\n##### HTTPS with self-signed certificate\n\nClickHouse with enabled HTTPS and self-signed certificate.\n\n{% details summary=\"Config\" %}\n```yaml\njobs:\n - name: local\n url: https://127.0.0.1:8123\n tls_skip_verify: yes\n\n```\n{% /details %}\n##### Multi-instance\n\n> **Note**: When you define multiple jobs, their names must be unique.\n\nCollecting metrics from local and remote instances.\n\n\n{% details summary=\"Config\" %}\n```yaml\njobs:\n - name: local\n url: http://127.0.0.1:8123\n\n - name: remote\n url: http://192.0.2.1:8123\n\n```\n{% /details %}\n",
"troubleshooting": "## Troubleshooting\n\n### Debug Mode\n\nTo troubleshoot issues with the `clickhouse` collector, run the `go.d.plugin` with the debug option enabled. The output\nshould give you clues as to why the collector isn't working.\n\n- Navigate to the `plugins.d` directory, usually at `/usr/libexec/netdata/plugins.d/`. If that's not the case on\n your system, open `netdata.conf` and look for the `plugins` setting under `[directories]`.\n\n ```bash\n cd /usr/libexec/netdata/plugins.d/\n ```\n\n- Switch to the `netdata` user.\n\n ```bash\n sudo -u netdata -s\n ```\n\n- Run the `go.d.plugin` to debug the collector:\n\n ```bash\n ./go.d.plugin -d -m clickhouse\n ```\n\n",
- "alerts": "## Alerts\n\nThere are no alerts configured by default for this integration.\n",
- "metrics": "## Metrics\n\nMetrics grouped by *scope*.\n\nThe scope defines the instance that the metric belongs to. An instance is uniquely identified by a set of labels.\n\n\n\n### Per ClickHouse instance\n\nThese metrics refer to the entire monitored application.\n\nThis scope has no labels.\n\nMetrics:\n\n| Metric | Dimensions | Unit |\n|:------|:----------|:----|\n| clickhouse.connections | tcp, http, mysql, postgresql, interserver | connections |\n| clickhouse.slow_reads | slow | reads/s |\n| clickhouse.read_backoff | read_backoff | events/s |\n| clickhouse.memory_usage | used | bytes |\n| clickhouse.running_queries | running | queries |\n| clickhouse.queries_preempted | preempted | queries |\n| clickhouse.queries | successful, failed | queries/s |\n| clickhouse.select_queries | successful, failed | selects/s |\n| clickhouse.insert_queries | successful, failed | inserts/s |\n| clickhouse.queries_memory_limit_exceeded | mem_limit_exceeded | queries/s |\n| clickhouse.queries_latency | queries_time | microseconds |\n| clickhouse.select_queries_latency | selects_time | microseconds |\n| clickhouse.insert_queries_latency | inserts_time | microseconds |\n| clickhouse.io | reads, writes | bytes/s |\n| clickhouse.iops | reads, writes | ops/s |\n| clickhouse.io_errors | read, write | errors/s |\n| clickhouse.io_seeks | lseek | ops/s |\n| clickhouse.io_file_opens | file_open | ops/s |\n| clickhouse.replicated_parts_current_activity | fetch, send, check | parts |\n| clickhouse.replicas_max_absolute_dela | replication_delay | seconds |\n| clickhouse.replicated_readonly_tables | read_only | tables |\n| clickhouse.replicated_data_loss | data_loss | events |\n| clickhouse.replicated_part_fetches | successful, failed | fetches/s |\n| clickhouse.inserted_rows | inserted | rows/s |\n| clickhouse.inserted_bytes | inserted | bytes/s |\n| clickhouse.rejected_inserts | rejected | inserts/s |\n| clickhouse.delayed_inserts | delayed | inserts/s |\n| clickhouse.delayed_inserts_throttle_time | delayed_inserts_throttle_time | milliseconds |\n| clickhouse.selected_bytes | selected | bytes/s |\n| clickhouse.selected_rows | selected | rows/s |\n| clickhouse.selected_parts | selected | parts/s |\n| clickhouse.selected_ranges | selected | ranges/s |\n| clickhouse.selected_marks | selected | marks/s |\n| clickhouse.merges | merge | ops/s |\n| clickhouse.merges_latency | merges_time | milliseconds |\n| clickhouse.merged_uncompressed_bytes | merged_uncompressed | bytes/s |\n| clickhouse.merged_rows | merged | rows/s |\n| clickhouse.merge_tree_data_writer_inserted_rows | inserted | rows/s |\n| clickhouse.merge_tree_data_writer_uncompressed_bytes | inserted | bytes/s |\n| clickhouse.merge_tree_data_writer_compressed_bytes | written | bytes/s |\n| clickhouse.uncompressed_cache_requests | hits, misses | requests/s |\n| clickhouse.mark_cache_requests | hits, misses | requests/s |\n| clickhouse.max_part_count_for_partition | max_parts_partition | parts |\n| clickhouse.parts_count | temporary, pre_active, active, deleting, delete_on_destroy, outdated, wide, compact | parts |\n| distributed_connections | active | connections |\n| distributed_connections_attempts | connection | attempts/s |\n| distributed_connections_fail_retries | connection_retry | fails/s |\n| distributed_connections_fail_exhausted_retries | connection_retry_exhausted | fails/s |\n| distributed_files_to_insert | pending_insertions | files |\n| distributed_rejected_inserts | rejected | inserts/s |\n| distributed_delayed_inserts | delayed | inserts/s |\n| 
distributed_delayed_inserts_latency | delayed_time | milliseconds |\n| distributed_sync_insertion_timeout_exceeded | sync_insertion | timeouts/s |\n| distributed_async_insertions_failures | async_insertions | failures/s |\n| clickhouse.uptime | uptime | seconds |\n\n### Per disk\n\nThese metrics refer to the Disk.\n\nLabels:\n\n| Label | Description |\n|:-----------|:----------------|\n| disk_name | Name of the disk as defined in the [server configuration](https://clickhouse.com/docs/en/engines/table-engines/mergetree-family/mergetree#table_engine-mergetree-multiple-volumes_configure). |\n\nMetrics:\n\n| Metric | Dimensions | Unit |\n|:------|:----------|:----|\n| clickhouse.disk_space_usage | free, used | bytes |\n\n### Per table\n\nThese metrics refer to the Database Table.\n\nLabels:\n\n| Label | Description |\n|:-----------|:----------------|\n| database | Name of the database. |\n| table | Name of the table. |\n\nMetrics:\n\n| Metric | Dimensions | Unit |\n|:------|:----------|:----|\n| clickhouse.database_table_size | size | bytes |\n| clickhouse.database_table_parts | parts | parts |\n| clickhouse.database_table_rows | rows | rows |\n\n",
+ "alerts": "## Alerts\n\n\nThe following alerts are available:\n\n| Alert name | On metric | Description |\n|:------------|:----------|:------------|\n| [ clickhouse_restarted ](https://github.com/netdata/netdata/blob/master/src/health/health.d/clickhouse.conf) | clickhouse.uptime | ClickHouse has recently been restarted |\n| [ clickhouse_queries_preempted ](https://github.com/netdata/netdata/blob/master/src/health/health.d/clickhouse.conf) | clickhouse.queries_preempted | ClickHouse has queries that are stopped and waiting due to priority setting |\n| [ clickhouse_long_running_query ](https://github.com/netdata/netdata/blob/master/src/health/health.d/clickhouse.conf) | clickhouse.longest_running_query_time | ClickHouse has a long-running query exceeding the threshold |\n| [ clickhouse_rejected_inserts ](https://github.com/netdata/netdata/blob/master/src/health/health.d/clickhouse.conf) | clickhouse.rejected_inserts | ClickHouse has INSERT queries that are rejected due to high number of active data parts for partition in a MergeTree |\n| [ clickhouse_delayed_inserts ](https://github.com/netdata/netdata/blob/master/src/health/health.d/clickhouse.conf) | clickhouse.delayed_inserts | ClickHouse has INSERT queries that are throttled due to high number of active data parts for partition in a MergeTree |\n| [ clickhouse_replication_lag ](https://github.com/netdata/netdata/blob/master/src/health/health.d/clickhouse.conf) | clickhouse.replicas_max_absolute_delay | ClickHouse is experiencing replication lag greater than 5 minutes |\n| [ clickhouse_replicated_readonly_tables ](https://github.com/netdata/netdata/blob/master/src/health/health.d/clickhouse.conf) | clickhouse.replicated_readonly_tables | ClickHouse has replicated tables in readonly state due to ZooKeeper session loss/startup without ZooKeeper configured |\n| [ clickhouse_max_part_count_for_partition ](https://github.com/netdata/netdata/blob/master/src/health/health.d/clickhouse.conf) | clickhouse.max_part_count_for_partition | ClickHouse high number of parts per partition |\n| [ clickhouse_distributed_connections_failures ](https://github.com/netdata/netdata/blob/master/src/health/health.d/clickhouse.conf) | clickhouse.distributed_connections_fail_exhausted_retries | ClickHouse has failed distributed connections after exhausting all retry attempts |\n| [ clickhouse_distributed_files_to_insert ](https://github.com/netdata/netdata/blob/master/src/health/health.d/clickhouse.conf) | clickhouse.distributed_files_to_insert | ClickHouse high number of pending files to process for asynchronous insertion into Distributed tables |\n",
+ "metrics": "## Metrics\n\nMetrics grouped by *scope*.\n\nThe scope defines the instance that the metric belongs to. An instance is uniquely identified by a set of labels.\n\n\n\n### Per ClickHouse instance\n\nThese metrics refer to the entire monitored application.\n\nThis scope has no labels.\n\nMetrics:\n\n| Metric | Dimensions | Unit |\n|:------|:----------|:----|\n| clickhouse.connections | tcp, http, mysql, postgresql, interserver | connections |\n| clickhouse.slow_reads | slow | reads/s |\n| clickhouse.read_backoff | read_backoff | events/s |\n| clickhouse.memory_usage | used | bytes |\n| clickhouse.running_queries | running | queries |\n| clickhouse.queries_preempted | preempted | queries |\n| clickhouse.queries | successful, failed | queries/s |\n| clickhouse.select_queries | successful, failed | selects/s |\n| clickhouse.insert_queries | successful, failed | inserts/s |\n| clickhouse.queries_memory_limit_exceeded | mem_limit_exceeded | queries/s |\n| clickhouse.longest_running_query_time | longest_query_time | seconds |\n| clickhouse.queries_latency | queries_time | microseconds |\n| clickhouse.select_queries_latency | selects_time | microseconds |\n| clickhouse.insert_queries_latency | inserts_time | microseconds |\n| clickhouse.io | reads, writes | bytes/s |\n| clickhouse.iops | reads, writes | ops/s |\n| clickhouse.io_errors | read, write | errors/s |\n| clickhouse.io_seeks | lseek | ops/s |\n| clickhouse.io_file_opens | file_open | ops/s |\n| clickhouse.replicated_parts_current_activity | fetch, send, check | parts |\n| clickhouse.replicas_max_absolute_dela | replication_delay | seconds |\n| clickhouse.replicated_readonly_tables | read_only | tables |\n| clickhouse.replicated_data_loss | data_loss | events |\n| clickhouse.replicated_part_fetches | successful, failed | fetches/s |\n| clickhouse.inserted_rows | inserted | rows/s |\n| clickhouse.inserted_bytes | inserted | bytes/s |\n| clickhouse.rejected_inserts | rejected | inserts/s |\n| clickhouse.delayed_inserts | delayed | inserts/s |\n| clickhouse.delayed_inserts_throttle_time | delayed_inserts_throttle_time | milliseconds |\n| clickhouse.selected_bytes | selected | bytes/s |\n| clickhouse.selected_rows | selected | rows/s |\n| clickhouse.selected_parts | selected | parts/s |\n| clickhouse.selected_ranges | selected | ranges/s |\n| clickhouse.selected_marks | selected | marks/s |\n| clickhouse.merges | merge | ops/s |\n| clickhouse.merges_latency | merges_time | milliseconds |\n| clickhouse.merged_uncompressed_bytes | merged_uncompressed | bytes/s |\n| clickhouse.merged_rows | merged | rows/s |\n| clickhouse.merge_tree_data_writer_inserted_rows | inserted | rows/s |\n| clickhouse.merge_tree_data_writer_uncompressed_bytes | inserted | bytes/s |\n| clickhouse.merge_tree_data_writer_compressed_bytes | written | bytes/s |\n| clickhouse.uncompressed_cache_requests | hits, misses | requests/s |\n| clickhouse.mark_cache_requests | hits, misses | requests/s |\n| clickhouse.max_part_count_for_partition | max_parts_partition | parts |\n| clickhouse.parts_count | temporary, pre_active, active, deleting, delete_on_destroy, outdated, wide, compact | parts |\n| distributed_connections | active | connections |\n| distributed_connections_attempts | connection | attempts/s |\n| distributed_connections_fail_retries | connection_retry | fails/s |\n| distributed_connections_fail_exhausted_retries | connection_retry_exhausted | fails/s |\n| distributed_files_to_insert | pending_insertions | files |\n| distributed_rejected_inserts | rejected 
| inserts/s |\n| distributed_delayed_inserts | delayed | inserts/s |\n| distributed_delayed_inserts_latency | delayed_time | milliseconds |\n| distributed_sync_insertion_timeout_exceeded | sync_insertion | timeouts/s |\n| distributed_async_insertions_failures | async_insertions | failures/s |\n| clickhouse.uptime | uptime | seconds |\n\n### Per disk\n\nThese metrics refer to the Disk.\n\nLabels:\n\n| Label | Description |\n|:-----------|:----------------|\n| disk_name | Name of the disk as defined in the [server configuration](https://clickhouse.com/docs/en/engines/table-engines/mergetree-family/mergetree#table_engine-mergetree-multiple-volumes_configure). |\n\nMetrics:\n\n| Metric | Dimensions | Unit |\n|:------|:----------|:----|\n| clickhouse.disk_space_usage | free, used | bytes |\n\n### Per table\n\nThese metrics refer to the Database Table.\n\nLabels:\n\n| Label | Description |\n|:-----------|:----------------|\n| database | Name of the database. |\n| table | Name of the table. |\n\nMetrics:\n\n| Metric | Dimensions | Unit |\n|:------|:----------|:----|\n| clickhouse.database_table_size | size | bytes |\n| clickhouse.database_table_parts | parts | parts |\n| clickhouse.database_table_rows | rows | rows |\n\n",
"integration_type": "collector",
"id": "go.d.plugin-clickhouse-ClickHouse",
"edit_link": "https://github.com/netdata/netdata/blob/master/src/go/collectors/go.d.plugin/modules/clickhouse/metadata.yaml",
diff --git a/integrations/integrations.json b/integrations/integrations.json
index 1609e20e26..5db4ea0c6c 100644
--- a/integrations/integrations.json
+++ b/integrations/integrations.json
@@ -3432,8 +3432,8 @@
"overview": "# ClickHouse\n\nPlugin: go.d.plugin\nModule: clickhouse\n\n## Overview\n\nThis collector retrieves performance data from ClickHouse for connections, queries, resources, replication, IO, and data operations (inserts, selects, merges) using HTTP requests and ClickHouse system tables. It monitors your ClickHouse server's health and activity.\n\n\nIt sends HTTP requests to the ClickHouse [HTTP interface](https://clickhouse.com/docs/en/interfaces/http), executing SELECT queries to retrieve data from various system tables.\nSpecifically, it collects metrics from the following tables:\n\n- system.metrics\n- system.async_metrics\n- system.events\n- system.disks\n- system.parts\n- system.processes\n\n\nThis collector is supported on all platforms.\n\nThis collector supports collecting metrics from multiple instances of this integration, including remote instances.\n\n\n### Default Behavior\n\n#### Auto-Detection\n\nBy default, it detects ClickHouse instances running on localhost that are listening on port 8123.\nOn startup, it tries to collect metrics from:\n\n- http://127.0.0.1:8123\n\n\n#### Limits\n\nThe default configuration for this integration does not impose any limits on data collection.\n\n#### Performance Impact\n\nThe default configuration for this integration is not expected to impose a significant performance impact on the system.\n",
"setup": "## Setup\n\n### Prerequisites\n\nNo action required.\n\n### Configuration\n\n#### File\n\nThe configuration file name for this integration is `go.d/clickhouse.conf`.\n\n\nYou can edit the configuration file using the `edit-config` script from the\nNetdata [config directory](/docs/netdata-agent/configuration/README.md#the-netdata-config-directory).\n\n```bash\ncd /etc/netdata 2>/dev/null || cd /opt/netdata/etc/netdata\nsudo ./edit-config go.d/clickhouse.conf\n```\n#### Options\n\nThe following options can be defined globally: update_every, autodetection_retry.\n\n\n| Name | Description | Default | Required |\n|:----|:-----------|:-------|:--------:|\n| update_every | Data collection frequency. | 1 | no |\n| autodetection_retry | Recheck interval in seconds. Zero means no recheck will be scheduled. | 0 | no |\n| url | Server URL. | http://127.0.0.1:8123 | yes |\n| timeout | HTTP request timeout. | 1 | no |\n| username | Username for basic HTTP authentication. | | no |\n| password | Password for basic HTTP authentication. | | no |\n| proxy_url | Proxy URL. | | no |\n| proxy_username | Username for proxy basic HTTP authentication. | | no |\n| proxy_password | Password for proxy basic HTTP authentication. | | no |\n| method | HTTP request method. | GET | no |\n| body | HTTP request body. | | no |\n| headers | HTTP request headers. | | no |\n| not_follow_redirects | Redirect handling policy. Controls whether the client follows redirects. | no | no |\n| tls_skip_verify | Server certificate chain and hostname validation policy. Controls whether the client performs this check. | no | no |\n| tls_ca | Certification authority that the client uses when verifying the server's certificates. | | no |\n| tls_cert | Client TLS certificate. | | no |\n| tls_key | Client TLS key. | | no |\n\n#### Examples\n\n##### Basic\n\nA basic example configuration.\n\n```yaml\njobs:\n - name: local\n url: http://127.0.0.1:8123\n\n```\n##### HTTP authentication\n\nBasic HTTP authentication.\n\n```yaml\njobs:\n - name: local\n url: http://127.0.0.1:8123\n username: username\n password: password\n\n```\n##### HTTPS with self-signed certificate\n\nClickHouse with enabled HTTPS and self-signed certificate.\n\n```yaml\njobs:\n - name: local\n url: https://127.0.0.1:8123\n tls_skip_verify: yes\n\n```\n##### Multi-instance\n\n> **Note**: When you define multiple jobs, their names must be unique.\n\nCollecting metrics from local and remote instances.\n\n\n```yaml\njobs:\n - name: local\n url: http://127.0.0.1:8123\n\n - name: remote\n url: http://192.0.2.1:8123\n\n```\n",
"troubleshooting": "## Troubleshooting\n\n### Debug Mode\n\nTo troubleshoot issues with the `clickhouse` collector, run the `go.d.plugin` with the debug option enabled. The output\nshould give you clues as to why the collector isn't working.\n\n- Navigate to the `plugins.d` directory, usually at `/usr/libexec/netdata/plugins.d/`. If that's not the case on\n your system, open `netdata.conf` and look for the `plugins` setting under `[directories]`.\n\n ```bash\n cd /usr/libexec/netdata/plugins.d/\n ```\n\n- Switch to the `netdata` user.\n\n ```bash\n sudo -u netdata -s\n ```\n\n- Run the `go.d.plugin` to debug the collector:\n\n ```bash\n ./go.d.plugin -d -m clickhouse\n ```\n\n",
- "alerts": "## Alerts\n\nThere are no alerts configured by default for this integration.\n",
- "metrics": "## Metrics\n\nMetrics grouped by *scope*.\n\nThe scope defines the instance that the metric belongs to. An instance is uniquely identified by a set of labels.\n\n\n\n### Per ClickHouse instance\n\nThese metrics refer to the entire monitored application.\n\nThis scope has no labels.\n\nMetrics:\n\n| Metric | Dimensions | Unit |\n|:------|:----------|:----|\n| clickhouse.connections | tcp, http, mysql, postgresql, interserver | connections |\n| clickhouse.slow_reads | slow | reads/s |\n| clickhouse.read_backoff | read_backoff | events/s |\n| clickhouse.memory_usage | used | bytes |\n| clickhouse.running_queries | running | queries |\n| clickhouse.queries_preempted | preempted | queries |\n| clickhouse.queries | successful, failed | queries/s |\n| clickhouse.select_queries | successful, failed | selects/s |\n| clickhouse.insert_queries | successful, failed | inserts/s |\n| clickhouse.queries_memory_limit_exceeded | mem_limit_exceeded | queries/s |\n| clickhouse.queries_latency | queries_time | microseconds |\n| clickhouse.select_queries_latency | selects_time | microseconds |\n| clickhouse.insert_queries_latency | inserts_time | microseconds |\n| clickhouse.io | reads, writes | bytes/s |\n| clickhouse.iops | reads, writes | ops/s |\n| clickhouse.io_errors | read, write | errors/s |\n| clickhouse.io_seeks | lseek | ops/s |\n| clickhouse.io_file_opens | file_open | ops/s |\n| clickhouse.replicated_parts_current_activity | fetch, send, check | parts |\n| clickhouse.replicas_max_absolute_dela | replication_delay | seconds |\n| clickhouse.replicated_readonly_tables | read_only | tables |\n| clickhouse.replicated_data_loss | data_loss | events |\n| clickhouse.replicated_part_fetches | successful, failed | fetches/s |\n| clickhouse.inserted_rows | inserted | rows/s |\n| clickhouse.inserted_bytes | inserted | bytes/s |\n| clickhouse.rejected_inserts | rejected | inserts/s |\n| clickhouse.delayed_inserts | delayed | inserts/s |\n| clickhouse.delayed_inserts_throttle_time | delayed_inserts_throttle_time | milliseconds |\n| clickhouse.selected_bytes | selected | bytes/s |\n| clickhouse.selected_rows | selected | rows/s |\n| clickhouse.selected_parts | selected | parts/s |\n| clickhouse.selected_ranges | selected | ranges/s |\n| clickhouse.selected_marks | selected | marks/s |\n| clickhouse.merges | merge | ops/s |\n| clickhouse.merges_latency | merges_time | milliseconds |\n| clickhouse.merged_uncompressed_bytes | merged_uncompressed | bytes/s |\n| clickhouse.merged_rows | merged | rows/s |\n| clickhouse.merge_tree_data_writer_inserted_rows | inserted | rows/s |\n| clickhouse.merge_tree_data_writer_uncompressed_bytes | inserted | bytes/s |\n| clickhouse.merge_tree_data_writer_compressed_bytes | written | bytes/s |\n| clickhouse.uncompressed_cache_requests | hits, misses | requests/s |\n| clickhouse.mark_cache_requests | hits, misses | requests/s |\n| clickhouse.max_part_count_for_partition | max_parts_partition | parts |\n| clickhouse.parts_count | temporary, pre_active, active, deleting, delete_on_destroy, outdated, wide, compact | parts |\n| distributed_connections | active | connections |\n| distributed_connections_attempts | connection | attempts/s |\n| distributed_connections_fail_retries | connection_retry | fails/s |\n| distributed_connections_fail_exhausted_retries | connection_retry_exhausted | fails/s |\n| distributed_files_to_insert | pending_insertions | files |\n| distributed_rejected_inserts | rejected | inserts/s |\n| distributed_delayed_inserts | delayed | inserts/s |\n| 
distributed_delayed_inserts_latency | delayed_time | milliseconds |\n| distributed_sync_insertion_timeout_exceeded | sync_insertion | timeouts/s |\n| distributed_async_insertions_failures | async_insertions | failures/s |\n| clickhouse.uptime | uptime | seconds |\n\n### Per disk\n\nThese metrics refer to the Disk.\n\nLabels:\n\n| Label | Description |\n|:-----------|:----------------|\n| disk_name | Name of the disk as defined in the [server configuration](https://clickhouse.com/docs/en/engines/table-engines/mergetree-family/mergetree#table_engine-mergetree-multiple-volumes_configure). |\n\nMetrics:\n\n| Metric | Dimensions | Unit |\n|:------|:----------|:----|\n| clickhouse.disk_space_usage | free, used | bytes |\n\n### Per table\n\nThese metrics refer to the Database Table.\n\nLabels:\n\n| Label | Description |\n|:-----------|:----------------|\n| database | Name of the database. |\n| table | Name of the table. |\n\nMetrics:\n\n| Metric | Dimensions | Unit |\n|:------|:----------|:----|\n| clickhouse.database_table_size | size | bytes |\n| clickhouse.database_table_parts | parts | parts |\n| clickhouse.database_table_rows | rows | rows |\n\n",
+ "alerts": "## Alerts\n\n\nThe following alerts are available:\n\n| Alert name | On metric | Description |\n|:------------|:----------|:------------|\n| [ clickhouse_restarted ](https://github.com/netdata/netdata/blob/master/src/health/health.d/clickhouse.conf) | clickhouse.uptime | ClickHouse has recently been restarted |\n| [ clickhouse_queries_preempted ](https://github.com/netdata/netdata/blob/master/src/health/health.d/clickhouse.conf) | clickhouse.queries_preempted | ClickHouse has queries that are stopped and waiting due to priority setting |\n| [ clickhouse_long_running_query ](https://github.com/netdata/netdata/blob/master/src/health/health.d/clickhouse.conf) | clickhouse.longest_running_query_time | ClickHouse has a long-running query exceeding the threshold |\n| [ clickhouse_rejected_inserts ](https://github.com/netdata/netdata/blob/master/src/health/health.d/clickhouse.conf) | clickhouse.rejected_inserts | ClickHouse has INSERT queries that are rejected due to high number of active data parts for partition in a MergeTree |\n| [ clickhouse_delayed_inserts ](https://github.com/netdata/netdata/blob/master/src/health/health.d/clickhouse.conf) | clickhouse.delayed_inserts | ClickHouse has INSERT queries that are throttled due to high number of active data parts for partition in a MergeTree |\n| [ clickhouse_replication_lag ](https://github.com/netdata/netdata/blob/master/src/health/health.d/clickhouse.conf) | clickhouse.replicas_max_absolute_delay | ClickHouse is experiencing replication lag greater than 5 minutes |\n| [ clickhouse_replicated_readonly_tables ](https://github.com/netdata/netdata/blob/master/src/health/health.d/clickhouse.conf) | clickhouse.replicated_readonly_tables | ClickHouse has replicated tables in readonly state due to ZooKeeper session loss/startup without ZooKeeper configured |\n| [ clickhouse_max_part_count_for_partition ](https://github.com/netdata/netdata/blob/master/src/health/health.d/clickhouse.conf) | clickhouse.max_part_count_for_partition | ClickHouse high number of parts per partition |\n| [ clickhouse_distributed_connections_failures ](https://github.com/netdata/netdata/blob/master/src/health/health.d/clickhouse.conf) | clickhouse.distributed_connections_fail_exhausted_retries | ClickHouse has failed distributed connections after exhausting all retry attempts |\n| [ clickhouse_distributed_files_to_insert ](https://github.com/netdata/netdata/blob/master/src/health/health.d/clickhouse.conf) | clickhouse.distributed_files_to_insert | ClickHouse high number of pending files to process for asynchronous insertion into Distributed tables |\n",
+ "metrics": "## Metrics\n\nMetrics grouped by *scope*.\n\nThe scope defines the instance that the metric belongs to. An instance is uniquely identified by a set of labels.\n\n\n\n### Per ClickHouse instance\n\nThese metrics refer to the entire monitored application.\n\nThis scope has no labels.\n\nMetrics:\n\n| Metric | Dimensions | Unit |\n|:------|:----------|:----|\n| clickhouse.connections | tcp, http, mysql, postgresql, interserver | connections |\n| clickhouse.slow_reads | slow | reads/s |\n| clickhouse.read_backoff | read_backoff | events/s |\n| clickhouse.memory_usage | used | bytes |\n| clickhouse.running_queries | running | queries |\n| clickhouse.queries_preempted | preempted | queries |\n| clickhouse.queries | successful, failed | queries/s |\n| clickhouse.select_queries | successful, failed | selects/s |\n| clickhouse.insert_queries | successful, failed | inserts/s |\n| clickhouse.queries_memory_limit_exceeded | mem_limit_exceeded | queries/s |\n| clickhouse.longest_running_query_time | longest_query_time | seconds |\n| clickhouse.queries_latency | queries_time | microseconds |\n| clickhouse.select_queries_latency | selects_time | microseconds |\n| clickhouse.insert_queries_latency | inserts_time | microseconds |\n| clickhouse.io | reads, writes | bytes/s |\n| clickhouse.iops | reads, writes | ops/s |\n| clickhouse.io_errors | read, write | errors/s |\n| clickhouse.io_seeks | lseek | ops/s |\n| clickhouse.io_file_opens | file_open | ops/s |\n| clickhouse.replicated_parts_current_activity | fetch, send, check | parts |\n| clickhouse.replicas_max_absolute_dela | replication_delay | seconds |\n| clickhouse.replicated_readonly_tables | read_only | tables |\n| clickhouse.replicated_data_loss | data_loss | events |\n| clickhouse.replicated_part_fetches | successful, failed | fetches/s |\n| clickhouse.inserted_rows | inserted | rows/s |\n| clickhouse.inserted_bytes | inserted | bytes/s |\n| clickhouse.rejected_inserts | rejected | inserts/s |\n| clickhouse.delayed_inserts | delayed | inserts/s |\n| clickhouse.delayed_inserts_throttle_time | delayed_inserts_throttle_time | milliseconds |\n| clickhouse.selected_bytes | selected | bytes/s |\n| clickhouse.selected_rows | selected | rows/s |\n| clickhouse.selected_parts | selected | parts/s |\n| clickhouse.selected_ranges | selected | ranges/s |\n| clickhouse.selected_marks | selected | marks/s |\n| clickhouse.merges | merge | ops/s |\n| clickhouse.merges_latency | merges_time | milliseconds |\n| clickhouse.merged_uncompressed_bytes | merged_uncompressed | bytes/s |\n| clickhouse.merged_rows | merged | rows/s |\n| clickhouse.merge_tree_data_writer_inserted_rows | inserted | rows/s |\n| clickhouse.merge_tree_data_writer_uncompressed_bytes | inserted | bytes/s |\n| clickhouse.merge_tree_data_writer_compressed_bytes | written | bytes/s |\n| clickhouse.uncompressed_cache_requests | hits, misses | requests/s |\n| clickhouse.mark_cache_requests | hits, misses | requests/s |\n| clickhouse.max_part_count_for_partition | max_parts_partition | parts |\n| clickhouse.parts_count | temporary, pre_active, active, deleting, delete_on_destroy, outdated, wide, compact | parts |\n| distributed_connections | active | connections |\n| distributed_connections_attempts | connection | attempts/s |\n| distributed_connections_fail_retries | connection_retry | fails/s |\n| distributed_connections_fail_exhausted_retries | connection_retry_exhausted | fails/s |\n| distributed_files_to_insert | pending_insertions | files |\n| distributed_rejected_inserts | rejected 
| inserts/s |\n| distributed_delayed_inserts | delayed | inserts/s |\n| distributed_delayed_inserts_latency | delayed_time | milliseconds |\n| distributed_sync_insertion_timeout_exceeded | sync_insertion | timeouts/s |\n| distributed_async_insertions_failures | async_insertions | failures/s |\n| clickhouse.uptime | uptime | seconds |\n\n### Per disk\n\nThese metrics refer to the Disk.\n\nLabels:\n\n| Label | Description |\n|:-----------|:----------------|\n| disk_name | Name of the disk as defined in the [server configuration](https://clickhouse.com/docs/en/engines/table-engines/mergetree-family/mergetree#table_engine-mergetree-multiple-volumes_configure). |\n\nMetrics:\n\n| Metric | Dimensions | Unit |\n|:------|:----------|:----|\n| clickhouse.disk_space_usage | free, used | bytes |\n\n### Per table\n\nThese metrics refer to the Database Table.\n\nLabels:\n\n| Label | Description |\n|:-----------|:----------------|\n| database | Name of the database. |\n| table | Name of the table. |\n\nMetrics:\n\n| Metric | Dimensions | Unit |\n|:------|:----------|:----|\n| clickhouse.database_table_size | size | bytes |\n| clickhouse.database_table_parts | parts | parts |\n| clickhouse.database_table_rows | rows | rows |\n\n",
"integration_type": "collector",
"id": "go.d.plugin-clickhouse-ClickHouse",
"edit_link": "https://github.com/netdata/netdata/blob/master/src/go/collectors/go.d.plugin/modules/clickhouse/metadata.yaml",