author     Andrew Moss <1043609+amoss@users.noreply.github.com>  2020-05-11 08:34:29 +0200
committer  James Mills <prologic@shortcircuit.net.au>  2020-05-11 16:37:27 +1000
commit     aa3ec552c896aebafd03b9d2c1864272dcb34749 (patch)
tree       02f7cd95ed84d888c27fb4bfb55df2b251b97b7b
parent     fd05e1d87751ecaa45ebd3aed2499435b1627cea (diff)
Enable support for Netdata Cloud.
This PR merges the feature-branch to make the cloud live. It contains the following work:

Co-authored-by: Andrew Moss <1043609+amoss@users.noreply.github.com>
Co-authored-by: Jacek Kolasa <jacek.kolasa@gmail.com>
Co-authored-by: Austin S. Hemmelgarn <austin@netdata.cloud>
Co-authored-by: James Mills <prologic@shortcircuit.net.au>
Co-authored-by: Markos Fountoulakis <44345837+mfundul@users.noreply.github.com>
Co-authored-by: Timotej S <6674623+underhood@users.noreply.github.com>
Co-authored-by: Stelios Fragkakis <52996999+stelfrag@users.noreply.github.com>

* dashboard with new navbars, v1.0-alpha.9: PR #8478
* dashboard v1.0.11: netdata/dashboard#76 (Co-authored-by: Jacek Kolasa <jacek.kolasa@gmail.com>)
* Added installer code to bundle JSON-c if it's not present. PR #8836 (Co-authored-by: James Mills <prologic@shortcircuit.net.au>)
* Fix claiming config. PR #8843
* Adds JSON-c as a hard dependency for the ACLK. PR #8838
* Fix SSL renegotiation errors in old versions of OpenSSL. PR #8840. We also have a transient problem with the openSUSE CI, so this PR disables those jobs with a commit from @prologic. (Co-authored-by: James Mills <prologic@shortcircuit.net.au>)
* Fix claiming error handling. PR #8850
* Added CI to verify the JSON-C bundling code in the installer. PR #8853
* Make the cloud-enabled flag in web/api/v1/info independent of ACLK build success. PR #8866
* Reduce ACLK_STABLE_TIMEOUT from 10 to 3 seconds. PR #8871
* Remove old-cloud related UI from the old dashboard (accessible now via the /old suffix). PR #8858
* dashboard v1.0.13: PR #8870
* dashboard v1.0.14: PR #8904
* Provide feedback on proxy setting changes. PR #8895
* Change the name of the connect message to update during an ongoing session. PR #8927
* Fetch active alarms from alarm_log. PR #8944
-rw-r--r--  .github/dockerfiles/Dockerfile.build_test  1
-rw-r--r--  .github/workflows/build-and-install.yml  41
-rw-r--r--  Makefile.am  4
-rw-r--r--  aclk/aclk_lws_wss_client.c  46
-rw-r--r--  aclk/aclk_lws_wss_client.h  1
-rw-r--r--  aclk/agent_cloud_link.c  93
-rw-r--r--  aclk/agent_cloud_link.h  2
-rw-r--r--  aclk/mqtt.c  5
-rw-r--r--  build/subst.inc  2
-rw-r--r--  build_external/projects/aclk-testing/agent-compose.yml  2
-rw-r--r--  build_external/projects/aclk-testing/agent-valgrind-compose.yml  2
-rw-r--r--  claim/README.md  30
-rw-r--r--  claim/claim.c  66
-rw-r--r--  claim/claim.h  2
-rwxr-xr-x  claim/netdata-claim.sh.in  200
-rw-r--r--  configure.ac  52
-rw-r--r--  daemon/commands.c  99
-rw-r--r--  daemon/commands.h  2
-rw-r--r--  daemon/daemon.c  2
-rw-r--r--  daemon/main.c  119
-rw-r--r--  health/health_json.c  25
-rwxr-xr-x  health/notifications/alarm-notify.sh.in  2
-rw-r--r--  libnetdata/config/appconfig.c  19
-rw-r--r--  libnetdata/libnetdata.h  2
-rwxr-xr-x  netdata-installer.sh  72
-rw-r--r--  packaging/dashboard.checksums  2
-rw-r--r--  packaging/dashboard.version  2
-rw-r--r--  packaging/docker/Dockerfile  4
-rw-r--r--  packaging/jsonc.checksums  1
-rw-r--r--  packaging/jsonc.version  1
-rw-r--r--  registry/registry.c  14
-rw-r--r--  registry/registry.h  3
-rw-r--r--  registry/registry_init.c  5
-rw-r--r--  tests/alarm_repetition/netdata.conf_with_repetition  2
-rw-r--r--  tests/alarm_repetition/netdata.conf_without_repetition  2
-rw-r--r--  web/api/web_api_v1.c  7
-rw-r--r--  web/gui/main.js  50
-rw-r--r--  web/gui/old/index.html  3
38 files changed, 703 insertions, 284 deletions
diff --git a/.github/dockerfiles/Dockerfile.build_test b/.github/dockerfiles/Dockerfile.build_test
index 1dc3e303d6..5e6de6d603 100644
--- a/.github/dockerfiles/Dockerfile.build_test
+++ b/.github/dockerfiles/Dockerfile.build_test
@@ -7,5 +7,6 @@ ENV PRE=${PRE}
COPY . /netdata
+RUN chmod +x /netdata/rmjsonc.sh
RUN /bin/sh /netdata/prep-cmd.sh
RUN /netdata/packaging/installer/install-required-packages.sh --dont-wait --non-interactive netdata-all
diff --git a/.github/workflows/build-and-install.yml b/.github/workflows/build-and-install.yml
index cb1494332b..9a2b71e8cf 100644
--- a/.github/workflows/build-and-install.yml
+++ b/.github/workflows/build-and-install.yml
@@ -9,6 +9,7 @@ jobs:
build:
name: Build & Install
strategy:
+ fail-fast: false
matrix:
distro:
- 'alpine:edge'
@@ -35,30 +36,59 @@ jobs:
include:
- distro: 'alpine:edge'
pre: 'apk add -U bash'
+ rmjsonc: 'apk del json-c-dev'
- distro: 'alpine:3.11'
pre: 'apk add -U bash'
+ rmjsonc: 'apk del json-c-dev'
- distro: 'alpine:3.10'
pre: 'apk add -U bash'
+ rmjsonc: 'apk del json-c-dev'
- distro: 'alpine:3.9'
pre: 'apk add -U bash'
+ rmjsonc: 'apk del json-c-dev'
- distro: 'archlinux:latest'
pre: 'pacman --noconfirm -Sy grep libffi'
+ - distro: 'centos:8'
+ rmjsonc: 'dnf remove -y json-c-devel'
+
- distro: 'debian:bullseye'
pre: 'apt-get update'
+ rmjsonc: 'apt-get remove -y libjson-c-dev'
- distro: 'debian:buster'
pre: 'apt-get update'
+ rmjsonc: 'apt-get remove -y libjson-c-dev'
- distro: 'debian:stretch'
pre: 'apt-get update'
+ rmjsonc: 'apt-get remove -y libjson-c-dev'
+
+ - distro: 'fedora:32'
+ rmjsonc: 'dnf remove -y json-c-devel'
+ - distro: 'fedora:31'
+ rmjsonc: 'dnf remove -y json-c-devel'
+ - distro: 'fedora:30'
+ rmjsonc: 'dnf remove -y json-c-devel'
+
+ - distro: 'opensuse/leap:15.2'
+ rmjsonc: 'zypper rm -y libjson-c-devel'
+ - distro: 'opensuse/leap:15.1'
+ rmjsonc: 'zypper rm -y libjson-c-devel'
+ - distro: 'opensuse/tumbleweed:latest'
+ rmjsonc: 'zypper rm -y libjson-c-devel'
+
- distro: 'ubuntu:20.04'
pre: 'apt-get update'
+ rmjsonc: 'apt-get remove -y libjson-c-dev'
- distro: 'ubuntu:19.10'
pre: 'apt-get update'
+ rmjsonc: 'apt-get remove -y libjson-c-dev'
- distro: 'ubuntu:18.04'
pre: 'apt-get update'
+ rmjsonc: 'apt-get remove -y libjson-c-dev'
- distro: 'ubuntu:16.04'
pre: 'apt-get update'
+ rmjsonc: 'apt-get remove -y libjson-c-dev'
runs-on: ubuntu-latest
steps:
- name: Git clone repository
@@ -66,15 +96,22 @@ jobs:
- name: install-required-packages.sh on ${{ matrix.distro }}
env:
PRE: ${{ matrix.pre }}
+ RMJSONC: ${{ matrix.rmjsonc }}
run: |
echo $PRE > ./prep-cmd.sh
+ echo $RMJSONC > ./rmjsonc.sh
docker build . -f .github/dockerfiles/Dockerfile.build_test -t test --build-arg BASE=${{ matrix.distro }}
- name: Regular build on ${{ matrix.distro }}
run: |
docker run -w /netdata test /bin/sh -c 'autoreconf -ivf && ./configure && make -j2'
- - name: netdata-installer on ${{ matrix.distro }}
+ - name: netdata-installer on ${{ matrix.distro }}, disable cloud
run: |
docker run -w /netdata test /bin/sh -c './netdata-installer.sh --dont-wait --dont-start-it --disable-cloud'
- - name: netdata-installer on ${{ matrix.distro }}
+ - name: netdata-installer on ${{ matrix.distro }}, require cloud
run: |
docker run -w /netdata test /bin/sh -c './netdata-installer.sh --dont-wait --dont-start-it --require-cloud'
+ - name: netdata-installer on ${{ matrix.distro }}, require cloud, no JSON-C
+ if: matrix.rmjsonc != ''
+ run: |
+ docker run -w /netdata test \
+ /bin/sh -c '/netdata/rmjsonc.sh && ./netdata-installer.sh --dont-wait --dont-start-it --require-cloud'
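The workflow above writes each matrix entry's `rmjsonc` command into a one-line script that gets baked into the test image and run before the "require cloud, no JSON-C" installer step. A minimal sketch of that wrapper (using one Debian/Ubuntu matrix value as an example; the `docker build`/`docker run` steps from the workflow are elided):

```shell
# One matrix entry's package-removal command (matrix.rmjsonc for Debian/Ubuntu).
RMJSONC='apt-get remove -y libjson-c-dev'

# Mirror the workflow: wrap it as a script and make it executable, so the
# Dockerfile's `RUN chmod +x /netdata/rmjsonc.sh` and the final test step
# can invoke it inside the container.
echo "$RMJSONC" > ./rmjsonc.sh
chmod +x ./rmjsonc.sh
```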
diff --git a/Makefile.am b/Makefile.am
index 461c7fbe3c..a768c18590 100644
--- a/Makefile.am
+++ b/Makefile.am
@@ -624,6 +624,10 @@ NETDATA_COMMON_LIBS = \
$(OPTIONAL_EBPF_LIBS) \
$(NULL)
+if LINK_STATIC_JSONC
+ NETDATA_COMMON_LIBS += externaldeps/jsonc/libjson-c.a
+endif
+
NETDATACLI_FILES = \
daemon/commands.h \
$(LIBNETDATA_FILES) \
diff --git a/aclk/aclk_lws_wss_client.c b/aclk/aclk_lws_wss_client.c
index 168d866b32..97aa337390 100644
--- a/aclk/aclk_lws_wss_client.c
+++ b/aclk/aclk_lws_wss_client.c
@@ -152,7 +152,6 @@ static void aclk_lws_wss_log_divert(int level, const char *line)
static int aclk_lws_wss_client_init( char *target_hostname, int target_port)
{
static int lws_logging_initialized = 0;
- struct lws_context_creation_info info;
if (unlikely(!lws_logging_initialized)) {
lws_set_log_level(LLL_ERR | LLL_WARN, aclk_lws_wss_log_divert);
@@ -167,14 +166,6 @@ static int aclk_lws_wss_client_init( char *target_hostname, int target_port)
engine_instance->host = target_hostname;
engine_instance->port = target_port;
- memset(&info, 0, sizeof(struct lws_context_creation_info));
- info.options = LWS_SERVER_OPTION_DO_SSL_GLOBAL_INIT;
- info.port = CONTEXT_PORT_NO_LISTEN;
- info.protocols = protocols;
-
- engine_instance->lws_context = lws_create_context(&info);
- if (!engine_instance->lws_context)
- goto failure_cleanup_2;
aclk_lws_mutex_init(&engine_instance->write_buf_mutex);
aclk_lws_mutex_init(&engine_instance->read_buf_mutex);
@@ -186,18 +177,27 @@ static int aclk_lws_wss_client_init( char *target_hostname, int target_port)
return 0;
failure_cleanup:
- lws_context_destroy(engine_instance->lws_context);
-failure_cleanup_2:
freez(engine_instance);
return 1;
}
-void aclk_lws_wss_client_destroy()
+void aclk_lws_wss_destroy_context()
{
- if (engine_instance == NULL)
+ if (!engine_instance)
+ return;
+ if (!engine_instance->lws_context)
return;
lws_context_destroy(engine_instance->lws_context);
engine_instance->lws_context = NULL;
+}
+
+
+void aclk_lws_wss_client_destroy()
+{
+ if (engine_instance == NULL)
+ return;
+
+ aclk_lws_wss_destroy_context();
engine_instance->lws_wsi = NULL;
aclk_lws_wss_clear_io_buffers(engine_instance);
@@ -267,7 +267,25 @@ int aclk_lws_wss_connect(char *host, int port)
int n;
if (!engine_instance) {
- return aclk_lws_wss_client_init(host, port);
+ if (aclk_lws_wss_client_init(host, port))
+ return 1; // Propagate failure
+ }
+
+ if (!engine_instance->lws_context)
+ {
+ // First time through (on this connection), create the context
+ struct lws_context_creation_info info;
+ memset(&info, 0, sizeof(struct lws_context_creation_info));
+ info.options = LWS_SERVER_OPTION_DO_SSL_GLOBAL_INIT;
+ info.port = CONTEXT_PORT_NO_LISTEN;
+ info.protocols = protocols;
+ engine_instance->lws_context = lws_create_context(&info);
+ if (!engine_instance->lws_context)
+ {
+ error("Failed to create lws_context, ACLK will not function");
+ return 1;
+ }
+ return 0;
// PROTOCOL_INIT callback will call again.
}
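The hunk above moves `lws_context` creation out of `aclk_lws_wss_client_init()` and into the connect path, and splits context teardown into its own `aclk_lws_wss_destroy_context()`. The effect is that a forced reconnect can destroy just the context while keeping the engine instance (and its buffers/mutexes) alive, and the context is recreated lazily on the next connect. A standalone sketch of that pattern, with stand-ins for the libwebsockets calls (the struct and function names here are illustrative, not Netdata's):

```c
#include <stdlib.h>

/* Stand-in for aclk_lws_wss_engine_instance: only the context matters here. */
struct engine { void *lws_context; };

struct engine *engine_instance = NULL;

static void *fake_create_context(void)   { return malloc(1); } /* lws_create_context() */
static void  fake_destroy_context(void *c) { free(c); }        /* lws_context_destroy() */

/* Destroy only the context; the engine instance survives for the reconnect. */
void engine_destroy_context(void)
{
    if (!engine_instance || !engine_instance->lws_context)
        return;
    fake_destroy_context(engine_instance->lws_context);
    engine_instance->lws_context = NULL;
}

/* Connect: allocate the instance once, then (re)create the context lazily. */
int engine_connect(void)
{
    if (!engine_instance) {
        engine_instance = calloc(1, sizeof(*engine_instance));
        if (!engine_instance)
            return 1; /* propagate init failure, as in the diff */
    }
    if (!engine_instance->lws_context) {
        /* First time through on this connection: create the context. */
        engine_instance->lws_context = fake_create_context();
        if (!engine_instance->lws_context)
            return 1;
    }
    return 0;
}
```

Calling `engine_destroy_context()` followed by `engine_connect()` models the `aclk_force_reconnect` path introduced later in this commit.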
diff --git a/aclk/aclk_lws_wss_client.h b/aclk/aclk_lws_wss_client.h
index 26a7865393..584a3cf4f0 100644
--- a/aclk/aclk_lws_wss_client.h
+++ b/aclk/aclk_lws_wss_client.h
@@ -70,6 +70,7 @@ struct aclk_lws_wss_engine_instance {
};
void aclk_lws_wss_client_destroy();
+void aclk_lws_wss_destroy_context();
int aclk_lws_wss_connect(char *host, int port);
diff --git a/aclk/agent_cloud_link.c b/aclk/agent_cloud_link.c
index d3bf881a9c..4750967987 100644
--- a/aclk/agent_cloud_link.c
+++ b/aclk/agent_cloud_link.c
@@ -23,6 +23,7 @@ static char *aclk_password = NULL;
static char *global_base_topic = NULL;
static int aclk_connecting = 0;
int aclk_connected = 0; // Exposed in the web-api
+int aclk_force_reconnect = 0; // Indication from lower layers
usec_t aclk_session_us = 0; // Used by the mqtt layer
time_t aclk_session_sec = 0; // Used by the mqtt layer
@@ -47,7 +48,7 @@ pthread_mutex_t query_lock_wait = PTHREAD_MUTEX_INITIALIZER;
#define QUERY_THREAD_WAKEUP pthread_cond_signal(&query_cond_wait)
void lws_wss_check_queues(size_t *write_len, size_t *write_len_bytes, size_t *read_len);
-
+void aclk_lws_wss_destroy_context();
/*
* Maintain a list of collectors and chart count
* If all the charts of a collector are deleted
@@ -149,7 +150,7 @@ static RSA *aclk_private_key = NULL;
static int create_private_key()
{
char filename[FILENAME_MAX + 1];
- snprintfz(filename, FILENAME_MAX, "%s/claim.d/private.pem", netdata_configured_user_config_dir);
+ snprintfz(filename, FILENAME_MAX, "%s/cloud.d/private.pem", netdata_configured_varlib_dir);
long bytes_read;
char *private_key = read_by_filename(filename, &bytes_read);
@@ -1336,59 +1337,84 @@ void *aclk_main(void *ptr)
struct netdata_static_thread *static_thread = (struct netdata_static_thread *)ptr;
struct netdata_static_thread *query_thread;
- if (!netdata_cloud_setting) {
- info("Killing ACLK thread -> cloud functionality has been disabled");
- static_thread->enabled = NETDATA_MAIN_THREAD_EXITED;
- return NULL;
- }
+ // This thread is unusual in that it cannot be cancelled by cancel_main_threads()
+ // as it must notify the far end that it shutdown gracefully and avoid the LWT.
+ netdata_thread_disable_cancelability();
+
+#if defined( DISABLE_CLOUD ) || !defined( ENABLE_ACLK)
+ info("Killing ACLK thread -> cloud functionality has been disabled");
+ static_thread->enabled = NETDATA_MAIN_THREAD_EXITED;
+ return NULL;
+#endif
info("Waiting for netdata to be ready");
while (!netdata_ready) {
sleep_usec(USEC_PER_MS * 300);
}
+ info("Waiting for Cloud to be enabled");
+ while (!netdata_cloud_setting) {
+ sleep_usec(USEC_PER_SEC * 1);
+ if (netdata_exit) {
+ static_thread->enabled = NETDATA_MAIN_THREAD_EXITED;
+ return NULL;
+ }
+ }
+
last_init_sequence = now_realtime_sec();
query_thread = NULL;
char *aclk_hostname = NULL; // Initializers are over-written but prevent gcc complaining about clobbering.
char *aclk_port = NULL;
uint32_t port_num = 0;
- char *cloud_base_url = config_get(CONFIG_SECTION_CLOUD, "cloud base url", DEFAULT_CLOUD_BASE_URL);
- if (aclk_decode_base_url(cloud_base_url, &aclk_hostname, &aclk_port)) {
- error("Configuration error - cannot use agent cloud link");
- static_thread->enabled = NETDATA_MAIN_THREAD_EXITED;
- return NULL;
- }
- port_num = atoi(aclk_port); // SSL library uses the string, MQTT uses the numeric value
-
info("Waiting for netdata to be claimed");
while(1) {
while (likely(!is_agent_claimed())) {
- sleep_usec(USEC_PER_SEC * 5);
+ sleep_usec(USEC_PER_SEC * 1);
if (netdata_exit)
goto exited;
}
- if (!create_private_key() && !_mqtt_lib_init())
- break;
-
- if (netdata_exit)
+ // The NULL return means the value was never initialised, but this value has been initialized in post_conf_load.
+ // We trap the impossible NULL here to keep the linter happy without using a fatal() in the code.
+ char *cloud_base_url = appconfig_get(&cloud_config, CONFIG_SECTION_GLOBAL, "cloud base url", NULL);
+ if (cloud_base_url == NULL) {
+ error("Do not move the cloud base url out of post_conf_load!!");
goto exited;
+ }
+ if (aclk_decode_base_url(cloud_base_url, &aclk_hostname, &aclk_port)) {
+ error("Agent is claimed but the configuration is invalid, please fix");
+ }
+ else
+ {
+ port_num = atoi(aclk_port); // SSL library uses the string, MQTT uses the numeric value
+ if (!create_private_key() && !_mqtt_lib_init())
+ break;
+ }
- sleep_usec(USEC_PER_SEC * 60);
+ for (int i=0; i<60; i++) {
+ if (netdata_exit)
+ goto exited;
+
+ sleep_usec(USEC_PER_SEC * 1);
+ }
}
+
create_publish_base_topic();
usec_t reconnect_expiry = 0; // In usecs
- netdata_thread_disable_cancelability();
-
while (!netdata_exit) {
static int first_init = 0;
size_t write_q, write_q_bytes, read_q;
lws_wss_check_queues(&write_q, &write_q_bytes, &read_q);
+
+ if (aclk_force_reconnect) {
+ aclk_lws_wss_destroy_context();
+ aclk_force_reconnect = 0;
+ }
//info("loop state first_init_%d connected=%d connecting=%d wq=%zu (%zu-bytes) rq=%zu",
// first_init, aclk_connected, aclk_connecting, write_q, write_q_bytes, read_q);
- if (unlikely(!netdata_exit && !aclk_connected)) {
+ if (unlikely(!netdata_exit && !aclk_connected && !aclk_force_reconnect)) {
if (unlikely(!first_init)) {
aclk_try_to_connect(aclk_hostname, aclk_port, port_num);
first_init = 1;
@@ -1414,7 +1440,7 @@ void *aclk_main(void *ptr)
}
_link_event_loop();
- if (unlikely(!aclk_connected))
+ if (unlikely(!aclk_connected || aclk_force_reconnect))
continue;
/*static int stress_counter = 0;
if (write_q_bytes==0 && stress_counter ++ >5)
@@ -1550,6 +1576,7 @@ void aclk_disconnect()
waiting_init = 1;
aclk_connected = 0;
aclk_connecting = 0;
+ aclk_force_reconnect = 1;
}
void aclk_shutdown()
@@ -1598,6 +1625,7 @@ inline void aclk_create_header(BUFFER *dest, char *type, char *msg_id, time_t ts
* alarm_log
* active alarms
*/
+void health_active_log_alarms_2json(RRDHOST *host, BUFFER *wb);
void aclk_send_alarm_metadata()
{
BUFFER *local_buffer = buffer_create(NETDATA_WEB_RESPONSE_INITIAL_SIZE);
@@ -1618,17 +1646,18 @@ void aclk_send_alarm_metadata()
aclk_create_header(local_buffer, "connect_alarms", msg_id, aclk_session_sec, aclk_session_us);
buffer_strcat(local_buffer, ",\n\t\"payload\": ");
+
buffer_sprintf(local_buffer, "{\n\t \"configured-alarms\" : ");
health_alarms2json(localhost, local_buffer, 1);
debug(D_ACLK, "Metadata %s with configured alarms has %zu bytes", msg_id, local_buffer->len);
+ // buffer_sprintf(local_buffer, ",\n\t \"alarm-log\" : ");
+ // health_alarm_log2json(localhost, local_buffer, 0);
+ // debug(D_ACLK, "Metadata %s with alarm_log has %zu bytes", msg_id, local_buffer->len);
+ buffer_sprintf(local_buffer, ",\n\t \"alarms-active\" : ");
+ health_active_log_alarms_2json(localhost, local_buffer);
+ //debug(D_ACLK, "Metadata message %s", local_buffer->buffer);
- buffer_sprintf(local_buffer, ",\n\t \"alarm-log\" : ");
- health_alarm_log2json(localhost, local_buffer, 0);
- debug(D_ACLK, "Metadata %s with alarm_log has %zu bytes", msg_id, local_buffer->len);
- buffer_sprintf(local_buffer, ",\n\t \"alarms-active\" : ");
- health_alarms_values2json(localhost, local_buffer, 0);
- debug(D_ACLK, "Metadata %s with alarms_active has %zu bytes", msg_id, local_buffer->len);
buffer_sprintf(local_buffer, "\n}\n}");
aclk_send_message(ACLK_ALARMS_TOPIC, local_buffer->buffer, msg_id);
@@ -1657,7 +1686,7 @@ int aclk_send_info_metadata()
// a fake on_connect message then use the real timestamp to indicate it is within the existing
// session.
if (aclk_metadata_submitted == ACLK_METADATA_SENT)
- aclk_create_header(local_buffer, "connect", msg_id, 0, 0);
+ aclk_create_header(local_buffer, "update", msg_id, 0, 0);
else
aclk_create_header(local_buffer, "connect", msg_id, aclk_session_sec, aclk_session_us);
buffer_strcat(local_buffer, ",\n\t\"payload\": ");
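One small but recurring change in the `aclk_main()` hunks above is replacing single long sleeps (`sleep_usec(USEC_PER_SEC * 60)`) with a loop of 1-second sleeps that checks `netdata_exit` each iteration, so the thread can shut down promptly. A sketch of that idiom (names are illustrative; the real code uses `netdata_exit` and `sleep_usec()`, and the actual sleep is elided so the sketch runs instantly):

```c
/* Exit flag, normally set by the shutdown path. */
int exit_requested = 0;

/* Wait up to `seconds`, checking the exit flag once per second.
 * Returns the number of whole seconds actually waited. */
int interruptible_wait(int seconds)
{
    int waited = 0;
    for (int i = 0; i < seconds; i++) {
        if (exit_requested)
            break;
        /* sleep_usec(USEC_PER_SEC) here in the real loop. */
        waited++;
    }
    return waited;
}
```

With a single `sleep(60)` the thread could block shutdown for up to a minute; with this loop the worst case drops to about one second.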
diff --git a/aclk/agent_cloud_link.h b/aclk/agent_cloud_link.h
index a3722b82ae..29871cc89d 100644
--- a/aclk/agent_cloud_link.h
+++ b/aclk/agent_cloud_link.h
@@ -25,7 +25,7 @@
#define ACLK_MAX_TOPIC 255
#define ACLK_RECONNECT_DELAY 1 // reconnect delay -- with backoff stragegy fow now
-#define ACLK_STABLE_TIMEOUT 10 // Minimum delay to mark AGENT as stable
+#define ACLK_STABLE_TIMEOUT 3 // Minimum delay to mark AGENT as stable
#define ACLK_DEFAULT_PORT 9002
#define ACLK_DEFAULT_HOST "localhost"
diff --git a/aclk/mqtt.c b/aclk/mqtt.c
index b070f7fb09..8beb4b6766 100644
--- a/aclk/mqtt.c
+++ b/aclk/mqtt.c
@@ -29,7 +29,7 @@ void publish_callback(struct mosquitto *mosq, void *obj, int rc)
UNUSED(mosq);
UNUSED(obj);
UNUSED(rc);
-
+ info("Publish_callback: mid=%d", rc);
// TODO: link this with a msg_id so it can be traced
return;
}
@@ -219,7 +219,8 @@ void aclk_lws_connection_data_received()
void aclk_lws_connection_closed()
{
- aclk_disconnect(NULL);
+ aclk_disconnect();
+
}
diff --git a/build/subst.inc b/build/subst.inc
index c705fcbad3..2ec1116030 100644
--- a/build/subst.inc
+++ b/build/subst.inc
@@ -9,6 +9,8 @@
-e 's#[@]registrydir_POST@#$(registrydir)#g' \
-e 's#[@]varlibdir_POST@#$(varlibdir)#g' \
-e 's#[@]webdir_POST@#$(webdir)#g' \
+ -e 's#[@]can_enable_aclk_POST@#$(can_enable_aclk)#g' \
+ -e 's#[@]enable_cloud_POST@#$(enable_cloud)#g' \
$< > $@.tmp; then \
mv "$@.tmp" "$@"; \
else \
diff --git a/build_external/projects/aclk-testing/agent-compose.yml b/build_external/projects/aclk-testing/agent-compose.yml
index 265ff34a9f..04c357c433 100644
--- a/build_external/projects/aclk-testing/agent-compose.yml
+++ b/build_external/projects/aclk-testing/agent-compose.yml
@@ -9,7 +9,7 @@ services:
- VERSION=current
image: arch_current_dev:latest
command: >
- sh -c "echo -n 00000000-0000-0000-0000-000000000000 >/etc/netdata/claim.d/claimed_id &&
+ sh -c "echo -n 00000000-0000-0000-0000-000000000000 >/var/lib/netdata/cloud.d/claimed_id &&
echo '[agent_cloud_link]' >>/etc/netdata/netdata.conf &&
echo ' agent cloud link hostname = vernemq' >>/etc/netdata/netdata.conf &&
echo ' agent cloud link port = 9002' >>/etc/netdata/netdata.conf &&
diff --git a/build_external/projects/aclk-testing/agent-valgrind-compose.yml b/build_external/projects/aclk-testing/agent-valgrind-compose.yml
index dcb373babf..cf38893b30 100644
--- a/build_external/projects/aclk-testing/agent-valgrind-compose.yml
+++ b/build_external/projects/aclk-testing/agent-valgrind-compose.yml
@@ -9,7 +9,7 @@ services:
- VERSION=extras
image: arch_extras_dev:latest
command: >
- sh -c "echo -n 00000000-0000-0000-0000-000000000000 >/etc/netdata/claim.d/claimed_id &&
+ sh -c "echo -n 00000000-0000-0000-0000-000000000000 >/var/lib/netdata/cloud.d/claimed_id &&
echo '[agent_cloud_link]' >>/etc/netdata/netdata.conf &&
echo ' agent cloud link hostname = vernemq' >>/etc/netdata/netdata.conf &&
echo ' agent cloud link port = 9002' >>/etc/netdata/netdata.conf &&
diff --git a/claim/README.md b/claim/README.md
index 651a9b515f..2743851e33 100644
--- a/claim/README.md
+++ b/claim/README.md
@@ -96,7 +96,7 @@ docker run -d --name=netdata \
--cap-add SYS_PTRACE \
--security-opt apparmor=unconfined \
netdata/netdata \
- /usr/sbin/netdata -D -W set global "netdata cloud" enable -W set cloud "cloud base url" "https://app.netdata.cloud" -W "claim -token=TOKEN -rooms=ROOM1,ROOM2 -url=https://app.netdata.cloud"
+ /usr/sbin/netdata -D -W set cloud global enabled true -W set cloud global "cloud base url" "https://app.netdata.cloud" -W "claim -token=TOKEN -rooms=ROOM1,ROOM2 -url=https://app.netdata.cloud"
```
The container runs in detached mode, so you won't see any output. If the node does not appear in your Space, you can run
@@ -167,11 +167,11 @@ Use these keys and the information below to troubleshoot the ACLK.
If `cloud-enabled` is `false`, you probably ran the installer with `--disable-cloud` option.
-Additionally, check that the `netdata cloud` setting in `netdata.conf` is set to `enable`:
+Additionally, check that the `enabled` setting in `var/lib/netdata/cloud.d/cloud.conf` is set to `true`:
```ini
-[general]
- netadata cloud = enable
+[global]
+ enabled = true
```
To fix this issue, reinstall Netdata using your [preferred method](/packaging/installer/README.md) and do not add the
@@ -234,23 +234,23 @@ with details about your system and relevant output from `error.log`.
### Unclaim (remove) an Agent from Netdata Cloud
-The best method to remove an Agent from Netdata Cloud is to unclaim it by deleting the `claim.d/` directory in your
-Netdata configuration directory.
+The best method to remove an Agent from Netdata Cloud is to unclaim it by deleting the `cloud.d/` directory in your
+Netdata library directory.
```bash
-cd /etc/netdata # Replace with your Netdata configuration directory, if not /etc/netdata/
-rm -rf claim.d/
+cd /var/lib/netdata # Replace with your Netdata library directory, if not /var/lib/netdata/
+rm -rf cloud.d/
```
> You may need to use `sudo` or another method of elevating your privileges.
-Once you delete the `claim.d/` directory, the ACLK will not connect to Cloud the next time the Agent starts, and Cloud
+Once you delete the `cloud.d/` directory, the ACLK will not connect to Cloud the next time the Agent starts, and Cloud
will then remove it from the interface.
## Claiming reference
In the sections below, you can find reference material for the claiming script, claiming via the Agent's command line
-tool, and details about the files found in `claim.d`.
+tool, and details about the files found in `cloud.d`.
### Claiming script
@@ -263,7 +263,7 @@ and passing the following arguments:
-rooms=ROOM1,ROOM2,...
where ROOMX is the War Room this node should be added to. This list is optional.
-url=URL_BASE
- where URL_BASE is the Netdata Cloud endpoint base URL. By default, this is https://netdata.cloud.
+ where URL_BASE is the Netdata Cloud endpoint base URL. By default, this is https://app.netdata.cloud.
-id=AGENT_ID
where AGENT_ID is the unique identifier of the Agent. This is the Agent's MACHINE_GUID by default.
-hostname=HOSTNAME
@@ -306,14 +306,14 @@ If need be, the user can override the Agent's defaults by providing additional a
### Claiming directory
-Netdata stores the agent claiming-related state in the user configuration directory under `claim.d`, e.g. in
-`/etc/netdata/claim.d`. The user can put files in this directory to provide defaults to the `-token` and `-rooms`
+Netdata stores the agent claiming-related state in the Netdata library directory under `cloud.d`, e.g. in
+`/var/lib/netdata/cloud.d`. The user can put files in this directory to provide defaults to the `-token` and `-rooms`
arguments. These files should be owned **by the `netdata` user**.
-The `claim.d/token` file should contain the claiming-token and the `claim.d/rooms` file should contain the list of
+The `cloud.d/token` file should contain the claiming-token and the `cloud.d/rooms` file should contain the list of
war-rooms.
-The user can also put the Cloud endpoint's full certificate chain in `claim.d/cloud_fullchain.pem` so that the Agent
+The user can also put the Cloud endpoint's full certificate chain in `cloud.d/cloud_fullchain.pem` so that the Agent
can trust the endpoint if necessary.
[![analytics](https://www.google-analytics.com/collect?v=1&aip=1&t=pageview&_s=1&ds=github&dr=https%3A%2F%2Fgithub.com%2Fnetdata%2Fnetdata&dl=https%3A%2F%2Fmy-netdata.io%2Fgithub%2Fclaim%2FREADME&_u=MAC~&cid=5792dfd7-8dc4-476b-af31-da2fdb9f93d2&tid=UA-64295674-3)](<>)
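The README section above says claiming defaults can be pre-seeded by placing `token` and `rooms` files in the claiming directory. A minimal sketch of that setup, writing to a local demo directory rather than the real `/var/lib/netdata/cloud.d` (the token and room names are placeholders):

```shell
# Demo directory; on a real install use /var/lib/netdata/cloud.d and make
# sure the files are owned by the netdata user (e.g. chown -R netdata:netdata).
CLOUD_D="./demo-cloud.d"
mkdir -p "$CLOUD_D"

# printf '%s' avoids a trailing newline in the stored values.
printf '%s' 'YOUR_CLAIM_TOKEN' > "$CLOUD_D/token"   # placeholder claiming token
printf '%s' 'ROOM1,ROOM2'      > "$CLOUD_D/rooms"   # placeholder War Room list
```

With these files in place, `netdata-claim.sh` can be run without the `-token` and `-rooms` arguments.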
diff --git a/claim/claim.c b/claim/claim.c
index 7c729988e8..af6ec41f76 100644
--- a/claim/claim.c
+++ b/claim/claim.c
@@ -12,17 +12,19 @@ static char *claiming_errors[] = {
"Problems with claiming working directory", // 2