author     Costa Tsaousis <costa@netdata.cloud>      2022-10-05 14:13:46 +0300
committer  GitHub <noreply@github.com>               2022-10-05 14:13:46 +0300
commit     8fc3b351a2e7fc96eced8f924de2e9cec9842128 (patch)
tree       bde41c66573ccaf8876c280e00742cc6096b587c /web
parent     6850878e697d66dc90b9af1e750b22238c63c292 (diff)
Allow netdata plugins to expose functions for querying more information about specific charts (#13720)
* function renames and code cleanup in popen.c; no actual code changes
* netdata popen() now opens both child process stdin and stdout and returns FILE * for both
* pass both input and output to parser structures
* updated rrdset to call custom functions
* RRDSET FUNCTION leading calls for both sync and async operation
* put RRDSET functions to a separate file
* added format and timeout at function definition
* support for synchronous (internal plugins) and asynchronous (external plugins and children) functions
* /api/v1/function endpoint
* functions are now attached to the host and there is a dictionary view per chart
* functions implemented at plugins.d
* remove the defer until keyword hook from plugins.d when it is done
* stream sender implementation of functions
* sanitization of all functions so that certain characters are only allowed
* stricter sanitization
* common max size
* 1st working plugins.d example
* always init inflight dictionary
* properly destroy dictionaries to avoid parallel insertion of items
* add more debugging on disconnection reasons
* add more debugging on disconnection reasons again
* streaming receiver respects newlines
* don't use the same fp for both streaming receive and send
* don't free dbengine memory with internal checks
* make sender proceed in the buffer
* added timing info and garbage collection at plugins.d
* added info about routing nodes
* added info about routing nodes with delay
* added more info about delays
* added more info about delays again
* signal sending thread to wake up
* streaming version labeling and commented code to support capabilities
* added functions to /api/v1/data, /api/v1/charts, /api/v1/chart, /api/v1/info
* redirect top output to stdout
* address coverity findings
* fix resource leaks of popen
* log attempts to connect to individual destinations
* better messages
* properly parse destinations
* try to find a function from the most matching to the least matching
* log added streaming destinations
* rotate destinations bypassing a node in the middle that does not accept our connection
* break the loops properly
* use typedef to define callbacks
* capabilities negotiation during streaming
* functions exposed upstream based on capabilities; per-node compression-disabled state persists across reconnects; always try to connect with all capabilities
* restore functionality to lookup functions
* better logging of capabilities
* remove old versions from capabilities when a newer version is there
* fix formatting
* optimization for plugins.d rrdlabels to avoid creating and destructing dictionaries all the time
* delayed health initialization for rrddim and rrdset
* cleanup health initialization
* fix for popen() not returning the right value
* add health worker jobs for initializing rrdset and rrddim
* added content type support for functions; apps.plugin permanent function to display all the processes
* fixes for functions parameters parsing in apps.plugin
* fix for process matching in apps.plugin
* first working function for apps.plugin
* Dashboard ACL is disabled for functions; Function errors are all in JSON format
* apps.plugin function processes returns json table
* use json_escape_string() to escape message
* fix formatting
* apps.plugin exposes all its metrics to function processes
* fix json formatting when filtering out some rows
* reopen the internal pipe of rrdpush in case of errors
* misplaced statement
* do not use buffer->len
* support for GLOBAL functions (functions that are not linked to a chart)
* added /api/v1/functions endpoint; removed format from the FUNCTIONS api;
* swagger documentation about the new api end points
* added plugins.d documentation about functions
* never re-close a file
* remove unnecessary ifdef
* fixed issues identified by codacy
* fix for null label value
* make edit-config copy-and-paste friendly
* Revert "make edit-config copy-and-paste friendly"
This reverts commit 54500c0e0a97f65a0c66c4d34e966f6a9056698e.
* reworked sender handshake to fix coverity findings
* timeout is zero, for both send_timeout() and recv_timeout()
* properly detect that parent closed the socket
* support caching of function responses; limit function response to 10MB; added protection from malformed function responses
* disabled excessive logging
* added units to apps.plugin function processes and normalized all values to be human readable
* shorter field names
* fixed issues reported
* fixed apps.plugin error response; tested that pluginsd can properly handle faulty responses
* use double linked list macros for double linked list management
* faster apps.plugin function printing by minimizing file operations
* added memory percentage
* fix compatibility issues with older compilers and FreeBSD
* rrdpush sender code cleanup; rrdhost structure cleanup from sender flags and variables;
* fix leftover variable in ifdef
* apps.plugin: do not call detach from the thread; exit immediately when input is broken
* exclude AR charts from health
* flush cleaner; prefer sender output
* clarity
* do not fill the cbuffer if not connected
* fix
* don't enable host->sender if streaming is not enabled; send host label updates to parent;
* functions are only available through ACLK
* Prepared statement reports only in dev mode
* fix AR chart detection
* fix for streaming not enabling itself
* more cleanup of sender and receiver structures
* moved read-only flags and configuration options to rrdhost->options
* fixed merge with master
* fix for incomplete rename
* prevent service thread from working on charts that are being collected
Co-authored-by: Stelios Fragkakis <52996999+stelfrag@users.noreply.github.com>
Diffstat (limited to 'web')
-rw-r--r--  web/api/formatters/json_wrapper.c    |  19
-rw-r--r--  web/api/formatters/rrdset2json.c     |   5
-rw-r--r--  web/api/netdata-swagger.json         |  62
-rw-r--r--  web/api/netdata-swagger.yaml         |  41
-rw-r--r--  web/api/web_api_v1.c                 | 109
-rw-r--r--  web/server/static/static-threaded.c  |   4
-rw-r--r--  web/server/web_client.c              |   8
-rw-r--r--  web/server/web_client.h              |   1
-rw-r--r--  web/server/web_client_cache.c        |   4
9 files changed, 220 insertions, 33 deletions
diff --git a/web/api/formatters/json_wrapper.c b/web/api/formatters/json_wrapper.c
index 3ebe42c99f..811afa8921 100644
--- a/web/api/formatters/json_wrapper.c
+++ b/web/api/formatters/json_wrapper.c
@@ -175,6 +175,25 @@ void rrdr_json_wrapper_begin(RRDR *r, BUFFER *wb, uint32_t format, RRDR_OPTIONS
         buffer_strcat(wb, "],\n");
     }
 
+    // functions
+    {
+        DICTIONARY *funcs = dictionary_create(DICT_OPTION_SINGLE_THREADED|DICT_OPTION_DONT_OVERWRITE_VALUE);
+        for (i = 0, rd = temp_rd ? temp_rd : r->st->dimensions; rd; rd = rd->next) {
+            chart_functions_to_dict(rd->rrdset, funcs);
+        }
+
+        buffer_sprintf(wb, "   %sfunctions%s: [", kq, kq);
+        void *t; (void)t;
+        dfe_start_read(funcs, t) {
+            const char *comma = "";
+            if(t_dfe.counter) comma = ", ";
+            buffer_sprintf(wb, "%s%s%s%s", comma, sq, t_dfe.name, sq);
+        }
+        dfe_done(t);
+        dictionary_destroy(funcs);
+        buffer_strcat(wb, "],\n");
+    }
+
     // Composite charts
     if (context_mode && temp_rd) {
         buffer_sprintf(
diff --git a/web/api/formatters/rrdset2json.c b/web/api/formatters/rrdset2json.c
index 7758601e29..1e81063359 100644
--- a/web/api/formatters/rrdset2json.c
+++ b/web/api/formatters/rrdset2json.c
@@ -143,8 +143,11 @@ void rrdset2json(RRDSET *st, BUFFER *wb, size_t *dimensions_count, size_t *memor
     }
     buffer_strcat(wb, ",\n\t\t\t\"chart_labels\": {\n");
     chart_labels2json(st, wb, 2);
-    buffer_strcat(wb, "\t\t\t}\n");
+    buffer_strcat(wb, "\t\t\t}");
 
+    buffer_strcat(wb, ",\n\t\t\t\"functions\": {\n");
+    chart_functions2json(st, wb, 4, "\"", "\"");
+    buffer_strcat(wb, "\t\t\t}");
 
     buffer_sprintf(wb,
         "\n\t\t}"
diff --git a/web/api/netdata-swagger.json b/web/api/netdata-swagger.json
index 029783b55b..7d9cd7b940 100644
--- a/web/api/netdata-swagger.json
+++ b/web/api/netdata-swagger.json
@@ -1629,6 +1629,68 @@
         }
       }
     },
+    "/function": {
+      "get": {
+        "summary": "Execute a collector function.",
+        "parameters": [
+          {
+            "name": "function",
+            "in": "query",
+            "description": "The name of the function, as returned by the collector.",
+            "required": true,
+            "allowEmptyValue": false,
+            "schema": {
+              "type": "string"
+            }
+          },
+          {
+            "name": "timeout",
+            "in": "query",
+            "description": "The timeout in seconds to wait for the function to complete.",
+            "required": false,
+            "schema": {
+              "type": "number",
+              "format": "integer",
+              "default": 10
+            }
+          }
+        ],
+        "responses": {
+          "200": {
+            "description": "The collector function has been executed successfully. Each collector may return a different type of content."
+          },
+          "400": {
+            "description": "The request was rejected by the collector."
+          },
+          "404": {
+            "description": "The requested function is not found."
+          },
+          "500": {
+            "description": "Other internal error, getting this error means there is a bug in Netdata."
+          },
+          "503": {
+            "description": "The collector to execute the function is not currently available."
+          },
+          "504": {
+            "description": "Timeout while waiting for the collector to execute the function."
+          },
+          "591": {
+            "description": "The collector sent a response, but it was invalid or corrupted."
+          }
+        }
+      }
+    },
+    "/functions": {
+      "get": {
+        "summary": "Get a list of all registered collector functions.",
+        "description": "Collector functions are programs that can be executed on demand.",
+        "responses": {
+          "200": {
+            "description": "A JSON object containing one object per supported function."
+          }
+        }
+      }
+    },
     "/weights": {
       "get": {
         "summary": "Analyze all the metrics using an algorithm and score them accordingly",
diff --git a/web/api/netdata-swagger.yaml b/web/api/netdata-swagger.yaml
index 2e04e9f20f..586b456831 100644
--- a/web/api/netdata-swagger.yaml
+++ b/web/api/netdata-swagger.yaml
@@ -1351,6 +1351,47 @@ paths:
             that correlated the metrics did not produce any result.
         "504":
           description: Timeout - the query took too long and has been cancelled.
+  /function:
+    get:
+      summary: "Execute a collector function."
+      parameters:
+        - name: function
+          in: query
+          description: The name of the function, as returned by the collector.
+          required: true
+          allowEmptyValue: false
+          schema:
+            type: string
+        - name: timeout
+          in: query
+          description: The timeout in seconds to wait for the function to complete.
+          required: false
+          schema:
+            type: number
+            format: integer
+            default: 10
+      responses:
+        "200":
+          description: The collector function has been executed successfully. Each collector may return a different type of content.
+        "400":
+          description: The request was rejected by the collector.
+        "404":
+          description: The requested function is not found.
+        "500":
+          description: Other internal error, getting this error means there is a bug in Netdata.
+        "503":
+          description: The collector to execute the function is not currently available.
+        "504":
+          description: Timeout while waiting for the collector to execute the function.
+        "591":
+          description: The collector sent a response, but it was invalid or corrupted.
+  /functions:
+    get:
+      summary: Get a list of all registered collector functions.
+      description: Collector functions are programs that can be executed on demand.
+      responses:
+        "200":
+          description: A JSON object containing one object per supported function.
   /weights:
     get:
       summary: "Analyze all the metrics using an algorithm and score them accordingly"
diff --git a/web/api/web_api_v1.c b/web/api/web_api_v1.c
index f261b8ee59..a7cf5d6223 100644
--- a/web/api/web_api_v1.c
+++ b/web/api/web_api_v1.c
@@ -1200,6 +1200,10 @@ inline int web_client_api_request_v1_info_fill_buffer(RRDHOST *host, BUFFER *wb)
     host_labels2json(host, wb, 2);
     buffer_strcat(wb, "\t},\n");
 
+    buffer_strcat(wb, "\t\"functions\": {\n");
+    host_functions2json(host, wb, 2, "\"", "\"");
+    buffer_strcat(wb, "\t},\n");
+
     buffer_strcat(wb, "\t\"collectors\": [");
     chartcollectors2json(host, wb);
     buffer_strcat(wb, "\n\t],\n");
@@ -1250,7 +1254,7 @@ inline int web_client_api_request_v1_info_fill_buffer(RRDHOST *host, BUFFER *wb)
 #ifdef ENABLE_COMPRESSION
     if(host->sender){
         buffer_strcat(wb, "\t\"stream-compression\": ");
-        buffer_strcat(wb, (host->sender->rrdpush_compression ? "true" : "false"));
+        buffer_strcat(wb, (host->sender->flags & SENDER_FLAG_COMPRESSION) ? "true" : "false");
         buffer_strcat(wb, ",\n");
     }else{
         buffer_strcat(wb, "\t\"stream-compression\": null,\n");
@@ -1483,6 +1487,53 @@ int web_client_api_request_v1_weights(RRDHOST *host, struct web_client *w, char
     return web_client_api_request_v1_weights_internal(host, w, url, WEIGHTS_METHOD_ANOMALY_RATE, WEIGHTS_FORMAT_CONTEXTS);
 }
 
+int web_client_api_request_v1_function(RRDHOST *host, struct web_client *w, char *url) {
+    if (!netdata_ready)
+        return HTTP_RESP_BACKEND_FETCH_FAILED;
+
+    int timeout = 0;
+    const char *function = NULL;
+
+    while (url) {
+        char *value = mystrsep(&url, "&");
+        if (!value || !*value)
+            continue;
+
+        char *name = mystrsep(&value, "=");
+        if (!name || !*name)
+            continue;
+
+        if (!strcmp(name, "function"))
+            function = value;
+
+        else if (!strcmp(name, "timeout"))
+            timeout = (int) strtoul(value, NULL, 0);
+    }
+
+    BUFFER *wb = w->response.data;
+    buffer_flush(wb);
+    wb->contenttype = CT_APPLICATION_JSON;
+    buffer_no_cacheable(wb);
+
+    return rrd_call_function_and_wait(host, wb, timeout, function);
+}
+
+int web_client_api_request_v1_functions(RRDHOST *host, struct web_client *w, char *url __maybe_unused) {
+    if (!netdata_ready)
+        return HTTP_RESP_BACKEND_FETCH_FAILED;
+
+    BUFFER *wb = w->response.data;
+    buffer_flush(wb);
+    wb->contenttype = CT_APPLICATION_JSON;
+    buffer_no_cacheable(wb);
+
+    buffer_strcat(wb, "{\n");
+    host_functions2json(host, wb, 1, "\"", "\"");
+    buffer_strcat(wb, "}");
+
+    return HTTP_RESP_OK;
+}
+
 #ifndef ENABLE_DBENGINE
 int web_client_api_request_v1_dbengine_stats(RRDHOST *host, struct web_client *w, char *url) {
     return HTTP_RESP_NOT_FOUND;
@@ -1585,47 +1636,57 @@ int web_client_api_request_v1_dbengine_stats(RRDHOST *host __maybe_unused, struc
 }
 #endif
 
+#ifdef NETDATA_DEV_MODE
+#define ACL_DEV_OPEN_ACCESS WEB_CLIENT_ACL_DASHBOARD
+#else
+#define ACL_DEV_OPEN_ACCESS 0
+#endif
+
 static struct api_command {
     const char *command;
     uint32_t hash;
     WEB_CLIENT_ACL acl;
     int (*callback)(RRDHOST *host, struct web_client *w, char *url);
 } api_commands[] = {
-    { "info",            0, WEB_CLIENT_ACL_DASHBOARD, web_client_api_request_v1_info },
-    { "data",            0, WEB_CLIENT_ACL_DASHBOARD, web_client_api_request_v1_data },
-    { "chart",           0, WEB_CLIENT_ACL_DASHBOARD, web_client_api_request_v1_chart },
-    { "charts",          0, WEB_CLIENT_ACL_DASHBOARD, web_client_api_request_v1_charts },
-    { "context",         0, WEB_CLIENT_ACL_DASHBOARD, web_client_api_request_v1_context },
-    { "contexts",        0, WEB_CLIENT_ACL_DASHBOARD, web_client_api_request_v1_contexts },
-    { "archivedcharts",  0, WEB_CLIENT_ACL_DASHBOARD, web_client_api_request_v1_archivedcharts },
+    { "info",            0, WEB_CLIENT_ACL_DASHBOARD | WEB_CLIENT_ACL_ACLK, web_client_api_request_v1_info },
+    { "data",            0, WEB_CLIENT_ACL_DASHBOARD | WEB_CLIENT_ACL_ACLK, web_client_api_request_v1_data },
+    { "chart",           0, WEB_CLIENT_ACL_DASHBOARD | WEB_CLIENT_ACL_ACLK, web_client_api_request_v1_chart },
+    { "charts",          0, WEB_CLIENT_ACL_DASHBOARD | WEB_CLIENT_ACL_ACLK, web_client_api_request_v1_charts },
+    { "context",         0, WEB_CLIENT_ACL_DASHBOARD | WEB_CLIENT_ACL_ACLK, web_client_api_request_v1_context },
+    { "contexts",        0, WEB_CLIENT_ACL_DASHBOARD | WEB_CLIENT_ACL_ACLK, web_client_api_request_v1_contexts },
+    { "archivedcharts",  0, WEB_CLIENT_ACL_DASHBOARD | WEB_CLIENT_ACL_ACLK, web_client_api_request_v1_archivedcharts },
 
     // registry checks the ACL by itself, so we allow everything
-    { "registry",        0, WEB_CLIENT_ACL_NOCHECK, web_client_api_request_v1_registry },
+    { "registry",        0, WEB_CLIENT_ACL_NOCHECK, web_client_api_request_v1_registry },
 
     // badges can be fetched with both dashboard and badge permissions
-    { "badge.svg",       0, WEB_CLIENT_ACL_DASHBOARD|WEB_CLIENT_ACL_BADGE, web_client_api_request_v1_badge },
+    { "badge.svg",       0, WEB_CLIENT_ACL_DASHBOARD | WEB_CLIENT_ACL_BADGE | WEB_CLIENT_ACL_ACLK, web_client_api_request_v1_badge },
 
-    { "alarms",          0, WEB_CLIENT_ACL_DASHBOARD, web_client_api_request_v1_alarms },
-    { "alarms_values",   0, WEB_CLIENT_ACL_DASHBOARD, web_client_api_request_v1_alarms_values },
-    { "alarm_log",       0, WEB_CLIENT_ACL_DASHBOARD, web_client_api_request_v1_alarm_log },
-    { "alarm_variables", 0, WEB_CLIENT_ACL_DASHBOARD, web_client_api_request_v1_alarm_variables },
-    { "alarm_count",     0, WEB_CLIENT_ACL_DASHBOARD, web_client_api_request_v1_alarm_count },
-    { "allmetrics",      0, WEB_CLIENT_ACL_DASHBOARD, web_client_api_request_v1_allmetrics },
+    { "alarms",          0, WEB_CLIENT_ACL_DASHBOARD | WEB_CLIENT_ACL_ACLK, web_client_api_request_v1_alarms },
+    { "alarms_values",   0, WEB_CLIENT_ACL_DASHBOARD | WEB_CLIENT_ACL_ACLK, web_client_api_request_v1_alarms_values },
+    { "alarm_log",       0, WEB_CLIENT_ACL_DASHBOARD | WEB_CLIENT_ACL_ACLK, web_client_api_request_v1_alarm_log },
+    { "alarm_variables", 0, WEB_CLIENT_ACL_DASHBOARD | WEB_CLIENT_ACL_ACLK, web_client_api_request_v1_alarm_variables },
+    { "alarm_count",     0, WEB_CLIENT_ACL_DASHBOARD | WEB_CLIENT_ACL_ACLK, web_client_api_request_v1_alarm_count },
+    { "allmetrics",      0, WEB_CLIENT_ACL_DASHBOARD | WEB_CLIENT_ACL_ACLK, web_client_api_request_v1_allmetrics },
 
 #if defined(ENABLE_ML)
-    { "ml_info",         0, WEB_CLIENT_ACL_DASHBOARD, web_client_api_request_v1_ml_info },
-    { "ml_models",       0, WEB_CLIENT_ACL_DASHBOARD, web_client_api_request_v1_ml_models },
+    { "anomaly_events",     0, WEB_CLIENT_ACL_DASHBOARD | WEB_CLIENT_ACL_ACLK, web_client_api_request_v1_anomaly_events },
+    { "anomaly_event_info", 0, WEB_CLIENT_ACL_DASHBOARD | WEB_CLIENT_ACL_ACLK, web_client_api_request_v1_anomaly_event_info },
+    { "ml_info",            0, WEB_CLIENT_ACL_DASHBOARD | WEB_CLIENT_ACL_ACLK, web_client_api_request_v1_ml_info },
 #endif
 
-    { "manage/health",       0, WEB_CLIENT_ACL_MGMT, web_client_api_request_v1_mgmt_health },
-    { "aclk",                0, WEB_CLIENT_ACL_DASHBOARD, web_client_api_request_v1_aclk_state },
-    { "metric_correlations", 0, WEB_CLIENT_ACL_DASHBOARD, web_client_api_request_v1_metric_correlations },
-    { "weights",             0, WEB_CLIENT_ACL_DASHBOARD, web_client_api_request_v1_weights },
+    { "manage/health",       0, WEB_CLIENT_ACL_MGMT | WEB_CLIENT_ACL_ACLK, web_client_api_request_v1_mgmt_health },
+    { "aclk",                0, WEB_CLIENT_ACL_DASHBOARD | WEB_CLIENT_ACL_ACLK, web_client_api_request_v1_aclk_state },
+    { "metric_correlations", 0, WEB_CLIENT_ACL_DASHBOARD | WEB_CLIENT_ACL_ACLK, web_client_api_request_v1_metric_correlations },
+    { "weights",             0, WEB_CLIENT_ACL_DASHBOARD | WEB_CLIENT_ACL_ACLK, web_client_api_request_v1_weights },
+
+    { "function",            0, WEB_CLIENT_ACL_ACLK | ACL_DEV_OPEN_ACCESS, web_client_api_request_v1_function },
+    { "functions",           0, WEB_CLIENT_ACL_ACLK | ACL_DEV_OPEN_ACCESS, web_client_api_request_v1_functions },
 
-    { "dbengine_stats",      0, WEB_CLIENT_ACL_DASHBOARD, web_client_api_request_v1_dbengine_stats },
+    { "dbengine_stats",      0, WEB_CLIENT_ACL_DASHBOARD | WEB_CLIENT_ACL_ACLK, web_client_api_request_v1_dbengine_stats },
 
     // terminator
-    { NULL, 0, WEB_CLIENT_ACL_NONE, NULL },
+    { NULL,                  0, WEB_CLIENT_ACL_NONE, NULL },
 };
 
 inline int web_client_api_request_v1(RRDHOST *host, struct web_client *w, char *url) {
diff --git a/web/server/static/static-threaded.c b/web/server/static/static-threaded.c
index 8b9122f87f..26e9a47bda 100644
--- a/web/server/static/static-threaded.c
+++ b/web/server/static/static-threaded.c
@@ -196,7 +196,7 @@ static void *web_server_add_callback(POLLINFO *pi, short int *events, void *data
     }
 
 #ifdef ENABLE_HTTPS
-    if ((!web_client_check_unix(w)) && ( netdata_srv_ctx )) {
+    if ((!web_client_check_unix(w)) && (netdata_ssl_srv_ctx)) {
         if( sock_delnonblock(w->ifd) < 0 ){
             error("Web server cannot remove the non-blocking flag from socket %d",w->ifd);
         }
@@ -218,7 +218,7 @@ static void *web_server_add_callback(POLLINFO *pi, short int *events, void *data
         //The next two ifs are not together because I am reusing SSL structure
         if (!w->ssl.conn)
         {
-            w->ssl.conn = SSL_new(netdata_srv_ctx);
+            w->ssl.conn = SSL_new(netdata_ssl_srv_ctx);
             if ( w->ssl.conn ) {
                 SSL_set_accept_state(w->ssl.conn);
             } else {
diff --git a/web/server/web_client.c b/web/server/web_client.c
index 9865ce671b..ff485384f7 100644
--- a/web/server/web_client.c
+++ b/web/server/web_client.c
@@ -1018,7 +1018,7 @@ static inline HTTP_VALIDATION http_request_validate(struct web_client *w) {
                 // TODO -- ideally we we should avoid copying buffers around
                 snprintfz(w->last_url, NETDATA_WEB_REQUEST_URL_SIZE, "%s%s", w->decoded_url, w->decoded_query_string);
 #ifdef ENABLE_HTTPS
-                if ( (!web_client_check_unix(w)) && (netdata_srv_ctx) ) {
+                if ( (!web_client_check_unix(w)) && (netdata_ssl_srv_ctx) ) {
                     if ((w->ssl.conn) && ((w->ssl.flags & NETDATA_SSL_NO_HANDSHAKE) && (web_client_is_using_ssl_force(w) || web_client_is_using_ssl_default(w)) && (w->mode != WEB_CLIENT_MODE_STREAM)) ) {
                         w->header_parse_tries = 0;
                         w->header_parse_last_size = 0;
@@ -1054,7 +1054,7 @@ static inline ssize_t web_client_send_data(struct web_client *w,const void *buf,
 {
     ssize_t bytes;
 #ifdef ENABLE_HTTPS
-    if ( (!web_client_check_unix(w)) && (netdata_srv_ctx) ) {
+    if ( (!web_client_check_unix(w)) && (netdata_ssl_srv_ctx) ) {
         if ( ( w->ssl.conn ) && ( !w->ssl.flags ) ){
             bytes = SSL_write(w->ssl.conn,buf, len) ;
         } else {
@@ -1211,7 +1211,7 @@ static inline void web_client_send_http_header(struct web_client *w) {
     size_t count = 0;
     ssize_t bytes;
 #ifdef ENABLE_HTTPS
-    if ( (!web_client_check_unix(w)) && (netdata_srv_ctx) ) {
+    if ( (!web_client_check_unix(w)) && (netdata_ssl_srv_ctx) ) {
         if ( ( w->ssl.conn ) && ( !w->ssl.flags ) ){
             while((bytes = SSL_write(w->ssl.conn, buffer_tostring(w->response.header_output), buffer_strlen(w->response.header_output))) < 0) {
                 count++;
@@ -1915,7 +1915,7 @@ ssize_t web_client_receive(struct web_client *w)
         buffer_need_bytes(w->response.data, NETDATA_WEB_REQUEST_RECEIVE_SIZE);
 
 #ifdef ENABLE_HTTPS
-    if ( (!web_client_check_unix(w)) && (netdata_srv_ctx) ) {
+    if ( (!web_client_check_unix(w)) && (netdata_ssl_srv_ctx) ) {
         if ( ( w->ssl.conn ) && (!w->ssl.flags)) {
             bytes = SSL_read(w->ssl.conn, &w->response.data->buffer[w->response.data->len], (size_t) (left - 1));
         }else {
diff --git a/web/server/web_client.h b/web/server/web_client.h
index 50e08fb335..63a6f4b589 100644
--- a/web/server/web_client.h
+++ b/web/server/web_client.h
@@ -27,6 +27,7 @@ extern int web_enable_gzip, web_gzip_level, web_gzip_strategy;
 #define HTTP_RESP_INTERNAL_SERVER_ERROR 500
 #define HTTP_RESP_BACKEND_FETCH_FAILED 503
 #define HTTP_RESP_GATEWAY_TIMEOUT 504
+#define HTTP_RESP_BACKEND_RESPONSE_INVALID 591
 
 extern int respect_web_browser_do_not_track_policy;
 extern char *web_x_frame_options;
diff --git a/web/server/web_client_cache.c b/web/server/web_client_cache.c
index 5d7865762d..f7eb527239 100644
--- a/web/server/web_client_cache.c
+++ b/web/server/web_client_cache.c
@@ -9,7 +9,7 @@
 #ifdef ENABLE_HTTPS
 static void web_client_reuse_ssl(struct web_client *w) {
-    if (netdata_srv_ctx) {
+    if (netdata_ssl_srv_ctx) {
         if (w->ssl.conn) {
             SSL_clear(w->ssl.conn);
         }
@@ -48,7 +48,7 @@ static void web_client_free(struct web_client *w) {
     buffer_free(w->response.data);
     freez(w->user_agent);
 #ifdef ENABLE_HTTPS
-    if ((!web_client_check_unix(w)) && ( netdata_srv_ctx )) {
+    if ((!web_client_check_unix(w)) && (netdata_ssl_srv_ctx)) {
        if (w->ssl.conn) {
            SSL_free(w->ssl.conn);
            w->ssl.conn = NULL;