author    Michael Weiser <michael.weiser@gmx.de>  2019-05-07 18:58:21 +0000
committer Michael Weiser <michael.weiser@gmx.de>  2019-05-09 08:14:56 +0000
commit    0e066d44f63260881d2b72a05282e49e105c1791 (patch)
tree      a7f1abcd9696a1f4816af7572eda896ce0c80cfe /ruleset.conf.sample
parent    91712f60b19ab2227601a33364946e0482b58c58 (diff)
More robustly poll Cuckoo REST API jobs
Downloading the full list of jobs gets less and less efficient as the number of current and past jobs increases. There is no way to filter down to specific job IDs. The limit and offset parameters of the list action of the API cannot be used to achieve a similar effect because the job list is not sorted by job/task ID and the parameters seem only meant for iterating over the whole list in blocks, not for extracting specific jobs from it. The previous logic of determining the highest job ID at startup and requesting the next million entries from that offset on was therefore likely not working as expected, making us "blind" to status changes of jobs which ended up below our offset in the job list.

This change adjusts the CuckooAPI to work around this by making use of the list of running jobs we've had for some time now. Instead of getting a list of all jobs starting from the highest job ID we saw at startup, we get each job's status individually. While this makes for more requests, over a longer runtime it should produce less network traffic and reliably get us the data we need about our jobs.

Also, turn the shutdown_requested flag into an event so we can use its wait() method to also implement the poll interval and react immediately to a shutdown request.

Finally, switch to endless retrying of failed job status requests, paired with the individual request retry logic introduced earlier. On submission we still fail the submission process after timeouts or retries, on the assumption that without the job having been submitted to Cuckoo, our feedback to the client that the analysis failed will cause it to resubmit, still avoiding duplicates.

Closes #43.
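The polling scheme described above can be sketched as follows. This is a minimal illustration, not the actual CuckooAPI code from the commit: the class and method names are hypothetical, and the per-job status lookup assumes Cuckoo's /tasks/view/<id> REST endpoint. The key points it demonstrates are one status request per tracked job (instead of listing all jobs), endless retry of failed status requests on the next cycle, and a threading.Event whose wait() doubles as the poll-interval sleep and the shutdown signal.

```python
import json
import threading
import urllib.request


class CuckooJobPoller:
    """Sketch (hypothetical names): poll each tracked Cuckoo job's
    status individually and use an Event as both shutdown flag and
    poll-interval timer."""

    def __init__(self, api_url, poll_interval=5):
        self.api_url = api_url
        self.poll_interval = poll_interval
        self.shutdown_requested = threading.Event()
        self.running_jobs = {}  # job ID -> completion callback

    def fetch_status(self, job_id):
        # one request per job instead of downloading the full job list
        with urllib.request.urlopen(
                "%s/tasks/view/%d" % (self.api_url, job_id),
                timeout=10) as resp:
            return json.load(resp).get("task", {}).get("status")

    def poll_loop(self):
        # wait() sleeps for the poll interval but returns immediately
        # (and truthy) as soon as shutdown is requested
        while not self.shutdown_requested.wait(self.poll_interval):
            for job_id in list(self.running_jobs):
                try:
                    status = self.fetch_status(job_id)
                except OSError:
                    # endless retry: just try again on the next cycle
                    continue
                if status == "reported":
                    # job finished: drop it and notify the owner
                    self.running_jobs.pop(job_id)(job_id)

    def shut_down(self):
        self.shutdown_requested.set()
```

Using wait() instead of a plain sleep() means a shutdown request interrupts the poll delay immediately rather than after up to one full interval.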
Diffstat (limited to 'ruleset.conf.sample')
0 files changed, 0 insertions, 0 deletions