path: root/src
2021-02-06  Rewrite package organizational structure using DAG  (Matthias Beyer)
This patch reimplements the package orchestration functionality to rely on a DAG rather than a tree.

    A
   / \
  B   E
 / \   \
C   D   F

Before this change, the structure the packages were organized in for a build was a tree. That worked reasonably well for the initial development of butido, because a tree is a simple case and the implementation is rather simple, too. But packages and their dependencies are not always organized in a tree. Most of the time, they are organized in a DAG:

  .-> C -,
 /        \
D          > A
 \        /
  `-> B -´

This is a real-world example: A could be a common crypto-library that I do not want to name here. B and C could be libraries that use said crypto-library, and D could be a program that uses B and C. Because said crypto-library takes rather long to build, building it twice and throwing one result away is a no-go. A DAG as the organizational structure makes that issue go away entirely. Also, we can later implement checks whether the DAG contains multiple versions of the same library, if that is undesirable.

The change itself is rather big, frankly because it is a non-trivial change to replace the whole data structure and its handling in the orchestrator code.

First of all, we introduce the "daggy" library, which provides the DAG implementation on top of the popular "petgraph" library. The package `Tree` data structure was replaced by a package `Dag` data structure. This type implements the heavy lifting that is needed to load a package and all its dependencies from the `Repository` object.

The `JobTree` was also reimplemented; as `daggy::Dag` provides a convenient `map()` function, the implementation which transforms the package `Dag` into a job `Dag` is rather trivial. `crate::job::Dag` then provides the convenience `iter()` function to iterate over all elements in the DAG, providing a `JobDefinition` object for each node. The topology in which we traverse the DAG is not an issue, as we need to create tasks for all `JobDefinition`s anyway, so we do not care about traversal topology at all.

The `crate::package::Package` type got a `Hash` implementation, which is necessary to keep track of the mappings while reading the DAG from the repository. The implementation does not create the edges between the nodes in the DAG right when inserting, but afterwards. To keep track of the `daggy::NodeIndex`es, it keeps a Package -> NodeIndex mapping in a HashMap; thus, `Package` must implement `std::hash::Hash`.

Signed-off-by: Matthias Beyer <mail@beyermatthias.de>
Tested-by: Matthias Beyer <mail@beyermatthias.de>
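The two-pass construction described above (insert all nodes first, remember their indices, then add the edges) could look roughly like this. A minimal sketch using daggy; the `Package` type and the `build_dag` helper are hypothetical stand-ins, not butido's actual code:

    use std::collections::HashMap;
    use daggy::{Dag, NodeIndex};

    // Hypothetical stand-in for butido's package type.
    #[derive(Clone, PartialEq, Eq, Hash)]
    struct Package {
        name: String,
        version: String,
    }

    fn build_dag(
        packages: &[Package],
        deps: &[(Package, Package)],
    ) -> Dag<Package, ()> {
        let mut dag: Dag<Package, ()> = Dag::new();
        let mut idx: HashMap<Package, NodeIndex> = HashMap::new();

        // Pass 1: insert all nodes, remembering Package -> NodeIndex.
        // This mapping is why Package must implement std::hash::Hash.
        for p in packages {
            let i = dag.add_node(p.clone());
            idx.insert(p.clone(), i);
        }

        // Pass 2: add the edges afterwards; daggy rejects any edge
        // that would introduce a cycle.
        for (parent, child) in deps {
            dag.add_edge(idx[parent], idx[child], ())
                .expect("dependency cycle detected");
        }

        dag
    }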
2021-02-05  Remove "tree" from submit  (Matthias Beyer)
This removes the "tree" column from the "submits" table. This is because we do not store the build-tree in the database anymore. We don't actually need this feature and we can always re-build the tree from an old commit in the repository. Thus, this is not required anymore. Also, it is less easy to do as soon as the internal implementation changes from a "tree" structure to a "DAG" structure. Signed-off-by: Matthias Beyer <mail@beyermatthias.de>
2021-02-05  Optimize: Don't duplicate job UUID  (Matthias Beyer)
Signed-off-by: Matthias Beyer <mail@beyermatthias.de>
2021-02-04  Fix: Ensure job UUIDs are propagated through whole tree  (Matthias Beyer)
This patch changes the propagation of results so that the UUIDs of the jobs producing the artifacts are propagated through the whole tree.

The issue at hand was that, with a dependency tree like this:

    C -> B -> A

the results from A were propagated to B and the results from B were propagated to C. But because the implementation did not track this properly, the results from A were included in the results from B and the UUID from A was dropped. This was an issue because the implementation waited for _all_ dependencies (direct and transitive) by their job UUID. This means that C waited on a UUID that described the job for A, but never received it, which caused everything to fail.

This patch changes the algorithm to report not only the job's own UUID and all of its artifacts, but all artifacts with the UUID of their producing job attached, which solves the issue. The root of the tree (the `Orchestrator`) simply drops the UUIDs before returning the artifacts to its caller.

Signed-off-by: Matthias Beyer <matthias.beyer@atos.net>
Tested-by: Matthias Beyer <matthias.beyer@atos.net>
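The fix amounts to forwarding every artifact together with the UUID of the job that produced it, instead of collapsing child results into the sender's own UUID. A schematic sketch; `Artifact` and `report` are hypothetical stand-ins, not butido's actual code:

    use tokio::sync::mpsc::Sender;
    use uuid::Uuid;

    // Hypothetical stand-in for butido's artifact type.
    struct Artifact;

    async fn report(
        tx: Sender<(Uuid, Vec<Artifact>)>,
        own_uuid: Uuid,
        own_artifacts: Vec<Artifact>,
        child_results: Vec<(Uuid, Vec<Artifact>)>,
    ) {
        // Send own results, then forward every child result
        // unchanged, so the producing job's UUID stays attached all
        // the way up to the root.
        let _ = tx.send((own_uuid, own_artifacts)).await;
        for child in child_results {
            let _ = tx.send(child).await;
        }
    }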
2021-02-03  Be a bit more verbose in debug output here  (Matthias Beyer)
Signed-off-by: Matthias Beyer <matthias.beyer@atos.net>
2021-02-03  Fix: Format UUIDs of missing job results to be human-readable before constructing error object  (Matthias Beyer)
Signed-off-by: Matthias Beyer <matthias.beyer@atos.net>
2021-02-02  Fix: Hash decoding for sha256  (Matthias Beyer)
This seems strange, but it works. I don't know whether this is right, though.

Signed-off-by: Matthias Beyer <matthias.beyer@atos.net>
Tested-by: Matthias Beyer <matthias.beyer@atos.net>
2021-02-02  Add more error context information  (Matthias Beyer)
Signed-off-by: Matthias Beyer <matthias.beyer@atos.net>
2021-02-02  Add bytes written/total bytes to status bar message  (Matthias Beyer)
Signed-off-by: Matthias Beyer <matthias.beyer@atos.net>
2021-02-02  Fix: Download bar should be joined in blocking tokio task  (Matthias Beyer)
Signed-off-by: Matthias Beyer <matthias.beyer@atos.net>
2021-02-02  Merge branch 'fix-progress'  (Matthias Beyer)
2021-02-02  Fix: Progress reporting  (Matthias Beyer)
This patch fixes progress reporting. Because our progress-bar-creating helper initializes the bar with length 1, we have to set the length manually here. The bar has to be added to the multibar object right away, because otherwise it is rendered to the output directly, which gives us an ugly dead progress bar. If the length is set after adding the bar to the multibar object, this does not happen.

Signed-off-by: Matthias Beyer <matthias.beyer@atos.net>
Tested-by: Matthias Beyer <matthias.beyer@atos.net>
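The ordering constraint described above, as a minimal indicatif sketch; `make_bar` and `total` are hypothetical names:

    use indicatif::{MultiProgress, ProgressBar};

    fn make_bar(multibar: &MultiProgress, total: u64) -> ProgressBar {
        // Add the bar to the MultiProgress right away, so it is
        // rendered as part of the multibar instead of being printed
        // as a detached, dead bar ...
        let bar = multibar.add(ProgressBar::new(1));
        // ... and only then fix up the length, because the helper
        // that created the bar used a placeholder length of 1.
        bar.set_length(total);
        bar
    }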
2021-02-02  Fix: Make sure that bar is moved to LogReceiver, drop it afterwards  (Matthias Beyer)
This patch moves the bar into the LogReceiver instead of lending a borrow to it. Because ProgressBar::clone() is cheap (according to the indicatif documentation it is just an Arc<> holding the actual object), we can do this without worrying about overhead.

Signed-off-by: Matthias Beyer <matthias.beyer@atos.net>
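Owning a clone rather than a borrow keeps lifetimes trivial and lets the receiver drop the bar when it is done. A sketch, with `LogReceiver` reduced to a hypothetical skeleton:

    use indicatif::ProgressBar;

    // Reduced stand-in for butido's LogReceiver.
    struct LogReceiver {
        bar: ProgressBar,
    }

    impl LogReceiver {
        fn new(bar: ProgressBar) -> Self {
            LogReceiver { bar }
        }

        fn finish(self) {
            // The bar is owned, so it can be finished and dropped
            // right here; no lifetime juggling required.
            self.bar.finish_with_message("done");
        }
    }

    fn handoff(bar: &ProgressBar) -> LogReceiver {
        // clone() only bumps a reference count; both handles drive
        // the same underlying progress state.
        LogReceiver::new(bar.clone())
    }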
2021-02-02  Add network-mode setting for endpoints  (Matthias Beyer)
This patch adds the ability to set network mode for an endpoint. This means that all containers on the endpoint are started with, for example, --net=host (if that is desired).

Signed-off-by: Matthias Beyer <matthias.beyer@atos.net>
Tested-by: Matthias Beyer <matthias.beyer@atos.net>
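With shiplift (the Docker client butido uses), a per-endpoint network mode would end up in the container options roughly like this. A sketch, with the configuration plumbing left hypothetical:

    use shiplift::ContainerOptions;

    // `network_mode` would come from the endpoint configuration,
    // e.g. Some("host"); this helper is hypothetical.
    fn container_opts(image: &str, network_mode: Option<&str>) -> ContainerOptions {
        let mut builder = ContainerOptions::builder(image);
        if let Some(mode) = network_mode {
            // Equivalent of `docker run --net=<mode>`.
            builder.network_mode(mode);
        }
        builder.build()
    }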
2021-02-02  Add trace output showing which packages will be verified  (Matthias Beyer)
Signed-off-by: Matthias Beyer <matthias.beyer@atos.net>
2021-02-02  Add tracing output  (Matthias Beyer)
Signed-off-by: Matthias Beyer <matthias.beyer@atos.net>
2021-02-02  Add tracing output  (Matthias Beyer)
Signed-off-by: Matthias Beyer <matthias.beyer@atos.net>
2021-02-02  Merge branch 'verification-async' into master  (Matthias Beyer)
2021-02-02  Make source verification completely async  (Matthias Beyer)
Signed-off-by: Matthias Beyer <matthias.beyer@atos.net>
2021-02-02  Fix: Receive artifacts _after_ checking whether jobs resulted in error  (Matthias Beyer)
This caused the program to never return if the running jobs resulted in an error and no artifact was sent to the parent - which caused the tokio::join!() to never return, thus the futures not to be polled, and thus the whole program to sleep in a strange state that looked as if some filesystem operations did not return.

Signed-off-by: Matthias Beyer <matthias.beyer@atos.net>
Tested-by: Matthias Beyer <matthias.beyer@atos.net>
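The fix in schematic form: inspect the job results for errors before draining the artifact channel, so a failed job that never sent an artifact cannot make the receiver wait forever. Types and names here are hypothetical stand-ins:

    use tokio::sync::mpsc::Receiver;

    struct Artifact;
    type JobResult = Result<(), String>;

    async fn collect(
        results: Vec<JobResult>,
        mut artifacts: Receiver<Artifact>,
    ) -> Result<Vec<Artifact>, String> {
        // Check for failed jobs first: a failed job never sent an
        // artifact, and awaiting the channel would block forever.
        for r in results {
            r?;
        }

        // Only now drain the artifact channel; every job succeeded,
        // so every expected artifact was actually sent.
        let mut collected = Vec::new();
        while let Some(a) = artifacts.recv().await {
            collected.push(a);
        }
        Ok(collected)
    }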
2021-02-02  Fix: fn does not have to be async  (Matthias Beyer)
Signed-off-by: Matthias Beyer <matthias.beyer@atos.net>
2021-02-02  Fix: wait properly for multibar join  (Matthias Beyer)
Signed-off-by: Matthias Beyer <matthias.beyer@atos.net>
2021-01-30  Update tokio: 0.2 -> 1.0, shiplift  (Matthias Beyer)
Because tokio 1.0 does not ship with the Stream trait, this patch also introduces tokio_stream as a new dependency. For more information, see: https://docs.rs/tokio/1.0.3/tokio/stream/index.html

Signed-off-by: Matthias Beyer <mail@beyermatthias.de>
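After the upgrade, stream combinators come from the tokio_stream crate instead of tokio itself; for example:

    use tokio_stream::{self as stream, StreamExt};

    #[tokio::main]
    async fn main() {
        // tokio 1.0 moved the stream utilities out of the main
        // crate; tokio_stream provides the adapters and StreamExt.
        let mut s = stream::iter(vec![1, 2, 3]).map(|n| n * 2);
        while let Some(n) = s.next().await {
            println!("{}", n);
        }
    }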
2021-01-30  itertools: 0.9 -> 0.10  (Matthias Beyer)
We don't need resiter::Map here anymore because itertools 0.10 provides a map_ok() extension. Signed-off-by: Matthias Beyer <mail@beyermatthias.de>
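The replacement in a nutshell: map_ok() (available since itertools 0.10) maps a closure over the Ok values of an iterator of Results, which is what resiter::Map was used for:

    use itertools::Itertools;

    fn main() {
        let results: Vec<Result<u32, String>> =
            vec![Ok(1), Err("boom".into()), Ok(3)];

        // map_ok applies the closure only to Ok values and passes
        // Err values through untouched.
        let doubled: Vec<Result<u32, String>> =
            results.into_iter().map_ok(|n| n * 2).collect();

        assert_eq!(doubled, vec![Ok(2), Err("boom".into()), Ok(6)]);
    }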
2021-01-29  Add documentation on how the Orchestrator works  (Matthias Beyer)
Signed-off-by: Matthias Beyer <matthias.beyer@atos.net>
2021-01-25  Merge branch 'more-parallelism' into master  (Matthias Beyer)
2021-01-25  Outsource receiving, ensure we received it all  (Matthias Beyer)
Signed-off-by: Matthias Beyer <mail@beyermatthias.de>
2021-01-25  Make progress bar message format uniform  (Matthias Beyer)
Signed-off-by: Matthias Beyer <matthias.beyer@atos.net>
Tested-by: Matthias Beyer <matthias.beyer@atos.net>
2021-01-25  Let the JobHandle::run() return a Vec<Artifact>  (Matthias Beyer)
Before that change, it returned the dbmodels::Artifact objects, for which we needed to fetch the filestore::Artifact again. This change removes that extra round trip (improving runtime, of course).

Signed-off-by: Matthias Beyer <matthias.beyer@atos.net>
Tested-by: Matthias Beyer <matthias.beyer@atos.net>
2021-01-25  Reimplement Orchestrator::run()  (Matthias Beyer)
This reimplements the Orchestrator::run() function _again_. Commit 889649ac16367fe671ce61363bb6ce82531e5a6b was the basis for this work, improving the baseline so we can take a step further in this commit.

The approach before the change from 889649ac16367fe671ce61363bb6ce82531e5a6b had one flaw. In the following scenario:

    A
   / \
  B   E
 / \   \
C   D   F

the nodes C, D and F are selected and then run. After they all succeeded, the next iteration is checked, and yields that B and E can be built. But if F takes extremely long, B and E both have to wait until it is ready (because that's how the implementation works), although B could be built as soon as C and D are ready.

This patch changes the implementation to the following:

1. For each job, there is a task.
2. The task has a channel where it receives results from its dependencies. In the above example, B would receive the results of the job runs for C and D, and E would receive the result from the job run of F.
3. The task also has a sender where it can send its resulting artifacts to a parent task. The task _also_ sends the results of its children. This way we propagate the built artifacts up to the root node.

All these tasks are started concurrently (see the sketch below). The "root" task sends the result to the orchestrator. The task itself is responsible for sending the job to the scheduler and processing the result. If the job errored, the task sends that to its parent. If a child errored, the task aborts its own work and propagates that error.

What does not yet work in this commit:

* Artifacts that were built before the error occurred are not reported yet.
* The staging/release stores may contain artifacts that could be re-used. They are completely ignored for now.

Signed-off-by: Matthias Beyer <matthias.beyer@atos.net>
Tested-by: Matthias Beyer <matthias.beyer@atos.net>
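The per-job task described above, reduced to its channels. A schematic sketch with hypothetical stand-in types, not the actual orchestrator code:

    use tokio::sync::mpsc::{Receiver, Sender};

    // Hypothetical stand-ins.
    struct Artifact;
    type JobOutput = Vec<Artifact>;

    // One task per job: wait for the outputs of all direct
    // dependencies, run the own job, then send the own output plus
    // all child outputs upwards, so artifacts reach the root.
    async fn job_task(
        n_deps: usize,
        mut from_children: Receiver<JobOutput>,
        to_parent: Sender<JobOutput>,
    ) {
        let mut inputs: Vec<Artifact> = Vec::new();
        for _ in 0..n_deps {
            if let Some(output) = from_children.recv().await {
                inputs.extend(output);
            }
        }

        // Placeholder for submitting the job to the scheduler.
        let own: JobOutput = run_job(&inputs).await;

        // Forward own artifacts plus the accumulated child artifacts.
        let mut all = own;
        all.extend(inputs);
        let _ = to_parent.send(all).await;
    }

    async fn run_job(_inputs: &[Artifact]) -> JobOutput {
        vec![Artifact]
    }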
2021-01-25  Do not deny missing docs, we haven't written them yet  (Matthias Beyer)
Signed-off-by: Matthias Beyer <mail@beyermatthias.de>
2021-01-25  Do not deny missing copy impls, we have too many  (Matthias Beyer)
Signed-off-by: Matthias Beyer <mail@beyermatthias.de>
2021-01-25  Do not deny missing debug impls, we have too many  (Matthias Beyer)
Signed-off-by: Matthias Beyer <mail@beyermatthias.de>
2021-01-25  Do not deny unused results, we have too many  (Matthias Beyer)
Signed-off-by: Matthias Beyer <mail@beyermatthias.de>
2021-01-25  Remove unnecessary qualification  (Matthias Beyer)
Signed-off-by: Matthias Beyer <matthias.beyer@atos.net>
2021-01-25  Fix: Use rand as _ to make lint happy  (Matthias Beyer)
Signed-off-by: Matthias Beyer <matthias.beyer@atos.net>
2021-01-25  Deny more things, sort denylist  (Matthias Beyer)
Signed-off-by: Matthias Beyer <mail@beyermatthias.de>
2021-01-25  Reimplement hash verification using streaming  (Matthias Beyer)
This patch re-implements hashing using streams and buffered readers instead of reading the full file into RAM before hashing it.

Signed-off-by: Matthias Beyer <matthias.beyer@atos.net>
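Streaming hashing in the style described, sketched with the sha2 and hex crates (assumed dependencies); chunk size and error handling are kept minimal:

    use std::fs::File;
    use std::io::{BufReader, Read};
    use sha2::{Digest, Sha256};

    // Hash a file without loading it into RAM all at once: read
    // fixed-size chunks through a buffered reader and feed them to
    // the hasher.
    fn sha256_of_file(path: &std::path::Path) -> std::io::Result<String> {
        let mut reader = BufReader::new(File::open(path)?);
        let mut hasher = Sha256::new();
        let mut buf = [0u8; 8192];

        loop {
            let n = reader.read(&mut buf)?;
            if n == 0 {
                break;
            }
            hasher.update(&buf[..n]);
        }

        Ok(hex::encode(hasher.finalize()))
    }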
2021-01-25  Implement sha256/sha512 support  (Matthias Beyer)
Signed-off-by: Matthias Beyer <matthias.beyer@atos.net>
2021-01-25  Fix: Filter each entry, strip prefix  (Matthias Beyer)
Signed-off-by: Matthias Beyer <matthias.beyer@atos.net>
2021-01-25  Refactor: Move package name regex building to helper function  (Matthias Beyer)
Signed-off-by: Matthias Beyer <mail@beyermatthias.de>
2021-01-21  Fix clippy: Remove noop drop() call  (Matthias Beyer)
Signed-off-by: Matthias Beyer <mail@beyermatthias.de>
2021-01-21  Fix clippy: Do not clone() copy type  (Matthias Beyer)
Signed-off-by: Matthias Beyer <mail@beyermatthias.de>
2021-01-21  Remove trailing whitespace  (Matthias Beyer)
Signed-off-by: Matthias Beyer <mail@beyermatthias.de>
2021-01-21  Remove RunnableJob::package_environment()  (Matthias Beyer)
This functionality is not required anymore, as we put the whole package definition into the job script interpolation anyway.

Signed-off-by: Matthias Beyer <matthias.beyer@atos.net>
2021-01-21  Reimplement: Orchestrator::run()  (Matthias Beyer)
This patch reimplements the running of the computed jobs.

The old implementation was structured as follows:

1. Compute a Tree of dependencies for the requested package
2. Make sets of this tree (see below)
3. For each set
   3.1. Run the set in parallel by submitting each job in the set to the scheduler
   3.2. Collect outputs and errors
   3.3. Record outputs and return errors (if any)

The complexity here was the computing of the JobSets, but also the running of each job in a set in parallel. The code was non-trivial to understand. But that's not even the biggest concern with this approach. Consider the following tree of jobs:

      A
     / \
    B   E
   / \   \
  C   D   F
 / \
G   H
     \
      I

Each node here represents a package, the edges represent dependencies on the lower-hanging package. This tree would result in 5 sets of jobs:

    [
      [ I ]
      [ G, H ]
      [ C, D, F ]
      [ B, E ]
      [ A ]
    ]

because each "layer" in the tree would be run in parallel. It is easy to see that, in the tree from above, the jobs for [ I, G, D, F ] could run in parallel right away, because they do not have dependencies.

The reimplementation also has another (crucial) benefit: the implementation does not depend on a structure of artifact path names anymore. Before, the artifacts needed to have a name as follows:

    <name of the package>-<version of the package>.<something>

which was extremely restrictive. With the changes from this patch, the implementation does not depend on such a format anymore. Instead, dependencies are associated with a job by the outputs of the jobs run for the dependent packages. That means, considering the above tree of packages:

    deps_of(B) = outputs_of(job_for(C)) + outputs_of(job_for(D))

in text: the dependencies of package B are the outputs of the job run for package C plus the outputs of the job run for package D. With that change in place, the outputs of a job run for a package can have arbitrary file names, and as long as the build script for the package can process them, everything is fine.

The new algorithm that solves the issue is rather simple:

1. Hold a list of errors
2. Hold a list of artifacts that were built
3. Hold a list of jobs that were run
4. Iterate over all jobs, filtered by
   - If the job appears in the "already run jobs" list, ignore it
   - If a job has dependencies (on outputs of other jobs) that do not appear in the "already run jobs" list, ignore it (for now)
5. Run these jobs, and for each job:
   5.1. Take the job UUID and put it in the "already run jobs" list.
   5.2. Take the result of the job:
        5.2.1. If it is an error, put it in the "list of errors"
        5.2.2. If it is ok, put the artifact in the "list of artifacts"
6. If the list of errors is not empty, goto 9
7. If all jobs are in the "already run jobs" list, goto 9
8. Goto 4
9. Return all artifacts and all errors

(A condensed sketch of this loop follows below.)

Because this approach is fundamentally different from the previous approach, a lot of things had to be rewritten:

- The `JobSet` type was completely removed.

- There is a new type `crate::job::Tree` that gets built from the `crate::package::Tree`. It is a mapping of a UUID (the job UUID) to a `JobDefinition`. The `JobDefinition` type is
  - a Job
  - a list of UUIDs of other jobs, whose outputs this job depends on.

  It is therefore a mapping of `Job -> outputs(jobs_of(dependencies))`.

  The `crate::job::Tree` type is now responsible for building a `Job` object for each `crate::package::Package` object from the `crate::package::Tree` object. Because the `crate::package::Tree` object contains all required packages for the complete build, the implementation of `crate::job::Tree::build_tree()` does not check sanity; it is assumed that the input tree to the function contains all mappings. Despite the name `crate::job::Tree` ("Tree"), the actual structure stored in the type is not a real tree.

- The `MergedStores::get_artifact_by_path()` function was adapted, because the previous implementation used `StagingStore::load_from_path()`, which tried to load the file from the filesystem and put it into the internal map, and failed if it was already there. The adaptation checks whether the artifact already exists in the internal map and returns that object instead (for the release store accordingly).

- The interface of the `RunnableJob::build_from_job()` function was adapted, as this function does not need to access the `MergedStores` object anymore to load dependency artifacts from the filesystem. Instead, these artifacts are passed to the function now.

- The Orchestrator code
  - got a type alias `JobResult`, which represents the result of a job run and is either
    - a number of artifacts (for optimization reasons, with their associated database artifact entries)
    - or an error with the UUID of the job that failed (again, for optimization reasons)
  - got an implementation of the algorithm described above
  - got a new implementation of run_job(), which
    - fetches the paths of dependency artifacts from the database, using the job UUIDs from the JobDefinition object
    - creates the RunnableJob object for that
    - schedules the RunnableJob object in the scheduler
    - for each output artifact (the database object representing it), gets the filesystem Artifact object for it

Signed-off-by: Matthias Beyer <matthias.beyer@atos.net>
Tested-by: Matthias Beyer <matthias.beyer@atos.net>
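Steps 4 through 8 of the algorithm above boil down to a fixed-point loop over the job mapping. A condensed sketch with hypothetical stand-in types, sequential rather than parallel for brevity:

    use std::collections::{HashMap, HashSet};
    use uuid::Uuid;

    // Heavily reduced stand-ins.
    struct JobDefinition {
        dependencies: Vec<Uuid>,
    }
    struct Artifact;

    fn run_all(jobs: &HashMap<Uuid, JobDefinition>) -> (Vec<Artifact>, Vec<String>) {
        let mut errors: Vec<String> = Vec::new();
        let mut artifacts: Vec<Artifact> = Vec::new();
        let mut done: HashSet<Uuid> = HashSet::new();

        while done.len() < jobs.len() && errors.is_empty() {
            // Select all jobs that have not run yet and whose
            // dependencies have all run already (step 4).
            let runnable: Vec<Uuid> = jobs
                .iter()
                .filter(|(id, job)| {
                    !done.contains(*id)
                        && job.dependencies.iter().all(|d| done.contains(d))
                })
                .map(|(id, _)| **id)
                .collect();

            if runnable.is_empty() {
                break; // no progress possible; only happens on a cycle
            }

            // Run them, recording UUIDs, artifacts and errors (step 5).
            for id in runnable {
                done.insert(id);
                match run_job(id) {
                    Ok(mut a) => artifacts.append(&mut a),
                    Err(e) => errors.push(e),
                }
            }
        }

        // Step 9: return everything that was built, plus all errors.
        (artifacts, errors)
    }

    fn run_job(_id: Uuid) -> Result<Vec<Artifact>, String> {
        Ok(vec![Artifact])
    }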
2021-01-21  Add derive(Debug) for FillArtifactPathDisplay  (Matthias Beyer)
Signed-off-by: Matthias Beyer <matthias.beyer@atos.net>
2021-01-21  Add MergedStores::get_artifact_by_path()  (Matthias Beyer)
Signed-off-by: Matthias Beyer <matthias.beyer@atos.net>
2021-01-21  Add FullArtifactPath::exists()  (Matthias Beyer)
Signed-off-by: Matthias Beyer <matthias.beyer@atos.net>
2021-01-21  impl From<Artifact> for JobResource  (Matthias Beyer)
Signed-off-by: Matthias Beyer <matthias.beyer@atos.net>