|
Signed-off-by: Matthias Beyer <matthias.beyer@atos.net>
|
|
This patch adds a helper trait to display maps from UUID -> Error.
It is introduced purely for convenience and to reduce code duplication. Execution
speed is not relevant at all in this case: if this code runs, we are already
handling errors and aborting the execution anyway.
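As a sketch of what such a convenience trait could look like (hypothetical code; `String` stands in for the actual `Uuid` and error types used by butido):

```rust
use std::collections::HashMap;

// Hypothetical helper trait; `String` stands in for `uuid::Uuid` and the
// real error type. Speed does not matter: this only runs on the error path.
trait DisplayErrorMap {
    fn display_error_map(&self) -> String;
}

impl DisplayErrorMap for HashMap<String, String> {
    fn display_error_map(&self) -> String {
        self.iter()
            .map(|(id, err)| format!("Job {}: {}", id, err))
            .collect::<Vec<_>>()
            .join("\n")
    }
}

fn main() {
    let mut errors = HashMap::new();
    errors.insert("job-1".to_string(), "build failed".to_string());
    // One call site instead of repeating the formatting loop everywhere.
    println!("{}", errors.display_error_map());
}
```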
Signed-off-by: Matthias Beyer <matthias.beyer@atos.net>
|
|
Signed-off-by: Matthias Beyer <matthias.beyer@atos.net>
|
|
Signed-off-by: Matthias Beyer <matthias.beyer@atos.net>
|
|
After this refactoring, the number of dependencies is calculated _before_ the
hashmaps for the artifacts/errors are allocated, so that we allocate
approximately the right amount of memory.
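The idea can be illustrated with the standard library's `HashMap::with_capacity` (a generic sketch, not butido's actual code):

```rust
use std::collections::HashMap;

fn main() {
    // Count the dependencies first, then allocate the maps with roughly the
    // right capacity, instead of growing them while collecting results.
    let dependencies = ["libA", "libB", "libC"];
    let n = dependencies.len();

    let mut artifacts: HashMap<&str, Vec<&str>> = HashMap::with_capacity(n);
    let errors: HashMap<&str, String> = HashMap::with_capacity(n);

    artifacts.insert("libA", vec!["libA-1.0.tar"]);
    // with_capacity guarantees at least the requested capacity up front.
    assert!(artifacts.capacity() >= n);
    assert!(errors.capacity() >= n);
}
```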
Signed-off-by: Matthias Beyer <matthias.beyer@atos.net>
|
|
Signed-off-by: Matthias Beyer <mail@beyermatthias.de>
|
|
|
|
If there are no "other tasks", an error cannot have happened on another task.
Thus, adapt the error message accordingly.
Signed-off-by: Matthias Beyer <matthias.beyer@atos.net>
|
|
This patch fixes a bug where butido did not rebuild a package if one of its
dependencies was rebuilt.
Of course, in a dependency chain with libA -> libB, if libB gets rebuilt, we
need to rebuild libA as well.
For this, a new wrapper type `ProducedArtifact` was added (private to the
orchestrator module) that holds that information.
This information is only necessary between the individual build workers. If we
know from all dependencies that the artifacts were reused, we can check for a
similar job in the database and reuse artifacts from that job as well. If one
dependency was built, we need to rebuild the current package as well.
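A minimal sketch of how such a wrapper type could look (the name `ProducedArtifact` is from this commit, but the exact shape and the `was_built()`/`must_rebuild()` helpers are assumptions):

```rust
use std::path::PathBuf;

// Hypothetical sketch: tag an artifact path with whether the artifact was
// freshly built or reused from an earlier job.
#[derive(Clone, Debug)]
enum ProducedArtifact {
    Built(PathBuf),
    Reused(PathBuf),
}

impl ProducedArtifact {
    fn was_built(&self) -> bool {
        matches!(self, ProducedArtifact::Built(_))
    }
}

// If any dependency was freshly built, the current package must be rebuilt;
// only if all were reused may we look for a similar job in the database.
fn must_rebuild(deps: &[ProducedArtifact]) -> bool {
    deps.iter().any(ProducedArtifact::was_built)
}

fn main() {
    let deps = vec![
        ProducedArtifact::Reused(PathBuf::from("libB-1.0.tar")),
        ProducedArtifact::Built(PathBuf::from("libC-2.0.tar")),
    ];
    assert!(must_rebuild(&deps));
}
```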
Signed-off-by: Matthias Beyer <matthias.beyer@atos.net>
|
|
Signed-off-by: Matthias Beyer <matthias.beyer@atos.net>
|
|
Signed-off-by: Matthias Beyer <matthias.beyer@atos.net>
|
|
Signed-off-by: Matthias Beyer <matthias.beyer@atos.net>
|
|
This fixes the passing of git commit hash and author information.
The changes in 2d72cbed2495517dba84ec4d46e5f521ff46412b did not add the
environment variables to the `find_artifact()` call when searching for
replacement artifacts.
Thus, calling the same build twice, with the staging directory from the first
call added in the second call, resulted in a full rebuild, although the
artifacts from the first call could be reused.
With this patch applied, the issue is fixed by adding the environment variables
to the environment passed to the `find_artifacts()` call.
Fixes: 2d72cbed2495517dba84ec4d46e5f521ff46412b ("Add feature to pass git author and git commit information to container")
Signed-off-by: Matthias Beyer <matthias.beyer@atos.net>
Tested-by: Matthias Beyer <matthias.beyer@atos.net>
|
|
0.16.1 has been published which fixes the bug introduced by 0.16.0, hence update
this dependency.
This reverts commit ddbd9629b3188c9f08d023829683c40ab9e1448b.
Signed-off-by: Matthias Beyer <matthias.beyer@atos.net>
|
|
This reverts the dependency update, because indicatif 0.16 introduced a
reachable `unreachable!()` statement in their source code that we
actually hit when using `--hide-bars`.
This reverts commit 6ceccb679d9c2d19389c6c6eef792d8db9086f31.
Signed-off-by: Matthias Beyer <mail@beyermatthias.de>
|
|
|
|
Because the interfaces of indicatif have changed, this PR changes a lot of
calls in the codebase. (Yay, moving all the things!)
Signed-off-by: Matthias Beyer <matthias.beyer@atos.net>
|
|
This patch fixes the implementation of the perform_receive() helper function to
return Ok(false) if the channel was closed (recv() returned None) and there are
already errors in the error buffer.
This case wasn't handled before, which resulted in the inconvenience that when
the channel was closed but a task still performed a receive, it got a `None`,
checked its dependencies, and noted that dependencies were missing.
This resulted in a "Childs finished, but dependencies missing" error being
printed to the user, even though that was not actually the case.
This fix changes the implementation so that the `perform_receive()` function
simply returns `Ok(false)` if the channel was closed and there were errors.
The call to the function was hence moved _after_ the check whether errors were
received (in the caller), so that these errors are propagated appropriately.
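A simplified model of the fixed behavior (the function shape and types are assumptions, not butido's exact signatures):

```rust
// Hypothetical sketch: `received` models what recv() yielded, `errors` is
// the caller's error buffer. When the channel is closed (`None`) and errors
// were already collected, report "done" (Ok(false)) instead of producing
// the misleading "dependencies missing" error.
fn perform_receive(received: Option<u32>, errors: &[String]) -> Result<bool, String> {
    match received {
        Some(_msg) => Ok(true),                  // keep receiving
        None if !errors.is_empty() => Ok(false), // closed, errors pending: stop quietly
        None => Err("Childs finished, but dependencies missing".to_string()),
    }
}

fn main() {
    let errors = vec!["job failed".to_string()];
    // Channel closed while errors are buffered: no misleading error.
    assert_eq!(perform_receive(None, &errors), Ok(false));
}
```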
Signed-off-by: Matthias Beyer <matthias.beyer@atos.net>
|
|
Signed-off-by: Matthias Beyer <matthias.beyer@atos.net>
|
|
This patch implements the feature to be able to pass author and commit hash
information from the repository to the container.
This can be used, for example, to set the packager information or the commit
hash of the package description inside the build container, if desired.
Signed-off-by: Matthias Beyer <matthias.beyer@atos.net>
|
|
|
|
This patch adds the ability to have more than one release store.
With this patch, a user can (in fact, has to) configure release store names in
the configuration file, and can then specify one of the configured names to
release the artifacts to.
This way, different release "channels" can be served, for example a stable
channel and a rolling release channel (although "channel" is not part of our
wording).
The code was adapted to be able to fetch releases from multiple release
directories, in the crate::db::find_artifact implementation, so that re-using
artifacts works across all release directories.
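The commit message does not spell out the configuration schema; a hypothetical snippet could look like this (the key name is illustrative, not necessarily butido's real schema):

```toml
# Hypothetical example: the key name is an assumption.
# Each entry names one release directory ("channel").
release_stores = [ "stable", "rolling" ]
```

A release command would then name one of these stores as the target for the artifacts, and artifact lookup searches all of them.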
Signed-off-by: Matthias Beyer <matthias.beyer@atos.net>
|
|
Signed-off-by: Matthias Beyer <matthias.beyer@atos.net>
Tested-by: Matthias Beyer <matthias.beyer@atos.net>
|
|
The concept of the MergedStores type was fine in the beginning, but it became
more and more complex to use properly, and most of the time we used the
release/staging stores directly anyway.
So this removes the MergedStores type, which is a preparation for the change to
have multiple release stores.
Signed-off-by: Matthias Beyer <matthias.beyer@atos.net>
|
|
This patch changes the code so that the MergedStores object is known in the
endpoints and job execution code.
This is necessary, because the jobs must be able to load artifacts from the
release store as well, for example if a library was released in another submit
and we can reuse it because the script for that library didn't change.
For that, the interface of StoreRoot::join() was changed
- -> Result<FullArtifactPath<'a>>
+ -> Result<Option<FullArtifactPath<'a>>>
where it now returns Ok(None) if the requested artifact does not exist.
This was necessary to try-joining a path on the store root of the staging store,
and if there is no such file continue with try-joining the path on the release
store.
The calling code got a bit more complex with that change, though.
Next, the MergedStores got a `derive(Clone)` because clone()ing it is cheap
(two `Arc`s) and necessary to pass it easily to the jobs.
Each instance of the code where the staging store was consulted was changed to
consult the release store as well.
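The try-join fallback can be sketched with plain `PathBuf`s standing in for `StoreRoot` and `FullArtifactPath` (hypothetical code):

```rust
use std::path::{Path, PathBuf};

// Sketch: joining returns Ok(None) if the file does not exist, so the
// caller can fall back from the staging store to the release store.
fn try_join(root: &Path, artifact: &Path) -> Result<Option<PathBuf>, std::io::Error> {
    let full = root.join(artifact);
    if full.exists() { Ok(Some(full)) } else { Ok(None) }
}

fn find_in_stores(
    staging: &Path,
    release: &Path,
    artifact: &Path,
) -> Result<Option<PathBuf>, std::io::Error> {
    // Staging store first, release store second.
    if let Some(p) = try_join(staging, artifact)? {
        return Ok(Some(p));
    }
    try_join(release, artifact)
}

fn main() -> Result<(), std::io::Error> {
    let found = find_in_stores(
        Path::new("/no-such-staging-root"),
        Path::new("/no-such-release-root"),
        Path::new("example-artifact.tar"),
    )?;
    assert_eq!(found, None); // nothing exists under either root
    Ok(())
}
```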
Signed-off-by: Matthias Beyer <matthias.beyer@atos.net>
|
|
This patch rewrites the replacement searching algorithm, to try the staging
store first and then the release store.
It does so by sorting the artifacts by whether they are in the staging store or
not (hence the FullArtifactPath::is_in_staging_store() function).
It filters out not-found artifacts and returns only ones that were found in
either store.
Signed-off-by: Matthias Beyer <matthias.beyer@atos.net>
Tested-by: Matthias Beyer <matthias.beyer@atos.net>
|
|
replacement artifacts
Signed-off-by: Matthias Beyer <matthias.beyer@atos.net>
|
|
Signed-off-by: Matthias Beyer <matthias.beyer@atos.net>
|
|
This patch fixes error returning from the JobHandle::run() implementation.
Somehow, over the many rewrites, the error returning ended up as code that did
not differentiate between script run errors and scheduling errors.
This patch fixes this by making the JobHandle::run() method return
Result<Result<Vec<ArtifactPath>>>
where the outer Result is the scheduling result and the inner Result is the
script result.
The calling code was adapted accordingly.
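From the caller's perspective, the nested type can be consumed roughly like this (a sketch; the helper and the error messages are made up):

```rust
// Sketch of the two-level error distinction (types simplified): the outer
// Result is the scheduling outcome, the inner one the script outcome.
type ArtifactPath = std::path::PathBuf;

fn run_job(script_ok: bool, schedule_ok: bool) -> Result<Result<Vec<ArtifactPath>, String>, String> {
    if !schedule_ok {
        return Err("could not schedule job on any endpoint".to_string());
    }
    if !script_ok {
        return Ok(Err("build script exited non-zero".to_string()));
    }
    Ok(Ok(vec![ArtifactPath::from("pkg-1.0.tar")]))
}

fn main() {
    // The caller can now tell the two failure modes apart.
    match run_job(false, true) {
        Err(e) => panic!("scheduling error: {}", e),
        Ok(Err(e)) => println!("script error: {}", e),
        Ok(Ok(artifacts)) => println!("built {} artifacts", artifacts.len()),
    }
}
```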
Signed-off-by: Matthias Beyer <matthias.beyer@atos.net>
|
|
Signed-off-by: Matthias Beyer <matthias.beyer@atos.net>
|
|
Because the JobTask object gets _all_ transitive dependencies as well, this
number might be higher than the expected (direct) dependencies, resulting in
something like
Waiting (11/3)
which is not nice UI-wise. Hence, limit the counter to direct dependencies here.
We cannot count transitive dependencies at this point, so this is the best we
can do right now.
Signed-off-by: Matthias Beyer <matthias.beyer@atos.net>
Tested-by: Matthias Beyer <matthias.beyer@atos.net>
|
|
Signed-off-by: Matthias Beyer <matthias.beyer@atos.net>
Tested-by: Matthias Beyer <matthias.beyer@atos.net>
|
|
|
|
|
|
This patch outsources the construction of a JobTask object to a constructor
function.
This fixes an inconvenience where the progress bar was not updated until the
JobTask began run()ing.
Now we set a message for the progress bar in the constructor, making the user
experience a bit nicer.
Signed-off-by: Matthias Beyer <matthias.beyer@atos.net>
|
|
Signed-off-by: Matthias Beyer <matthias.beyer@atos.net>
|
|
Signed-off-by: Matthias Beyer <matthias.beyer@atos.net>
|
|
The comment in the code describes the change well enough.
Signed-off-by: Matthias Beyer <matthias.beyer@atos.net>
|
|
This patch follows-up on the shrinking of the `Artifact` type and removes it
entirely.
The type is not needed. Only the `ArtifactPath` type is needed, which is a thin
wrapper around `PathBuf`, ensuring that the path is relative to the store root.
The `Artifact` type used `pom` to parse the name and version of the package from
the `ArtifactPath` object it contained, which resulted in the restriction that
the path must always be
<name>-<version>...
This should not be a requirement and actually caused issues with a package
named "foo-bar", for example.
Signed-off-by: Matthias Beyer <matthias.beyer@atos.net>
Tested-by: Matthias Beyer <matthias.beyer@atos.net>
|
|
This fixes a bug which caused butido to crash when the `else` branch in the
snippet changed here was hit.
This was because in the `else` case, we mutably borrowed what was already
mutably borrowed, causing a panic.
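The class of bug can be demonstrated in isolation with `RefCell` (an illustration, not butido's actual snippet):

```rust
use std::cell::RefCell;

fn main() {
    // Taking a second mutable borrow while the first is alive panics at
    // runtime with RefCell; try_borrow_mut() makes the conflict visible.
    let data = RefCell::new(vec![1, 2, 3]);

    let first = data.borrow_mut();
    // A second `data.borrow_mut()` here would panic ("already borrowed").
    assert!(data.try_borrow_mut().is_err());
    drop(first);

    // Once the first borrow is released, borrowing again is fine.
    assert!(data.try_borrow_mut().is_ok());
}
```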
Signed-off-by: Matthias Beyer <matthias.beyer@atos.net>
Tested-by: Matthias Beyer <matthias.beyer@atos.net>
|
|
This patch changes the implementation to use a HashMap<Uuid, Error> for
error propagation.
The rationale behind this is the same as with the change to HashMap for
the artifacts: Errors are not getting propagated twice if they arrive at
a job from different child jobs.
This is technically not possible yet, because we propagate errors to only
one parent. But if the implementation changes one day (which it could),
this is one thing fewer we have to think about.
Signed-off-by: Matthias Beyer <mail@beyermatthias.de>
|
|
This changes the implementation to use a hashmap for storing the
results.
This way, we are not storing the same result twice.
   .-> C -,
  /        \
 D          >-> A
  \        /
   `-> B -´
In this scenario, D gets the result from A propagated via B and via C.
Because of this, it would propagate the results from A twice to its
caller (the orchestrator itself).
By using a hashmap, we prevent this from happening on the JobTask level,
thus, artifacts are not getting reported to the user twice.
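The deduplication effect can be sketched like this (with `String`s standing in for `uuid::Uuid` and the artifact type):

```rust
use std::collections::HashMap;

fn main() {
    // Keying results by job UUID deduplicates them: when A's result arrives
    // via both B and C, the second insert merely overwrites the identical
    // first one instead of appending a duplicate.
    let mut results: HashMap<String, Vec<String>> = HashMap::new();

    let from_b = ("job-a".to_string(), vec!["libA-1.0.tar".to_string()]);
    let from_c = from_b.clone(); // same result, propagated along a second path

    results.insert(from_b.0, from_b.1);
    results.insert(from_c.0, from_c.1);

    assert_eq!(results.len(), 1); // reported to the caller only once
}
```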
Signed-off-by: Matthias Beyer <mail@beyermatthias.de>
Tested-by: Matthias Beyer <mail@beyermatthias.de>
|
|
This patch reimplements the result-propagation to allow multiple
senders.
          .-> E
         /
    .-> C -<
   /        \
  D          >-> A
   \        /
    `-> B -´
In this scenario, A needs to send its results to multiple other jobs.
This patch changes the implementation so that each JobTask has a Vec<_>
of senders, which allows sending results to multiple parents.
In case of error, the error is only propagated to one parent, because it
doesn't matter. If there is an error, we fail and abort the whole tree
anyways, so the other parents don't need to be notified of the error.
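A minimal sketch of the fan-out, using `std::sync::mpsc` in place of the async channels butido actually uses:

```rust
use std::sync::mpsc;

fn main() {
    // One child job holds a Vec of senders, one per parent, and sends its
    // result to every parent.
    let (tx_b, rx_b) = mpsc::channel();
    let (tx_c, rx_c) = mpsc::channel();
    let senders: Vec<mpsc::Sender<String>> = vec![tx_b, tx_c];

    let result = "libA-1.0.tar".to_string();
    for tx in &senders {
        tx.send(result.clone()).unwrap();
    }

    // Both parents receive the same result.
    assert_eq!(rx_b.recv().unwrap(), "libA-1.0.tar");
    assert_eq!(rx_c.recv().unwrap(), "libA-1.0.tar");
}
```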
Signed-off-by: Matthias Beyer <mail@beyermatthias.de>
|
|
This patch reimplements the package orchestration functionality to rely
on a DAG rather than a tree.
      A
     / \
    B   E
   / \   \
  C   D   F
Before this change, the structure the packages were organized in for a
build was a tree.
That did work reasonably well for the initial development of butido, because
this is a simple case and the implementation is rather simple, too.
But packages and their dependencies are not always organized in a tree.
Most of the time, they are organized in a DAG:
   .-> C -,
  /        \
 D          > A
  \        /
   `-> B -´
This is a real-world example: A could be a common crypto-library that I
do not want to name here.
B and C could be libraries that use said crypto-library, and D could be a
program that uses B and C.
Because said crypto-library takes rather long to build, building it twice and
throwing one result away is a no-go.
A DAG as the organizational structure makes that issue go away entirely.
Also, we can later implement checks whether the DAG contains multiple
versions of the same library, if that is undesirable.
The change itself is rather big, frankly because it is a non-trivial change
to replace the whole data structure and its handling in the orchestrator
code.
First of all, we introduce the "daggy" library, which provides the DAG
implementation on top of the popular "petgraph" library.
The package `Tree` data structure was replaced by a package `Dag` data
structure. This type implements the heavy lifting that is needed to
load a package and all its dependencies from the `Repository` object.
The `JobTree` was also reimplemented; because `daggy::Dag` provides a
convenient `map()` function, the implementation which transforms the
package `Dag` into a job `Dag` is rather trivial.
`crate::job::Dag` then provides the convenience `iter()` function to
iterate over all elements in the DAG and providing a `JobDefinition`
object for each node.
The topology in which we traverse the DAG is not an issue, as we need to
create tasks for all `JobDefinition`s anyways, so we do not care about
traversal topology at all.
The `crate::package::Package` type got a `Hash` implementation, which
is necessary to keep track of the mappings while reading the DAG from
the repository.
The implementation does not create the edges between the nodes in the
DAG right when inserting, but afterwards.
To keep track of the `daggy::NodeIndex`es, it keeps a mapping
Package -> NodeIndex
in a HashMap. Thus, `Package` must implement `std::hash::Hash`.
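The two-phase construction can be sketched with a plain adjacency list, with `usize` indices standing in for `daggy::NodeIndex` (hypothetical code):

```rust
use std::collections::HashMap;

fn main() {
    // Phase 1: insert all packages as nodes, remembering Package -> index.
    // This mapping is why the package type must be hashable.
    let packages = ["A", "B", "C", "D"];
    let mut index_of: HashMap<&str, usize> = HashMap::new();
    for (i, p) in packages.iter().enumerate() {
        index_of.insert(p, i);
    }

    // Phase 2: add the edges afterwards, resolved through the mapping.
    // D depends on B and C; B and C both depend on A (the diamond above).
    let mut edges: Vec<(usize, usize)> = Vec::new();
    for (from, to) in [("D", "B"), ("D", "C"), ("B", "A"), ("C", "A")] {
        edges.push((index_of[from], index_of[to]));
    }

    assert_eq!(edges.len(), 4);
    assert_eq!(edges[2], (index_of["B"], index_of["A"]));
}
```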
Signed-off-by: Matthias Beyer <mail@beyermatthias.de>
Tested-by: Matthias Beyer <mail@beyermatthias.de>
|
|
Signed-off-by: Matthias Beyer <mail@beyermatthias.de>
|
|
This patch changes the propagation of results, so that the UUIDs of the jobs
producing the artifacts are propagated through the whole tree.
The issue at hand was that, with a dependency tree like
C -> B -> A
the results from A were propagated to B, and the results from B were propagated
to C. However, the results from A were simply included in the results from B,
and the UUID from A was dropped.
This was an issue because the implementation waited for _all_ dependencies
(direct and transitive) by their job UUID.
This means that C waited on a UUID that described the Job for A, but never
received it, which caused everything to fail.
This patch changes the algorithm, to not only report the own UUID and all
artifacts of a job, but all artifacts with their UUID attached, which solves the
issue.
The root of the tree (the `Orchestrator`) simply drops the UUIDs before
returning the artifacts to its caller.
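The propagation of UUID-tagged artifacts can be sketched like this (with `String`s standing in for `uuid::Uuid` and the artifact type):

```rust
use std::collections::HashMap;

fn main() {
    // Each job forwards not just its own (uuid, artifacts) pair but
    // everything it received, so transitive dependents can wait on the
    // right UUIDs.
    // A's results as B receives them:
    let mut from_a: HashMap<String, Vec<String>> = HashMap::new();
    from_a.insert("uuid-a".into(), vec!["libA.tar".into()]);

    // B merges A's entries with its own before sending on:
    let mut from_b = from_a.clone();
    from_b.insert("uuid-b".into(), vec!["libB.tar".into()]);

    // C can now see A's UUID even though it only talks to B.
    assert!(from_b.contains_key("uuid-a"));
    assert!(from_b.contains_key("uuid-b"));
}
```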
Signed-off-by: Matthias Beyer <matthias.beyer@atos.net>
Tested-by: Matthias Beyer <matthias.beyer@atos.net>
|
|
Signed-off-by: Matthias Beyer <matthias.beyer@atos.net>
|
|
constructing error object
Signed-off-by: Matthias Beyer <matthias.beyer@atos.net>
|
|
This patch fixes progress reporting. Because our progress-bar-creating helper
initializes the bar with length 1, we have to set the length here manually.
The bar has to be added to the multibar object right away, because otherwise it
will be rendered to the output, which gives us an ugly dead progress bar.
If the length is set after adding it to the multibar object, this does not
happen.
Signed-off-by: Matthias Beyer <matthias.beyer@atos.net>
Tested-by: Matthias Beyer <matthias.beyer@atos.net>
|
|
Signed-off-by: Matthias Beyer <matthias.beyer@atos.net>
|