Age | Commit message | Author |
|
Signed-off-by: Matthias Beyer <matthias.beyer@atos.net>
Tested-by: Matthias Beyer <matthias.beyer@atos.net>
|
|
This should be equivalent to the previous version.
The first if-block should be equivalent because of the changed join schema,
which should select all submits for jobs where packages have the appropriate
name.
Signed-off-by: Matthias Beyer <matthias.beyer@atos.net>
|
|
Signed-off-by: Matthias Beyer <matthias.beyer@atos.net>
|
|
Signed-off-by: Matthias Beyer <matthias.beyer@atos.net>
|
|
This patch adds a configurable timeout value (default: 10 seconds)
to the Endpoint struct.
This way, we get an error if the connection to the endpoint stalls.
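A minimal sketch of what such a struct might look like; the field names, the constructor, and the URI are illustrative, not butido's actual code:

```rust
use std::time::Duration;

/// Hypothetical endpoint configuration with a connect timeout.
#[derive(Debug)]
struct Endpoint {
    name: String,
    uri: String,
    /// How long to wait before giving up on a stalled connection.
    timeout: Duration,
}

impl Endpoint {
    fn new(name: &str, uri: &str, timeout_secs: Option<u64>) -> Self {
        Endpoint {
            name: name.to_string(),
            uri: uri.to_string(),
            // Fall back to the 10-second default mentioned above.
            timeout: Duration::from_secs(timeout_secs.unwrap_or(10)),
        }
    }
}

fn main() {
    let ep = Endpoint::new("local", "http://localhost:2375", None);
    assert_eq!(ep.timeout, Duration::from_secs(10));
    println!("endpoint: {:?}", ep);
}
```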
Signed-off-by: Christoph Prokop <christoph.prokop@atos.net>
Pair-programmed-with: Matthias Beyer <matthias.beyer@atos.net>
Suggested-by: Matthias Beyer <matthias.beyer@atos.net>
Signed-off-by: Matthias Beyer <mail@beyermatthias.de>
|
|
Signed-off-by: Christoph Prokop <christoph.prokop@atos.net>
Signed-off-by: Matthias Beyer <matthias.beyer@atos.net>
|
|
|
|
Signed-off-by: Matthias Beyer <matthias.beyer@atos.net>
|
|
Because renaming does not work across filesystem boundaries.
Signed-off-by: Matthias Beyer <matthias.beyer@atos.net>
|
|
As the comment in the code explains, we're always loading the endpoints from the
same file and feeding them to the scheduler in the same order.
Because of that, we always schedule to the first endpoint until it is full and
then to the second.
To spread the load a bit more evenly over the endpoints, shuffle the configurations
before connecting, so the scheduler does not always (each time butido
is called) use the same endpoint as the first endpoint.
This is not a perfect solution, but a simple and working one.
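A sketch of the idea; the real code would presumably use the `rand` crate's shuffling, whereas this dependency-free version seeds a toy xorshift generator from the clock:

```rust
use std::time::{SystemTime, UNIX_EPOCH};

/// Minimal Fisher-Yates shuffle seeded from the clock, to keep this
/// sketch dependency-free. Not cryptographic, just enough to vary the
/// endpoint order between invocations.
fn shuffle<T>(items: &mut Vec<T>) {
    let mut seed = SystemTime::now()
        .duration_since(UNIX_EPOCH)
        .unwrap()
        .subsec_nanos() as u64;
    for i in (1..items.len()).rev() {
        // xorshift step for a cheap pseudo-random index
        seed ^= seed << 13;
        seed ^= seed >> 7;
        seed ^= seed << 17;
        items.swap(i, (seed as usize) % (i + 1));
    }
}

fn main() {
    let mut endpoints = vec!["ep1", "ep2", "ep3", "ep4"];
    shuffle(&mut endpoints);
    // All endpoints are still present, just (likely) in another order.
    assert_eq!(endpoints.len(), 4);
    println!("{:?}", endpoints);
}
```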
Signed-off-by: Matthias Beyer <matthias.beyer@atos.net>
|
|
This function is only needed in the PackageVersionConstraint::parser() function,
thus make it private here so it is not accidentally used somewhere else.
Signed-off-by: Matthias Beyer <matthias.beyer@atos.net>
|
|
This patch rewrites the unpacking of the tar from the stream so that
the unpacking function returns the written paths, and therefore we don't have to
pass over the tar twice.
Signed-off-by: Matthias Beyer <matthias.beyer@atos.net>
|
|
The issue here is that we copy all build results (packages) in the container to
/outputs and then butido uses that directory to fetch the outputs of the build.
But because of how the docker API works, we get a TAR stream from docker that
_contains_ the /outputs directory. And of course, we don't want that.
Until now, that was no issue. But it has become one now that we start adopting
butido for our real-world scenarios.
This patch filters out the /outputs portion of the paths from the tar
archive when writing everything to disk.
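The path filtering could look roughly like this; the function name and the pass-through fallback are assumptions, not the actual butido implementation:

```rust
use std::path::{Path, PathBuf};

/// Strip the leading "/outputs" component from a path coming out of
/// the docker TAR stream, so the file lands at the right place on disk.
/// Paths without the prefix are passed through unchanged.
fn strip_outputs_prefix(p: &Path) -> PathBuf {
    p.strip_prefix("/outputs")
        .or_else(|_| p.strip_prefix("outputs"))
        .map(Path::to_path_buf)
        .unwrap_or_else(|_| p.to_path_buf())
}

fn main() {
    assert_eq!(
        strip_outputs_prefix(Path::new("/outputs/pkg-1.0.tar.gz")),
        PathBuf::from("pkg-1.0.tar.gz")
    );
    assert_eq!(
        strip_outputs_prefix(Path::new("other/file")),
        PathBuf::from("other/file")
    );
}
```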
Signed-off-by: Matthias Beyer <matthias.beyer@atos.net>
Tested-by: Matthias Beyer <matthias.beyer@atos.net>
|
|
|
|
Because commit 5c36c119f9448baf6bfe5245c6ebac1aa09d5b43 was not enough and was
actually buggy, this fixes the patch-file-finding algorithm _again_.
The approach changed a bit. It actually introduces even more overhead to the
loading algorithm, because of constant type-conversions. But it's as good as it
can be right now.
First of all, we collect the patches from before the merge into a
Vec<PathBuf>. Each of these files exists.
Then we check whether the new patch exists, and if it does not, we check whether
a file with the same filename exists in the patches from before the merge. If
it does, we apparently dragged the entry along from the previous recursion step. In
this case, the patches from before the merge were valid, and the recursion
invalidated the path.
For example:
/pkg.toml with patches = [ "a" ]
/a
/sub/pkg.toml with no patches=[]
In the recursion for /sub/pkg.toml, we get ["a"] in the patches array, because
that's how the layered configuration works.
But because the path is invalid, we check whether the array from before the merge
(that is, the patches array from /pkg.toml) contains an equally named file - which
it does. So the array from before the merge is the correct one.
I did some tests on this and it seems to work correctly, but more edge-cases may
exist.
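The fallback logic described above, sketched with an injected `exists` check instead of real filesystem access; all names here are illustrative, not butido's actual code:

```rust
use std::path::PathBuf;

/// If a rewritten patch path does not exist, but a file with the same
/// filename existed in the pre-merge patch list, keep the pre-merge
/// list. Otherwise a missing patch is a hard error. The `exists`
/// closure stands in for a real filesystem check.
fn resolve_patches(
    before_merge: Vec<PathBuf>, // all of these exist
    after_merge: Vec<PathBuf>,
    exists: impl Fn(&PathBuf) -> bool,
) -> Result<Vec<PathBuf>, String> {
    let mut result = Vec::new();
    for patch in &after_merge {
        if exists(patch) {
            result.push(patch.clone());
        } else if before_merge.iter().any(|p| p.file_name() == patch.file_name()) {
            // The entry was dragged along from the previous recursion
            // step: the pre-merge path was valid, the recursion
            // invalidated it. Use the pre-merge list.
            return Ok(before_merge);
        } else {
            return Err(format!("Patch does not exist: {}", patch.display()));
        }
    }
    Ok(result)
}

fn main() {
    // /pkg.toml declared patches = ["a"]; /sub/pkg.toml inherited the
    // entry, but the rewritten path /sub/a does not exist on disk.
    let before = vec![PathBuf::from("/pkg/a")];
    let after = vec![PathBuf::from("/sub/a")];
    let resolved = resolve_patches(before.clone(), after, |p| p == &PathBuf::from("/pkg/a"));
    assert_eq!(resolved, Ok(before));
}
```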
Signed-off-by: Matthias Beyer <matthias.beyer@atos.net>
Tested-by: Matthias Beyer <matthias.beyer@atos.net>
Fixes: 5c36c119f9448baf6bfe5245c6ebac1aa09d5b43 ("Fix: Error out if a patch file is missing")
|
|
Signed-off-by: Matthias Beyer <matthias.beyer@atos.net>
|
|
Signed-off-by: Matthias Beyer <matthias.beyer@atos.net>
|
|
Signed-off-by: Matthias Beyer <matthias.beyer@atos.net>
|
|
Signed-off-by: Matthias Beyer <matthias.beyer@atos.net>
|
|
This patch rewrites the filtering for environment variables in the "db jobs"
subcommand.
The output does not contain the environment variables anymore, which is a
drawback, but the implementation is _way_ easier to understand, which I
consider a bigger advantage.
The implementation now actually queries the database twice: first it gets all
job ids from the JobEnv->EnvVar mapping and then builds a filter based on these
IDs for the job output.
Signed-off-by: Matthias Beyer <matthias.beyer@atos.net>
|
|
Signed-off-by: Matthias Beyer <matthias.beyer@atos.net>
|
|
This patch simplifies the implementation by using the `.into_boxed()` function
from the diesel query builder and applying the filters from the CLI one-by-one.
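The same one-by-one filtering idea, illustrated with plain boxed closures instead of diesel's boxed queries; the `Job` struct and the field names are made up for the example:

```rust
/// Stand-in for a database row; not butido's actual model.
struct Job {
    id: i32,
    package: String,
    success: bool,
}

/// A row matches if it satisfies every predicate, mirroring how
/// filters are chained onto a boxed diesel query.
fn matching_ids(jobs: &[Job], predicates: &[Box<dyn Fn(&Job) -> bool>]) -> Vec<i32> {
    jobs.iter()
        .filter(|j| predicates.iter().all(|p| p(j)))
        .map(|j| j.id)
        .collect()
}

fn main() {
    // Pretend these came from CLI flags; `None` means "not filtered".
    let package_filter: Option<String> = Some("curl".to_string());
    let success_filter: Option<bool> = None;

    // Each present flag adds exactly one predicate.
    let mut predicates: Vec<Box<dyn Fn(&Job) -> bool>> = Vec::new();
    if let Some(pkg) = package_filter {
        predicates.push(Box::new(move |j| j.package == pkg));
    }
    if let Some(s) = success_filter {
        predicates.push(Box::new(move |j| j.success == s));
    }

    let jobs = vec![
        Job { id: 1, package: "curl".to_string(), success: true },
        Job { id: 2, package: "zlib".to_string(), success: false },
    ];
    assert_eq!(matching_ids(&jobs, &predicates), vec![1]);
}
```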
Signed-off-by: Matthias Beyer <matthias.beyer@atos.net>
|
|
|
|
Because we can just use the actual interface function here.
Signed-off-by: Matthias Beyer <matthias.beyer@atos.net>
|
|
Signed-off-by: Matthias Beyer <matthias.beyer@atos.net>
|
|
It is easier for the caller (because it is more visible what happens) to call
`Source::path().exists()`.
Signed-off-by: Matthias Beyer <matthias.beyer@atos.net>
|
|
Signed-off-by: Matthias Beyer <matthias.beyer@atos.net>
|
|
This patch fixes a bug where a patch file does not exist.
Before this patch, we simply ignored non-existing files in the iterator,
because during development of this algorithm, that seemed to be the right idea
given the recursion that is happening.
The path rewriting that happens in the recursion, which rewrites the paths
to the patches during the recursive loading of the packages, used to
yield invalid paths at some point, which could simply be ignored. That was the
case before this patch.
But because the scheme of how this all works was changed during the development
of the recursive loading, it does not yield invalid paths anymore.
Hence, we can be sure that the file is either there or it is not - which is an
error then.
I have to say that I'm not particularly good with recursion, but as far as my
tests go, this seems to work as intended now.
Signed-off-by: Matthias Beyer <matthias.beyer@atos.net>
Tested-by: Matthias Beyer <matthias.beyer@atos.net>
|
|
Signed-off-by: Matthias Beyer <matthias.beyer@atos.net>
|
|
This patch rewrites the source verification output to be one progress bar that
prints all necessary information: That is, only a one-line success message, or
multiple lines in case of error.
Signed-off-by: Matthias Beyer <matthias.beyer@atos.net>
Tested-by: Matthias Beyer <matthias.beyer@atos.net>
|
|
This patch tries to work around a longer-than-1-sec blocking future while
waiting for the channel .recv() call in the LogReceiver.
The issue at hand is: If the channel does not produce log output for longer than
1 second, the progress bar won't be `tick()`ed for that amount of time.
That leads to progress bars that seem to block (no update of the time in the
progress bar output), which might confuse users.
This patch works around that by wrapping the recv() in a timeout future and then
catching the timeout, pinging the progress bar, and trying to `recv()` again.
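The same pattern sketched with std's blocking channel instead of the async channel and timeout future used in butido; `on_tick` stands in for the real progress bar's `tick()` call, and all names are illustrative:

```rust
use std::sync::mpsc::{channel, Receiver, RecvTimeoutError};
use std::thread;
use std::time::Duration;

/// Receive log lines until the sender hangs up. If nothing arrives
/// within one second, tick the progress bar and retry the receive,
/// so the bar never appears frozen.
fn drain_logs(rx: Receiver<String>, mut on_tick: impl FnMut()) -> Vec<String> {
    let mut lines = Vec::new();
    loop {
        match rx.recv_timeout(Duration::from_secs(1)) {
            Ok(line) => lines.push(line),
            Err(RecvTimeoutError::Timeout) => on_tick(), // keep the bar alive, then retry
            Err(RecvTimeoutError::Disconnected) => break, // sender gone: we are done
        }
    }
    lines
}

fn main() {
    let (tx, rx) = channel();
    thread::spawn(move || {
        tx.send("line 1".to_string()).unwrap();
        thread::sleep(Duration::from_millis(10));
        tx.send("line 2".to_string()).unwrap();
        // tx is dropped here, which ends the loop in drain_logs()
    });
    let lines = drain_logs(rx, || {});
    assert_eq!(lines, vec!["line 1".to_string(), "line 2".to_string()]);
}
```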
Signed-off-by: Matthias Beyer <matthias.beyer@atos.net>
|
|
Not yet perfectly nice, but almost there.
Signed-off-by: Matthias Beyer <mail@beyermatthias.de>
|
|
And replace it with a getset::Getters implementation.
Signed-off-by: Matthias Beyer <mail@beyermatthias.de>
|
|
Signed-off-by: Matthias Beyer <mail@beyermatthias.de>
|
|
Signed-off-by: Matthias Beyer <matthias.beyer@atos.net>
|
|
Use the TryFrom trait rather than a `::new()` constructor that can fail.
This is way more idiomatic.
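The idiom in isolation, with a made-up type:

```rust
use std::convert::TryFrom;

/// Illustrative sketch: a fallible construction expressed via `TryFrom`
/// instead of a `::new()` that returns `Result`. The type is invented;
/// the point is the idiom.
#[derive(Debug, PartialEq)]
struct PackageName(String);

impl TryFrom<&str> for PackageName {
    type Error = String;

    fn try_from(s: &str) -> Result<Self, Self::Error> {
        if s.is_empty() {
            Err("package name must not be empty".to_string())
        } else {
            Ok(PackageName(s.to_string()))
        }
    }
}

fn main() {
    assert!(PackageName::try_from("curl").is_ok());
    assert!(PackageName::try_from("").is_err());
}
```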
Signed-off-by: Matthias Beyer <matthias.beyer@atos.net>
|
|
Signed-off-by: Matthias Beyer <matthias.beyer@atos.net>
|
|
Rewrite this function to use the constructor of PackageVersionConstraint instead
of getting the parser and using it directly, because that shouldn't be allowed anyway.
Signed-off-by: Matthias Beyer <matthias.beyer@atos.net>
|
|
Signed-off-by: Matthias Beyer <matthias.beyer@atos.net>
|
|
Signed-off-by: Matthias Beyer <matthias.beyer@atos.net>
|
|
|
|
This is an embarrassing bug. :-/
Signed-off-by: Matthias Beyer <matthias.beyer@atos.net>
|
|
This is because the "sha1" crate seems to be unmaintained (no release in 3
years), whereas the "sha-1" crate seems to be actively maintained in the context
of a bigger Rust crypto project.
Signed-off-by: Matthias Beyer <mail@beyermatthias.de>
Tested-by: Matthias Beyer <mail@beyermatthias.de>
|
|
Signed-off-by: Matthias Beyer <matthias.beyer@atos.net>
|
|
|
|
Signed-off-by: Matthias Beyer <matthias.beyer@atos.net>
|
|
This patch rewrites the endpoint configuration format to be a map.
The problem here is that a list of endpoints cannot easily be used with layered
configuration, where a part of the configuration lives in the XDG config home of
the user and the rest lives in the repository.
With this patch, the endpoints are configured with a map instead of an array,
which makes it less complicated to overwrite.
The name of an endpoint is now the key in the map.
A type `EndpointName` was introduced to take advantage of strong typing. Thus,
the patch touches a bit more code to adapt to the new type in use.
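A hypothetical before/after of such a configuration; the key names are illustrative, not butido's actual schema:

```toml
# Before: a list entry, which layered configuration cannot easily
# override per-entry (the whole array wins or loses as one unit):
#
#   [[endpoints]]
#   name = "ep1"
#   uri = "http://localhost:2375"

# After: a map keyed by endpoint name, so a user's XDG config can
# overwrite a single endpoint from the repository configuration.
[endpoints.ep1]
uri = "http://localhost:2375"

[endpoints.ep2]
uri = "http://buildhost:2375"
```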
Signed-off-by: Matthias Beyer <matthias.beyer@atos.net>
Tested-by: Matthias Beyer <matthias.beyer@atos.net>
|
|
Signed-off-by: Matthias Beyer <matthias.beyer@atos.net>
|
|
Signed-off-by: Matthias Beyer <matthias.beyer@atos.net>
|
|
Signed-off-by: Matthias Beyer <matthias.beyer@atos.net>
Tested-by: Matthias Beyer <matthias.beyer@atos.net>
|