| author | Paul Emm. Katsoulakis <34388743+paulkatsoulakis@users.noreply.github.com> | 2019-06-09 11:50:54 +0300 |
|---|---|---|
| committer | GitHub <noreply@github.com> | 2019-06-09 11:50:54 +0300 |
| commit | abdd4b0e645681f35e2d5c01fd0f60d0b700965a (patch) | |
| tree | 61a2e1245ce32eb2d1bd3cea8e98815caf42783b /.travis | |
| parent | 0a509325e3c967ae679ae2f27dc480edb5a1dc77 (diff) | |
netdata/packaging: Introducing automatic binary packages generation and delivery for RPM types (Phase 1) (#6223)
* netdata/packaging: Introducing automatic binary packages generation and delivery for RPM types (Phase 1)
With this commit we introduce our packaging toolkits and workflows to automate the delivery of RPM packages to packagecloud.io,
our packaging content delivery infrastructure. At this stage we have only prepared the flows required to publish our latest stable release.
Missing items for now:
1) Packaging versioning - we won't be providing increasing version numbers for the same upstream netdata version
2) Nightlies - our nightlies won't be provided by this channel, yet
With this changeset, we also introduce the basic artifacts for DEB package generation, with the same gaps mentioned above.
Debian packaging is unlikely to be functional within this sprint, though.
Here's a more detailed list of the changes, as they were generated from the original branch:
89baa00b netdata/packaging: Prior to the PR, place the master branch as the place to execute packaging
acfd2804 export variables at before_script
b95b112b remove that README.md, we will cover details from the distributions document in packaging folder
e52ab82e Update README.md
8df1f8e9 [ci skip] There is no need for a package-all option. There is absolutely no uniformity on available images or distros per arch to do our job for all at the same time, so remove it for now
6fc1600c remove a few of the unavailable architectures from distros
7e2741de missed arch, we actually have i686
64c72b4e Add arm64 architecture as a build option (to be refined in a follow-up commit so that distros without arm64 or amd64 do not trigger a build)
ead672b4 silence shellcheck, also fix a shameful recommendation from shellcheck so that array parsing can work
986d0e00 you disappoint me shellcheck, shame on you
a5c11a75 remove debug ls command
4b5fade6 nit - too obvious to spot
7edb9215 nits and fixes
06cf10b9 fix the obvious miss - you need the full path appended before the container name, otherwise no folder..
d777f8ae Give me a break travis, what's wrong with you -- define packages directory at the yaml config level
f9756642 nits
ee253493 missed some old parts
1506105c we don't need the container root variable here, we just need to go through the base LXC directory
03dc231c restructure packaging preparation process
529cdedb simplify, unify logic around build arch handling between creation and building
17624c14 pull packages directory variable into the code. We calculate based on arch
c47ea326 remove redundant package type var, also hard exit when the right stuff is not found
8890041d Pull more arch-dependent variables out of yaml config and into the code
832fbf71 remove unused variable
b54d6ccb Move check lower
0b1a3a7e handle build arch requirement, plus some shellcheck fixes
07bb01fe missed this, should pass container name instead
1ab8a3a4 we don't use build arch within this scope, ditch the check
6d3df1b3 Add i386 packaging keywords
1b403680 Remove arch from the descriptions and the environment variables. We won't be building sections per arch in travis, to minimize config file size. For each distro we will always build both archs unless the commit message instructs us to build a specific architecture
8f38caa5 Drop support of 42.3
8e394906 it's word counting here, focus
d3f6009c fix yanking command
5b4d7b9a don't call gpg flag on all distros, focus
7e0f4d64 better listing when yanking packages
bdb2d3aa skipping gpg checks, for now
75841a67 use newer distro instead
a439a9f9 downgrade base dist
eba0fba4 Merge branch 'master' into package-cloud-deployer
241c0e10 nits and fixes
aad8d8bf fix naming, add update process in debian preparation
872b15af Separately list rpm and srpm contents for yanking
b3ef3a6f el/8 and fedora/31 not ready yet
d468cad7 Null commit
22135a2f fix stages for packaging
1464cd6b Null commit
5d44d4f9 implement yanking - when the same version is uploaded, remove to re-add
0a4e2e46 sync the repo first of all
9f71583c Generalize repo tool definition in yaml config, so that we adjust per distro as needed
3988eb74 adjustments to the distro list, also remove coreutils -- not important
51571968 check if it's just a permission issue
38fa8b2b adjust syntax
e08ebd2b Due to the peculiarity of the way we do things, script out the packages preparation so that we can sudo them
0a3ce2c1 Now that we solved all other issues, make sure you add rpm building dependencies to LXC
aa0b370e a couple of nits and fixes 1) we need sudo within the container 2) just use wget everywhere, don't mix up too many packages for the same thing 3) do not try to add packages again, our dependency script should take care of that. Single point of responsibility
a3f42e8d You need to request the home directory to be created (need to review under the different distros for the right syntax though)
d7a3d0d0 don't go and add the required packages manually, call our dependency scriptlet
5c0888b7 skip GPG checking for now
9aa64bbe don't create from the python lib
19f8cc6b Add more keywords for better grouping
0d5b89d7 nits
e260a57d python libs not in good shape, try running create from the command line instead
7fa58921 netdata/packaging/ci: Bingo - add container name on the RPM jobs
b99ff89e some nits and fixes, let's see
636057b2 bring back sudo, move to /var/lib/lxc, pass -E to sudo to preserve vars
1e487517 i wonder..
97583506 Update SuSE list
2e92649b fix doc - wrong distro details
6f53a4c2 Initial distro document -- WIP
d2ef5e4f Add some README stuff, also introduce yanking step in pre-deploy (empty script at this point)
465f293f Change the token
36431ab5 Add a package cloud wrapper (required for yanking RPMs)
2142a024 Auto-detect username for the beta deployment, so that others can use it. The secret key will remain as-is for now; we will revisit this too, so that others can beta test the implementation. Also provide for the production deployment, when running on master
ab5a85ad Add more special conditions, also fix spacing and style a little bit more
ce893eae build only on special conditions (branch name for now)
82340536 Attempt to find how to ref template in stages
5375cd93 one more thing - use ubuntu instead of download template, let's see
4ff26df4 Attempt to fetch lxc-templates (not installed by default)
52715ba2 Bring back original implementation - tests gave some light on the problem
4607e7b8 let's see, try a different template
4a453425 add some more debug stuff - will remove them once I figure it out
5ee2e03d Try to use the binaries for the creation, see what changes
95a22ef7 another re-arrangement
82855f91 Attempt to install a different version of libraries, as per another issue
fbda0d3a Attempt to debug weird container failure
d828de8c debugging
07bad55d reduce the noise: when it's not possible to create/modify the required directories, don't do anything
6a2bd46b rename
bba22329 netdata/health: shellchecking - SC2236
103a7df9 Adjust naming
ea70d482 Run on Xenial
c2b3ad77 You should sudo-create the folder
a2f00c26 Do not stop the build if one of the RPMs fails, we want all of them to attempt to build
c367ab6c Cleanup, bring the RPM template in too and let's see what happens
ba7b441b more nits -- add the right replacements
2dfbb3f5 move templated parts outside
3211bc11 another approach, missed something
2a47dd17 A more educated attempt based on other resources
670c35b7 first time using anchors, most likely it's wrong but got to see what travis will do
6c00628e Binary release flows and other side fixes
1d89e519 Just in case, don't clean up if the env is messed up
06b8de6f start building up deb structure and flow, do some more re-arrangements also
05164ba6 longshot - bring the distro vars to the top to attempt to build a matrix. Also a fix: create the package folder prior to populating it
c9334c24 Move RPM code to a separate folder, make room for the deb implementations
8474c823 do not copy over everything, only the RPM directories
13d8ac0a netdata/packages: attempt to copy over the folder to a different location
3cb0edf6 remove temporarily to check deploy process
79d3b69c Permissions
05c3d4bd Add listing of container fs contents to see what the result is
eeee2249 fix container path, missed adjusting this after changing the container name format
90a70b9c Add dependencies
34c3e1fa Don't forget to create the required rpmbuild structure
9bfb6026 reverse the order, so that we first add all required packages
663af0e5 Attempt to add all as user rather than w00t
0fa68530 more fixes 1) install wget within the container, so that we can then fetch our source 2) factor out the command execution, for cleaner code
23cc4517 better logging
da58043b skip formatting, do it the traditional way for now
818508c7 more fixes 1) escape underscore too. Actually the earlier complaint was about underscore, I wasn't paying attention 2) Parse all command results and make sure you break the script if something goes wrong, otherwise travis won't know about the failures
78b9be3a Add more debug messages and also fix the non-escaped character % in string
2037f2c5 fixes and next steps 1) Use sudo, attempt to make privileged containers this way in case there is something messed up with homedirs in travis (long shot) 2) Implement the build process for the next step (not tested, we'll see how it works) 3) Change the container name to something more specific to the build we are preparing, to help us identify the container more accurately (and avoid possible conflicts)
9dac37bf revert attempt, obviously it didn't provide any value
2d975a7d Attempt to catch any stacktrace, if any is raised
93826afe Reinstate skip_cleanup variable, as it's needed for deploy anyway
54fd126a That is weird, but it seems like lxc is there on subsequent runs; make sure you clear before you start
f876e6a8 adjustments 1) add version string statically for now, for testing 2) bring more vars in 3) create the container from python all the way
67df8903 Flesh out the workflow for building up the builder environment on one distro
0906b41e change distro, don't forget
17e78ee1 fix distro keyword
26ca17d7 Create an experimental stage and bring onboard an encrypted token
4b0d8ce3 Introduce an experimental package cloud deployment stage
* netdata/packaging: Remove hardcoded tmp directory as per Codacy feedback
* netdata/packaging: Update distributions document
Update DISTRIBUTIONS.md based on Codacy warnings (round 1)
* netdata/packaging: let's try to address a few errors
* Removing DISTRIBUTIONS.md from this PR
As this is part of a separate task, it will be added on a separate branch and linked to the respective task, to unblock RPM generation
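The README changes in this PR document a manual trigger for package builds: an empty commit whose subject starts with a bracketed `[Package ...]` prefix. A minimal sketch of creating such a trigger commit (the scratch repository and message text are illustrative, not part of this PR; in practice the commit is pushed to the netdata repository so Travis picks it up):

```shell
# Create an empty commit carrying the packaging trigger prefix.
# Only the bracketed prefix is parsed by CI; the rest is free text.
set -e
repo="$(mktemp -d)"
git init -q "$repo"
cd "$repo"
git -c user.name="Netdata builder" -c user.email="bot@netdata.cloud" \
    commit --allow-empty -q -m "[Package AMD64 RPM] Rebuild the current stable release"
git log -1 --pretty=%s   # print the trigger subject Travis will inspect
```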
Diffstat (limited to '.travis')
-rw-r--r-- | .travis/README.md | 46
-rwxr-xr-x | .travis/package_management/build_package_in_container.sh | 82
-rwxr-xr-x | .travis/package_management/create_lxc_for_build.sh | 101
-rwxr-xr-x | .travis/package_management/deb/configure_lxc_environment.py | 71
-rwxr-xr-x | .travis/package_management/deb/trigger_lxc_build.py | 58
-rw-r--r-- | .travis/package_management/functions.sh | 33
-rwxr-xr-x | .travis/package_management/package_cloud_wrapper.sh | 48
-rwxr-xr-x | .travis/package_management/prepare_packages.sh | 56
-rwxr-xr-x | .travis/package_management/rpm/configure_lxc_environment.py | 89
-rwxr-xr-x | .travis/package_management/rpm/trigger_lxc_build.py | 62
-rwxr-xr-x | .travis/package_management/yank_stale_rpm.sh | 35
11 files changed, 681 insertions, 0 deletions
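The diff below introduces `detect_arch_from_commit` in `.travis/package_management/functions.sh`, which maps the commit-message prefix to a `BUILD_ARCH` value. A simplified, self-contained sketch of that mapping (the sample message is illustrative):

```shell
# Simplified from detect_arch_from_commit in functions.sh (this PR):
# map the "[Package ...]" commit-message prefix to a build architecture.
TRAVIS_COMMIT_MESSAGE="[Package AMD64 RPM] Rebuild the current stable release"

case "${TRAVIS_COMMIT_MESSAGE}" in
    "[Package AMD64"*) BUILD_ARCH="amd64" ;;
    "[Package i386"*)  BUILD_ARCH="i386"  ;;
    "[Package arm64"*) BUILD_ARCH="arm64" ;;
    "[Package ALL"*)   BUILD_ARCH="all"   ;;
    *) echo "Not a packaging trigger: '${TRAVIS_COMMIT_MESSAGE}'" >&2; exit 1 ;;
esac

echo "Detected build architecture ${BUILD_ARCH}"
```

In the real script the detected value is exported so the create/build scripts that source the library can branch on it.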
diff --git a/.travis/README.md b/.travis/README.md index 03ac2edd62..b7b61ecb4f 100644 --- a/.travis/README.md +++ b/.travis/README.md @@ -95,3 +95,49 @@ During packaging we are preparing the release changelog information and run the ## Publish for release The publishing stage is the most complex part in publishing. This is the stage were we generate and publish docker images, prepare the release artifacts and get ready with the release draft. + +### Package Management workflows +As part of our goal to provide the best support to our customers, we have created a set of CI workflows to automatically produce +DEB and RPM for multiple distributions. These workflows are implemented under the templated stages '_DEB_TEMPLATE' and '_RPM_TEMPLATE'. +We currently plan to actively support the following Operating Systems, with a plan to further expand this list following our users needs. + +### Operating systems supported +The following distributions are supported +- Debian versions + - Buster (TBD - not released yet, check [debian releases](https://www.debian.org/releases/) for details) + - Stretch + - Jessie + - Wheezy + +- Ubuntu versions + - Disco + - Cosmic + - Bionic + - artful + +- Enterprise Linux versions (Covers Redhat, CentOS, and Amazon Linux with version 6) + - Version 8 (TBD) + - Version 7 + - Version 6 + +- Fedora versions + - Version 31 (TBD) + - Version 30 + - Version 29 + - Version 28 + +- OpenSuSE versions + - 15.1 + - 15.0 + +- Gentoo distributions + - TBD + +### Architectures supported +We plan to support amd64, x86 and arm64 architectures. As of June 2019 only amd64 and x86 will become available, as we are still working on solving issues with the architecture. + +The Package deployment can be triggered manually by executing an empty commit with the following message pattern: `[Package PACKAGE_TYPE PACKAGE_ARCH] DESCRIBE_THE_REASONING_HERE`. 
+Travis Yaml configuration allows the user to combine package type and architecture as necessary to regenerate the current stable release (For example tag v1.15.0 as of 4th of May 2019) +Sample patterns to trigger building of packages for all AMD64 supported architecture: +- '[Package AMD64 RPM]': Build & publish all amd64 available RPM packages +- '[Package AMD64 DEB]': Build & publish all amd64 available DEB packages diff --git a/.travis/package_management/build_package_in_container.sh b/.travis/package_management/build_package_in_container.sh new file mode 100755 index 0000000000..2719e7b6b3 --- /dev/null +++ b/.travis/package_management/build_package_in_container.sh @@ -0,0 +1,82 @@ +#!/usr/bin/env bash +# +# Entry point for package build process +# +# Copyright: SPDX-License-Identifier: GPL-3.0-or-later +# +# Author : Pavlos Emm. Katsoulakis (paul@netdata.cloud) +#shellcheck disable=SC1091 +set -e + +# If we are not in netdata git repo, at the top level directory, fail +TOP_LEVEL=$(basename "$(git rev-parse --show-toplevel)") +CWD=$(git rev-parse --show-cdup) +if [ -n "$CWD" ] || [ ! "${TOP_LEVEL}" == "netdata" ]; then + echo "Run as .travis/package_management/$(basename "$0") from top level directory of netdata git repository" + echo "Docker build process aborted" + exit 1 +fi + +source .travis/package_management/functions.sh || (echo "Failed to load packaging library" && exit 1) + +# Check for presence of mandatory environment variables +if [ -z "${BUILD_STRING}" ]; then + echo "No Distribution was defined. Make sure BUILD_STRING is set on the environment before running this script" + exit 1 +fi + +if [ -z "${BUILDER_NAME}" ]; then + echo "No builder account and container name defined. Make sure BUILDER_NAME is set on the environment before running this script" + exit 1 +fi + +if [ -z "${BUILD_DISTRO}" ]; then + echo "No build distro information defined. 
Make sure BUILD_DISTRO is set on the environment before running this script" + exit 1 +fi + +if [ -z "${BUILD_RELEASE}" ]; then + echo "No build release information defined. Make sure BUILD_RELEASE is set on the environment before running this script" + exit 1 +fi + +if [ -z "${PACKAGE_TYPE}" ]; then + echo "No build release information defined. Make sure PACKAGE_TYPE is set on the environment before running this script" + exit 1 +fi + +# Detect architecture and load extra variables needed +detect_arch_from_commit + +case "${BUILD_ARCH}" in +"all") + echo "* * * Building all architectures, amd64 and i386 * * *" + echo "Building for amd64.." + export CONTAINER_NAME="${BUILDER_NAME}-${BUILD_DISTRO}${BUILD_RELEASE}-amd64" + export LXC_CONTAINER_ROOT="/var/lib/lxc/${CONTAINER_NAME}/rootfs" + .travis/package_management/"${PACKAGE_TYPE}"/trigger_lxc_build.py "${CONTAINER_NAME}" + + echo "Building for arm64.." + export CONTAINER_NAME="${BUILDER_NAME}-${BUILD_DISTRO}${BUILD_RELEASE}-arm64" + export LXC_CONTAINER_ROOT="/var/lib/lxc/${CONTAINER_NAME}/rootfs" + .travis/package_management/"${PACKAGE_TYPE}"/trigger_lxc_build.py "${CONTAINER_NAME}" + + echo "Building for i386.." + export CONTAINER_NAME="${BUILDER_NAME}-${BUILD_DISTRO}${BUILD_RELEASE}-i386" + export LXC_CONTAINER_ROOT="/var/lib/lxc/${CONTAINER_NAME}/rootfs" + .travis/package_management/"${PACKAGE_TYPE}"/trigger_lxc_build.py "${CONTAINER_NAME}" + + ;; +"amd64"|"arm64"|"i386") + echo "Building for ${BUILD_ARCH}.." + export CONTAINER_NAME="${BUILDER_NAME}-${BUILD_DISTRO}${BUILD_RELEASE}-${BUILD_ARCH}" + export LXC_CONTAINER_ROOT="/var/lib/lxc/${CONTAINER_NAME}/rootfs" + .travis/package_management/"${PACKAGE_TYPE}"/trigger_lxc_build.py "${CONTAINER_NAME}" + ;; +*) + echo "Unknown build architecture '${BUILD_ARCH}', nothing to do for build" + exit 1 + ;; +esac + +echo "Build process completed!" 
diff --git a/.travis/package_management/create_lxc_for_build.sh b/.travis/package_management/create_lxc_for_build.sh new file mode 100755 index 0000000000..83ef9d1fc1 --- /dev/null +++ b/.travis/package_management/create_lxc_for_build.sh @@ -0,0 +1,101 @@ +#!/usr/bin/env bash +# +# This script generates an LXC container and starts it up +# Once the script completes successfully, a container has become available for usage +# The container image to be used and the container name to be set, are part of variables +# that must be present for the script to work +# +# Copyright: SPDX-License-Identifier: GPL-3.0-or-later +# +# Author : Pavlos Emm. Katsoulakis (paul@netdata.cloud) +# shellcheck disable=SC1091 +set -e + +source .travis/package_management/functions.sh || (echo "Failed to load packaging library" && exit 1) + +# If we are not in netdata git repo, at the top level directory, fail +TOP_LEVEL=$(basename "$(git rev-parse --show-toplevel)") +CWD=$(git rev-parse --show-cdup) +if [ -n "$CWD" ] || [ ! "${TOP_LEVEL}" == "netdata" ]; then + echo "Run as .travis/package_management/$(basename "$0") from top level directory of netdata git repository" + echo "LXC Container creation aborted" + exit 1 +fi + +# Check for presence of mandatory environment variables +if [ -z "${BUILD_STRING}" ]; then + echo "No Distribution was defined. Make sure BUILD_STRING is set on the environment before running this script" + exit 1 +fi + +if [ -z "${BUILDER_NAME}" ]; then + echo "No builder account and container name defined. Make sure BUILDER_NAME is set on the environment before running this script" + exit 1 +fi + +if [ -z "${BUILD_DISTRO}" ]; then + echo "No build distro information defined. Make sure BUILD_DISTRO is set on the environment before running this script" + exit 1 +fi + +if [ -z "${BUILD_RELEASE}" ]; then + echo "No build release information defined. 
Make sure BUILD_RELEASE is set on the environment before running this script" + exit 1 +fi + +if [ -z "${PACKAGE_TYPE}" ]; then + echo "No build release information defined. Make sure PACKAGE_TYPE is set on the environment before running this script" + exit 1 +fi + +# Detect architecture and load extra variables needed +detect_arch_from_commit + +echo "Creating LXC container ${BUILDER_NAME}/${BUILD_STRING}/${BUILD_ARCH}...." + +case "${BUILD_ARCH}" in +"all") + # i386 + echo "Creating LXC Container for i386.." + export CONTAINER_NAME="${BUILDER_NAME}-${BUILD_DISTRO}${BUILD_RELEASE}-i386" + export LXC_CONTAINER_ROOT="/var/lib/lxc/${CONTAINER_NAME}/rootfs" + lxc-create -n "${CONTAINER_NAME}" -t "download" -- --dist "${BUILD_DISTRO}" --release "${BUILD_RELEASE}" --arch "i386" --no-validate + + echo "Container(s) ready. Configuring container(s).." + .travis/package_management/"${PACKAGE_TYPE}"/configure_lxc_environment.py "${CONTAINER_NAME}" + + # amd64 + echo "Creating LXC Container for amd64.." + export CONTAINER_NAME="${BUILDER_NAME}-${BUILD_DISTRO}${BUILD_RELEASE}-amd64" + export LXC_CONTAINER_ROOT="/var/lib/lxc/${CONTAINER_NAME}/rootfs" + lxc-create -n "${CONTAINER_NAME}" -t "download" -- --dist "${BUILD_DISTRO}" --release "${BUILD_RELEASE}" --arch "amd64" --no-validate + + echo "Container(s) ready. Configuring container(s).." + .travis/package_management/"${PACKAGE_TYPE}"/configure_lxc_environment.py "${CONTAINER_NAME}" + + # arm64 + echo "Creating LXC Container for arm64.." + export CONTAINER_NAME="${BUILDER_NAME}-${BUILD_DISTRO}${BUILD_RELEASE}-arm64" + export LXC_CONTAINER_ROOT="/var/lib/lxc/${CONTAINER_NAME}/rootfs" + lxc-create -n "${CONTAINER_NAME}" -t "download" -- --dist "${BUILD_DISTRO}" --release "${BUILD_RELEASE}" --arch "arm64" --no-validate + + echo "Container(s) ready. Configuring container(s).." 
+ .travis/package_management/"${PACKAGE_TYPE}"/configure_lxc_environment.py "${CONTAINER_NAME}" + ;; +"i386"|"amd64"|"arm64") + # AMD64 or i386 + echo "Creating LXC Container for ${BUILD_ARCH}.." + export CONTAINER_NAME="${BUILDER_NAME}-${BUILD_DISTRO}${BUILD_RELEASE}-${BUILD_ARCH}" + export LXC_CONTAINER_ROOT="/var/lib/lxc/${CONTAINER_NAME}/rootfs" + lxc-create -n "${CONTAINER_NAME}" -t "download" -- --dist "${BUILD_DISTRO}" --release "${BUILD_RELEASE}" --arch "${BUILD_ARCH}" --no-validate + + echo "Container(s) ready. Configuring container(s).." + .travis/package_management/"${PACKAGE_TYPE}"/configure_lxc_environment.py "${CONTAINER_NAME}" + ;; +*) + echo "Unknown BUILD_ARCH value '${BUILD_ARCH}' given, process failed" + exit 1 + ;; +esac + +echo "..LXC creation complete!" diff --git a/.travis/package_management/deb/configure_lxc_environment.py b/.travis/package_management/deb/configure_lxc_environment.py new file mode 100755 index 0000000000..496cdf5aa5 --- /dev/null +++ b/.travis/package_management/deb/configure_lxc_environment.py @@ -0,0 +1,71 @@ +#!/usr/bin/env python3 +# +# Prepare the build environment within the container +# The script attaches to the running container and does the following: +# 1) Create the container +# 2) Start the container up +# 3) Create the builder user +# 4) Prepare the environment for DEB build +# +# Copyright: SPDX-License-Identifier: GPL-3.0-or-later +# +# Author : Pavlos Emm. 
Katsoulakis <paul@netdata.cloud> + +import os +import sys +import lxc + +def run_command(command): + print ("Running command: %s" % command) + command_result = container.attach_wait(lxc.attach_run_command, command) + + if command_result != 0: + raise Exception("Command failed with exit code %d" % command_result) + +if len(sys.argv) != 2: + print ('You need to provide a container name to get things started') + sys.exit(1) +container_name=sys.argv[1] + +# Setup the container object +print ("Defining container %s" % container_name) +container = lxc.Container(container_name) +if not container.defined: + raise Exception("Container %s not defined!" % container_name) + +# Start the container +if not container.start(): + raise Exception("Failed to start the container") + +if not container.running or not container.state == "RUNNING": + raise Exception('Container %s is not running, configuration process aborted ' % container_name) + +# Wait for connectivity +print ("Waiting for container connectivity to start configuration sequence") +if not container.get_ips(timeout=30): + raise Exception("Timeout while waiting for container") + +# Run the required activities now +# 1. Create the builder user +print ("1. Adding user %s" % os.environ['BUILDER_NAME']) +run_command(["useradd", "-m", os.environ['BUILDER_NAME']]) + +# Fetch package dependencies for the build +print ("2. 
Installing package dependencies within LXC container") +run_command(["apt-get", "update", "-y"]) +run_command(["apt-get", "install", "-y", "sudo"]) +run_command(["apt-get", "install", "-y", "wget"]) +run_command(["apt-get", "install", "-y", "bash"]) +run_command(["wget", "-T", "15", "-O", "~/.install-required-packages.sh", "https://raw.githubusercontent.com/netdata/netdata-demo-site/master/install-required-packages.sh"]) +run_command(["bash", "~/.install-required-packages.sh", "netdata", "--dont-wait", "--non-interactive"]) + +# Download the source +dest_archive="/home/%s/netdata-%s.tar.gz" % (os.environ['BUILDER_NAME'],os.environ['BUILD_VERSION']) +release_url="https://github.com/netdata/netdata/releases/download/%s/netdata-%s.tar.gz" % (os.environ['BUILD_VERSION'], os.environ['BUILD_VERSION']) +print ("3. Fetch netdata source (%s -> %s)" % (release_url, dest_archive)) +run_command(["sudo", "-u", os.environ['BUILDER_NAME'], "wget", "-T", "15", "--output-document=" + dest_archive, release_url]) + +print ("4. Extracting directory contents to /home " + os.environ['BUILDER_NAME']) +run_command(["sudo", "-u", os.environ['BUILDER_NAME'], "tar", "xf", dest_archive, "-C", "/home/" + os.environ['BUILDER_NAME']]) + +print ('Done!') diff --git a/.travis/package_management/deb/trigger_lxc_build.py b/.travis/package_management/deb/trigger_lxc_build.py new file mode 100755 index 0000000000..839db8e80b --- /dev/null +++ b/.travis/package_management/deb/trigger_lxc_build.py @@ -0,0 +1,58 @@ +#!/usr/bin/env python3 +# +# This script is responsible for running the RPM build on the running container +# +# Copyright: SPDX-License-Identifier: GPL-3.0-or-later +# +# Author : Pavlos Emm. 
Katsoulakis <paul@netdata.cloud> + +import os +import sys +import lxc + +def run_command(command): + print ("Running command: %s" % command) + command_result = container.attach_wait(lxc.attach_run_command, command) + + if command_result != 0: + raise Exception("Command failed with exit code %d" % command_result) + +print (sys.argv) +if len(sys.argv) != 2: + print ('You need to provide a container name to get things started') + sys.exit(1) +container_name=sys.argv[1] + +# Load the container, break if its not there +print ("Starting up container %s" % container_name) +container = lxc.Container(container_name) +if not container.defined: + raise Exception("Container %s does not exist!" % container_name) + +# Check if the container is running, attempt to start it up in case its not running +if not container.running or not container.state == "RUNNING": + print ('Container %s is not running, attempt to start it up' % container_name) + + # Start the container + if not container.start(): + raise Exception("Failed to start the container") + + if not container.running or not container.state == "RUNNING": + raise Exception('Container %s is not running, configuration process aborted ' % container_name) + +# Wait for connectivity +if not container.get_ips(timeout=30): + raise Exception("Timeout while waiting for container") + +print ("Setting up EMAIL and DEBFULLNAME variables required by the build tools") +os.environ["EMAIL"] = "bot@netdata.cloud" +os.environ["DEBFULLNAME"] = "Netdata builder" + +# Run the build process on the container +print ("Starting DEB build process, running dh-make") +new_version = os.environ["BUILD_VERSION"].replace('v', '') + +print ("Building the package") +run_command(["sudo", "-u", os.environ['BUILDER_NAME'], "dpkg-buildpackage", "--host-arch", "amd64", "--target-arch", "amd64", "--post-clean", "--pre-clean", "--build=binary", "--release-by=\"Netdata Builder\"", "--build-by=\"Netdata Builder\""]) + +print ('Done!') diff --git 
diff --git a/.travis/package_management/functions.sh b/.travis/package_management/functions.sh
new file mode 100644
index 0000000000..9a467ffe12
--- /dev/null
+++ b/.travis/package_management/functions.sh
@@ -0,0 +1,33 @@
+# no-shebang-needed-its-a-library
+#
+# Utility functions for packaging in travis CI
+#
+# Copyright: SPDX-License-Identifier: GPL-3.0-or-later
+#
+# Author : Pavlos Emm. Katsoulakis (paul@netdata.cloud)
+#shellcheck disable=SC2148
+set -e
+
+function detect_arch_from_commit {
+	case "${TRAVIS_COMMIT_MESSAGE}" in
+	"[Package AMD64"*)
+		export BUILD_ARCH="amd64"
+		;;
+	"[Package i386"*)
+		export BUILD_ARCH="i386"
+		;;
+	"[Package ALL"*)
+		export BUILD_ARCH="all"
+		;;
+	"[Package arm64"*)
+		export BUILD_ARCH="arm64"
+		;;
+
+	*)
+		echo "Unknown build architecture '${BUILD_ARCH}' provided"
+		exit 1
+		;;
+	esac
+
+	echo "Detected build architecture ${BUILD_ARCH}"
+}
diff --git a/.travis/package_management/package_cloud_wrapper.sh b/.travis/package_management/package_cloud_wrapper.sh
new file mode 100755
index 0000000000..48a372d37b
--- /dev/null
+++ b/.travis/package_management/package_cloud_wrapper.sh
@@ -0,0 +1,48 @@
+#!/usr/bin/env bash
+#
+# This is a tool to help removal of packages from packagecloud.io
+# It utilizes the package_cloud utility provided from packagecloud.io
+#
+# Depends on:
+# 1) package cloud gem (detects absence and installs it)
+#
+# Requires:
+# 1) PKG_CLOUD_TOKEN variable exported
+# 2) To properly install package_cloud when not found, it requires: ruby gcc gcc-c++ ruby-devel
+#
+# Copyright: SPDX-License-Identifier: GPL-3.0-or-later
+#
+# Author : Pavlos Emm. Katsoulakis (paul@netdata.cloud)
+#shellcheck disable=SC2068,SC2145
+set -e
+PKG_CLOUD_CONFIG="$HOME/.package_cloud_configuration.cfg"
+
+# If we are not in netdata git repo, at the top level directory, fail
+TOP_LEVEL=$(basename "$(git rev-parse --show-toplevel)")
+CWD=$(git rev-parse --show-cdup)
+if [ -n "$CWD" ] || [ ! "${TOP_LEVEL}" == "netdata" ]; then
+	echo "Run as .travis/package_management/$(basename "$0") from top level directory of netdata git repository"
+	echo "Docker build process aborted"
+	exit 1
+fi
+
+# Install dependency if not there
+if ! command -v package_cloud > /dev/null 2>&1; then
+	echo "No package cloud gem found, installing"
+	gem install -V package_cloud || (echo "Package cloud installation failed. you might want to check if required dependencies are there (ruby gcc gcc-c++ ruby-devel)" && exit 1)
+else
+	echo "Found package_cloud gem, continuing"
+fi
+
+# Check for required token and prepare config
+if [ -z "${PKG_CLOUD_TOKEN}" ]; then
+	echo "Please set PKG_CLOUD_TOKEN to be able to use ${0}"
+	exit 1
+fi
+echo "{\"url\":\"https://packagecloud.io\",\"token\":\"${PKG_CLOUD_TOKEN}\"}" > "${PKG_CLOUD_CONFIG}"
+
+echo "Executing package_cloud with config ${PKG_CLOUD_CONFIG} and parameters $@"
+package_cloud $@ --config="${PKG_CLOUD_CONFIG}"
+
+rm -rf "${PKG_CLOUD_CONFIG}"
+echo "Done!"
diff --git a/.travis/package_management/prepare_packages.sh b/.travis/package_management/prepare_packages.sh
new file mode 100755
index 0000000000..1fb26a95ed
--- /dev/null
+++ b/.travis/package_management/prepare_packages.sh
@@ -0,0 +1,56 @@
+#!/usr/bin/env bash
+#
+# Utility that gathers generated packages,
+# puts them together in a local folder for deploy facility to pick up
+#
+# Copyright: SPDX-License-Identifier: GPL-3.0-or-later
+#
+# Author : Pavlos Emm. Katsoulakis (paul@netdata.cloud)
+#shellcheck disable=SC2068
+set -e
+
+# If we are not in netdata git repo, at the top level directory, fail
+TOP_LEVEL=$(basename "$(git rev-parse --show-toplevel)")
+CWD=$(git rev-parse --show-cdup)
+if [ -n "$CWD" ] || [ ! "${TOP_LEVEL}" == "netdata" ]; then
+	echo "Run as .travis/package_management/$(basename "$0") from top level directory of netdata git repository"
+	echo "Package preparation aborted"
+	exit 1
+fi
+
+export LXC_ROOT="/var/lib/lxc"
+
+# Go through the containers created for packaging and pick up all generated packages
+CREATED_CONTAINERS=$(ls -A "${LXC_ROOT}")
+for d in ${CREATED_CONTAINERS[@]}; do
+	echo "Picking up packaging contents from ${d}"
+
+	# Pick up any RPMS from builder
+	RPM_BUILD_PATH="${LXC_ROOT}/${d}/rootfs/home/${BUILDER_NAME}/rpmbuild"
+	echo "Checking folder ${RPM_BUILD_PATH} for RPMS and SRPMS"
+
+	if [ -d "${RPM_BUILD_PATH}/RPMS" ]; then
+		echo "Copying any RPMS in '${RPM_BUILD_PATH}', copying over the following:"
+		ls -ltrR "${RPM_BUILD_PATH}/RPMS"
+		[[ -d "${RPM_BUILD_PATH}/RPMS/x86_64" ]] && cp -r "${RPM_BUILD_PATH}"/RPMS/x86_64/* "${PACKAGES_DIRECTORY}"
+		[[ -d "${RPM_BUILD_PATH}/RPMS/i386" ]] && cp -r "${RPM_BUILD_PATH}"/RPMS/i386/* "${PACKAGES_DIRECTORY}"
+		[[ -d "${RPM_BUILD_PATH}/RPMS/i686" ]] && cp -r "${RPM_BUILD_PATH}"/RPMS/i686/* "${PACKAGES_DIRECTORY}"
+	fi
+
+	if [ -d "${RPM_BUILD_PATH}/SRPMS" ]; then
+		echo "Copying any SRPMS in '${RPM_BUILD_PATH}', copying over the following:"
+		ls -ltrR "${RPM_BUILD_PATH}/SRPMS"
+		[[ -d "${RPM_BUILD_PATH}/SRPMS/x86_64" ]] && cp -r "${RPM_BUILD_PATH}"/SRPMS/x86_64/* "${PACKAGES_DIRECTORY}"
+		[[ -d "${RPM_BUILD_PATH}/SRPMS/i386" ]] && cp -r "${RPM_BUILD_PATH}"/SRPMS/i386/* "${PACKAGES_DIRECTORY}"
+		[[ -d "${RPM_BUILD_PATH}/SRPMS/i686" ]] && cp -r "${RPM_BUILD_PATH}"/SRPMS/i686/* "${PACKAGES_DIRECTORY}"
+	fi
+
+	# Pick up any DEBs from builder
+	DEB_BUILD_PATH="${d}/home/${BUILDER_NAME}/build-area"
+	echo "Checking folder ${DEB_BUILD_PATH} for DEB packages"
+	#TODO: During debian clean up we 'll fill this up
+
+done
+
+chmod -R 777 "${PACKAGES_DIRECTORY}"
+echo "Packaging contents ready to ship!"
diff --git a/.travis/package_management/rpm/configure_lxc_environment.py b/.travis/package_management/rpm/configure_lxc_environment.py
new file mode 100755
index 0000000000..bed75d7489
--- /dev/null
+++ b/.travis/package_management/rpm/configure_lxc_environment.py
@@ -0,0 +1,89 @@
+#!/usr/bin/env python3
+#
+# Prepare the build environment within the container
+# The script attaches to the running container and does the following:
+# 1) Create the container
+# 2) Start the container up
+# 3) Create the builder user
+# 4) Prepare the environment for RPM build
+#
+# Copyright: SPDX-License-Identifier: GPL-3.0-or-later
+#
+# Author   : Pavlos Emm. Katsoulakis <paul@netdata.cloud>
+
+import os
+import sys
+import lxc
+
+def run_command(command):
+    print ("Running command: %s" % command)
+    command_result = container.attach_wait(lxc.attach_run_command, command)
+
+    if command_result != 0:
+        raise Exception("Command failed with exit code %d" % command_result)
+
+if len(sys.argv) != 2:
+    print ('You need to provide a container name to get things started')
+    sys.exit(1)
+container_name=sys.argv[1]
+
+# Setup the container object
+print ("Defining container %s" % container_name)
+container = lxc.Container(container_name)
+if not container.defined:
+    raise Exception("Container %s not defined!" % container_name)
+
+# Start the container
+if not container.start():
+    raise Exception("Failed to start the container")
+
+if not container.running or not container.state == "RUNNING":
+    raise Exception('Container %s is not running, configuration process aborted ' % container_name)
+
+# Wait for connectivity
+print ("Waiting for container connectivity to start configuration sequence")
+if not container.get_ips(timeout=30):
+    raise Exception("Timeout while waiting for container")
+
+# Run the required activities now
+# Create the builder user
+print ("1. Adding user %s" % os.environ['BUILDER_NAME'])
+run_command(["useradd", "-m", os.environ['BUILDER_NAME']])
+
+# Fetch package dependencies for the build
+print ("2. Installing package dependencies within LXC container")
+if str(os.environ["REPO_TOOL"]).count("zypper") == 1:
+    run_command([os.environ["REPO_TOOL"], "clean", "-a"])
+    run_command([os.environ["REPO_TOOL"], "--no-gpg-checks", "update", "-y"])
+else:
+    run_command([os.environ["REPO_TOOL"], "update", "-y"])
+
+run_command([os.environ["REPO_TOOL"], "install", "-y", "sudo"])
+run_command([os.environ["REPO_TOOL"], "install", "-y", "wget"])
+run_command([os.environ["REPO_TOOL"], "install", "-y", "bash"])
+run_command(["wget", "-T", "15", "-O", "~/.install-required-packages.sh", "https://raw.githubusercontent.com/netdata/netdata-demo-site/master/install-required-packages.sh"])
+run_command(["bash", "~/.install-required-packages.sh", "netdata", "--dont-wait", "--non-interactive"])
+
+print ("3. Setting up macros")
+run_command(["sudo", "-u", os.environ['BUILDER_NAME'], "/bin/echo", "'%_topdir %(echo /home/" + os.environ['BUILDER_NAME'] + ")/rpmbuild' > /home/" + os.environ['BUILDER_NAME'] + "/.rpmmacros"])
+
+print ("4. Create rpmbuild directory")
+run_command(["sudo", "-u", os.environ['BUILDER_NAME'], "mkdir", "-p", "/home/" + os.environ['BUILDER_NAME'] + "/rpmbuild/BUILD"])
+run_command(["sudo", "-u", os.environ['BUILDER_NAME'], "mkdir", "-p", "/home/" + os.environ['BUILDER_NAME'] + "/rpmbuild/RPMS"])
+run_command(["sudo", "-u", os.environ['BUILDER_NAME'], "mkdir", "-p", "/home/" + os.environ['BUILDER_NAME'] + "/rpmbuild/SOURCES"])