Jenkins Pipeline Shared Library that contains additional features for Git, Maven, etc. in an object-oriented manner, as well as some additional pipeline steps.
- Usage
- Syntax completion
- Maven
- Gradle
- Git
- Docker
- Dockerfile
- SonarQube
- Changelog
- GitHub
- GitFlow
- SCM-Manager
- HttpClient
- K3d
- DoguRegistry
- Bats
- Makefile
- Markdown
- Steps
- Examples
- Install Pipeline: GitHub Groovy Libraries
- Use the Library in any Jenkinsfile like so
@Library('github.com/cloudogu/ces-build-lib@6cd41e0')
import com.cloudogu.ces.cesbuildlib.*
- Best practice: Use a defined version (e.g. a git commit hash or a git tag, such as `6cd41e0` or `1.67.0` in the example above) and not a branch such as `develop`. Otherwise, your build might change when there is a new commit on the branch. Using branches is like using snapshots!
- When build executors are Docker containers and you intend to use their Docker host in the Pipeline: Please see #8.
You can get syntax completion in your `Jenkinsfile` when using the ces-build-lib, by adding it as a dependency to your project.
You can get the source.jar from JitPack.
With Maven this can be done like so:
- Define the JitPack repository:
<repositories>
    <repository>
        <id>jitpack.io</id>
        <url>https://jitpack.io</url>
    </repository>
</repositories>
- And the ces-build-lib dependency:
<dependency>
    <!-- Shared Library used in Jenkins. Including this in maven provides code completion in Jenkinsfile. -->
    <groupId>com.github.cloudogu</groupId>
    <artifactId>ces-build-lib</artifactId>
    <!-- Keep this version in sync with the one used in Jenkinsfile -->
    <version>888733b</version>
    <!-- Don't ship this dependency with the app -->
    <optional>true</optional>
    <!-- Don't inherit this dependency! -->
    <scope>provided</scope>
</dependency>
Or you can download the file (and sources) manually and add them to your IDE. For example:
https://jitpack.io/com/github/cloudogu/ces-build-lib/9fa7ac4/ces-build-lib-9fa7ac4.jar
https://jitpack.io/com/github/cloudogu/ces-build-lib/9fa7ac4/ces-build-lib-9fa7ac4-sources.jar
For further details and options refer to the JitPack website.
This is confirmed to work with IntelliJ IDEA.
Run maven from a local tool installation on Jenkins.
See MavenLocal
def mvnHome = tool 'M3'
def javaHome = tool 'OpenJDK-8'
Maven mvn = new MavenLocal(this, mvnHome, javaHome)
stage('Build') {
mvn 'clean install'
}
Run maven using a Maven Wrapper from the local repository.
Similar to MavenLocal
you can use the Maven Wrapper with a JDK from a local tool installation on Jenkins:
def javaHome = tool 'OpenJDK-8'
Maven mvn = new MavenWrapper(this, javaHome)
stage('Build') {
mvn 'clean install'
}
It is also possible to not specify a JDK tool and use the Java Runtime on the build agent's `PATH`. However, experience tells us that this is non-deterministic and will result in unpredictable behavior. So: better set an explicit Java tool to be used, or use MavenWrapperInDocker.
Maven mvn = new MavenWrapper(this)
stage('Build') {
mvn 'clean install'
}
Run Maven in a Docker container. This can be helpful when
- constant ports are bound during the build that cause port conflicts in concurrent builds, for example when running integration tests or unit tests that use infrastructure that binds to ports, or
- one Maven repository per build is required, for example when concurrent builds of a multi-module project install the same snapshot versions.
The builds are run inside the official Maven containers from Docker Hub.
See MavenInDocker
Maven mvn = new MavenInDocker(this, "3.5.0-jdk-8")
stage('Build') {
mvn 'clean install'
}
It's also possible to use the MavenWrapper in a Docker Container. Here, the Docker container is responsible for providing the JDK.
Maven mvn = new MavenWrapperInDocker(this, 'adoptopenjdk/openjdk11:jdk-11.0.10_9-alpine')
stage('Build') {
mvn 'clean install'
}
Since Oracle's announcement of shorter free JDK support, plenty of JDK images have appeared on public container image registries, where `adoptopenjdk` is just one option. The choice is yours.
The following features apply to plain Maven as well as Maven Wrapper in Docker.
If you run Docker from your Maven build, because you use the docker-maven-plugin for example, you can pass the Docker host through to the Maven container like so:
stage('Unit Test') {
// The UI module build runs inside a docker container, so pass the docker host to the maven container
mvn.enableDockerHost = true
mvn 'docker:start'
// Don't expose docker host more than necessary
mvn.enableDockerHost = false
}
There are some security-related concerns about this. See Docker.
If you would like to use Jenkins' local Maven repo (or, more accurately, the one of the build executor, typically at `/home/jenkins/.m2`) instead of a Maven repo per job (within each workspace), you can use the following option:
Maven mvn = new MavenInDocker(this, "3.5.0-jdk-8")
mvn.useLocalRepoFromJenkins = true
This speeds up the first build and uses less memory. However, concurrent builds of multi-module projects building the same version (e.g. a SNAPSHOT) might overwrite their dependencies, causing non-deterministic build failures.
It is possible to set credentials for a registry login by setting a credentialsId and custom image with registry prefix.
Maven mvn = new MavenInDocker(this, "3.5.0-jdk-8") // uses image: maven:3.5.0-jdk-8 from DockerHub
Maven mvn1 = new MavenInDocker(this, "mirror.gcr.io/maven:latest") // uses image: maven:latest from Google
Maven mvn2 = new MavenInDocker(this, "3.5.0-jdk-8" , credentialsId) // loads the username and password credentials from jenkins
The default is the default Maven behavior: `/home/jenkins/.m2` is used.
If you want to use a separate Maven repo per workspace (e.g. to avoid concurrent builds overwriting dependencies of multi-module projects building the same version, e.g. a SNAPSHOT), the following will work:
mvn.additionalArgs += " -Dmaven.repo.local=${env.WORKSPACE}/.m2"
If you need to execute more steps inside the Maven container, you can pass a closure to your Maven instance that is lazily evaluated within the container. The String value returned by the closure is used as the Maven arguments.
Maven mvn = new MavenInDocker(this, "3.5.0-jdk-8")
echo "Outside Maven Container! ${new Docker(this).findIp()}"
mvn {
echo "Insinde Maven Container! ${new Docker(this).findIp()}"
'clean package -DskipTests'
}
You can define maven mirrors as follows:
mvn.useMirrors([name: 'maven-proxy', mirrorOf: 'central', url: 'https://maven.example.org'],
[name: 'google-maven', mirrorOf: 'central', url: 'https://maven-central.storage.googleapis.com/maven2/'],
)
If you specified one or more `<repository>` in your `pom.xml` that requires authentication, you can pass these credentials to your ces-build-lib `Maven` instance like so:

mvn.useRepositoryCredentials([id: 'ces', credentialsId: 'nexusSystemUserCredential'],
                             [id: 'another', credentialsId: 'nexusSystemUserCredential'])

Note that the `id` must match the one specified in your `pom.xml` and the credentials ID must belong to a username and password credential defined in Jenkins.
ces-build-lib makes deploying to Nexus repositories easy, even when it includes signing of the artifacts and usage of the Nexus staging plugin (as necessary for Maven Central or other Nexus Repository Pro instances).
The simplest use case is to deploy to a Nexus repo (not Maven Central):
- Just set the repository using `Maven.useRepositoryCredentials()` by passing a Nexus username and password/access token as a Jenkins username and password credential and
  - either a repository ID that matches a `<distributionManagement><repository>` (or `<snapshotRepository>`, examples below) defined in your `pom.xml` (then, no `url` or `type` parameters are needed) (`distributionManagement` > `snapshotRepository` or `repository` (depending on the `version`) > `id`)
  - or a repository ID (you can choose) and the URL. In this case you can also specify a `type: 'Nexus2'` (defaults to Nexus3), as the base URLs differ. This approach is deprecated and might be removed from ces-build-lib in the future.
- Call `Maven.deployToNexusRepository()`. And that is it.
Simple Example:
// <distributionManagement> in pom.xml (preferred)
mvn.useRepositoryCredentials([id: 'ces', credentialsId: 'nexusSystemUserCredential'])

// Alternative: Distribution management via Jenkins (deprecated)
mvn.useRepositoryCredentials([id: 'ces', url: 'https://ecosystem.cloudogu.com/nexus', credentialsId: 'nexusSystemUserCredential', type: 'Nexus2'])

// Deploy
mvn.deployToNexusRepository()
Note that if the pom.xml's version contains `-SNAPSHOT`, the artifacts are automatically deployed to the snapshot repository (e.g. on oss.sonatype.org). Otherwise, the artifacts are deployed to the release repository (e.g. on oss.sonatype.org).
If you want to sign the artifacts before deploying, just set the credentials for signing before deploying, using `Maven.setSignatureCredentials()`, passing the secret key as ASC file (as Jenkins secret file credential) and the passphrase (as Jenkins secret text credential).
An ASC file can be exported via `gpg --export-secret-keys -a ABCDEFGH > secretkey.asc`.
See Working with PGP Signatures
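Putting it together, a signed deployment might look like this (a minimal sketch; the credential IDs are placeholders you would define in your Jenkins instance):

mvn.useRepositoryCredentials([id: 'ces', credentialsId: 'nexusSystemUserCredential'])
// Secret key as Jenkins secret file credential, passphrase as secret text credential (example IDs)
mvn.setSignatureCredentials('gpg-secretKey-asc-file', 'gpg-secretKey-passphrase')
mvn.deployToNexusRepository()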
Another option is to use the nexus-staging-maven-plugin instead of the default maven-deploy-plugin. This is useful if you deploy to a Nexus Repository Pro, such as Maven Central.
Just use `Maven.deployToNexusRepositoryWithStaging()` instead of `Maven.deployToNexusRepository()`.
When deploying to Maven Central, make sure that your `pom.xml` adheres to the requirements by Maven Central, as stated here.
Note that as of nexus-staging-maven-plugin version 1.6.8, it seems to read the distribution repositories from the pom.xml only.
That is, you need to specify them in your pom.xml; they cannot be passed by the ces-build-lib. For example, for Maven Central you need to add the following:
<distributionManagement>
<snapshotRepository>
<id>ossrh</id>
<url>https://oss.sonatype.org/content/repositories/snapshots</url>
</snapshotRepository>
<repository>
<id>ossrh</id>
<url>https://oss.sonatype.org/service/local/staging/deploy/maven2/</url>
</repository>
</distributionManagement>
In addition, you either have to pass a `url` to `useRepositoryCredentials()` or specify the nexus-staging-maven-plugin in your pom.xml:
<plugin>
<groupId>org.sonatype.plugins</groupId>
<artifactId>nexus-staging-maven-plugin</artifactId>
<!-- ... -->
<configuration>
<serverId>ossrh</serverId>
<nexusUrl>https://oss.sonatype.org/</nexusUrl>
</configuration>
</plugin>
Either way, the repository ID (here: `ossrh`) and the base Nexus URL (here: `https://oss.sonatype.org`) in `distributionManagement` and the nexus-staging-maven-plugin must conform to each other.
Summing up, here is an example for deploying to Maven Central:
// url is optional, if described in nexus-staging-maven-plugin in pom.xml
mvn.useRepositoryCredentials([id: 'ossrh', url: 'https://oss.sonatype.org', credentialsId: 'mavenCentral-UsernameAndAccessTokenCredential', type: 'Nexus2'])
mvn.setSignatureCredentials('mavenCentral-secretKey-asc-file','mavenCentral-secretKey-Passphrase')
mvn.deployToNexusRepositoryWithStaging()
Note that the staging of releases might well take 10 minutes. After that, the artifacts are in the release repository, which is later (feels like nightly) synced to Maven Central.
For an example see cloudogu/command-bus.
Similar to deploying artifacts as described above, we can also easily deploy a Maven site to a "raw" maven repository.
Note that the site plugin does not provide options to specify the target repository via the command line. That is, it has to be configured in the pom.xml like so:
<distributionManagement>
<site>
<id>ces</id>
<name>site repository cloudogu ecosystem</name>
<url>dav:https://your.domain/nexus/repository/Site-repo/${project.groupId}/${project.artifactId}/${project.version}/</url>
</site>
</distributionManagement>
Where `Site-repo` is the name of the raw repository that must exist in Nexus for the deployment to succeed.
Then, you can deploy the site as follows:
mvn.useRepositoryCredentials([id: 'ces', credentialsId: 'nexusSystemUserCredential'])
mvn.deploySiteToNexus()
Where
- the `id` parameter must match the one specified in the `pom.xml` (`ces` in the example above),
- the Nexus username and password/access token are passed as a Jenkins username and password credential (`nexusSystemUserCredential`),
- there is no difference between Nexus 2 and Nexus 3 regarding site deployments.
For an example see cloudogu/continuous-delivery-slides-example
Another option for `deployToNexusRepositoryWithStaging()` and `deployToNexusRepository()` is to pass additional Maven arguments to the deployment like so: `mvn.deployToNexusRepositoryWithStaging('-X')` (enables debug output).
Available from both local Maven and Maven in Docker.
mvn.getVersion()
mvn.getArtifactId()
mvn.getGroupId()
mvn.getMavenProperty('project.build.sourceEncoding')
See Maven
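For instance, these utilities can be combined with other parts of the library. A minimal sketch (the image name is a placeholder):

String version = mvn.getVersion()
// e.g. use the version from pom.xml as a Docker image tag (image name is a placeholder)
new Docker(this).build("your/app:${version}")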
It's also possible to use a GradleWrapper in a Docker Container. Here, the Docker container is responsible for providing the JDK.
Example:
String gradleDockerImage = 'openjdk:11.0.10-jdk'
Gradle gradlew = new GradleWrapperInDocker(this, gradleDockerImage)
stage('Build') {
gradlew "clean build"
}
Since Oracle's announcement of shorter free JDK support, plenty of JDK images have appeared on public container image registries, where `adoptopenjdk` is just one option. The choice is yours.
See Maven in Docker for passing credentials to the registry.
An extension to the `git` step that provides an API for some commonly used git commands and utilities.
Mostly, this is a convenient wrapper around `sh 'git ...'` calls.
Example:
Git git = new Git(this)
stage('Checkout') {
git 'https://your.repo'
/* Don't remove folders starting in "." like .m2 (maven), .npm, .cache, .local (bower), etc. */
git.clean('".*/"')
}
You can optionally pass `usernamePassword` (i.e. a String containing the ID that refers to the Jenkins credentials) to `Git` during construction. These are then used for cloning and pushing.
Note that the username and password are processed by a shell. Special characters in username or password might cause errors like `Unterminated quoted string`. So it's best to use a long password that only contains letters and numbers for now.
Git anonymousGit = new Git(this)
Git gitWithCreds = new Git(this, 'ourCredentials')
anonymousGit 'https://your.repo'
gitWithCreds 'https://your.repo' // Implicitly passed credentials
- `git.clean()` - Removes all untracked and unstaged files.
- `git.clean('".*/"')` - Removes all untracked and unstaged files, except folders starting in "." like .m2 (maven), .npm, .cache, .local (bower), etc.
- `git.branchName` - e.g. `feature/xyz/abc`
- `git.simpleBranchName` - e.g. `abc`
- `git.commitAuthorComplete` - e.g. `User Name <[email protected]>`
- `git.commitAuthorEmail` - e.g. `[email protected]`
- `git.commitAuthorName` - e.g. `User Name`
- `git.commitMessage` - Last commit message, e.g. `Implements new functionality...`
- `git.commitHash` - e.g. `fb1c8820df462272011bca5fddbe6933e91d69ed`
- `git.commitHashShort` - e.g. `fb1c882`
- `git.areChangesStagedForCommit()` - `true` if changes are staged for commit. If `false`, `git.commit()` will fail.
- `git.repositoryUrl` - e.g. `https://github.com/orga/repo.git`
- `git.gitHubRepositoryName` - e.g. `orga/repo`
- Tags - Note that the git plugin might not fetch tags for all builds. Run `sh "git fetch --tags"` to make sure.
  - `git.tag` - e.g. `1.0.0` or empty if not set
  - `git.isTag()` - is there a tag on the current commit?
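A small sketch that combines some of these properties (purely illustrative):

Git git = new Git(this)
echo "Building ${git.branchName} at ${git.commitHashShort} by ${git.commitAuthorName}"
if (git.isTag()) {
    echo "Current commit is tagged with ${git.tag}"
}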
Note that most changing operations offer parameters to specify an author. These parameters are optional. If not set, the author of the last commit will be used as author and committer. You can specify a different committer by setting the following fields:
git.committerName = 'the name'
git.committerEmail = '[email protected]'
It is recommended to set a different committer, so it's obvious those commits were done by Jenkins in the name of the author. This behaviour is implemented by GitHub for example when committing via the Web UI.
- `git.checkout('branchname')`
- `git.checkoutOrCreate('branchname')` - Creates a new branch if it does not exist
- `git.add('.')`
- `git.commit('message', 'Author', '[email protected]')`
- `git.commit('message')` - uses default author/committer (see above).
- `git.setTag('tag', 'message', 'Author', '[email protected]')`
- `git.setTag('tag', 'message')` - uses default author/committer (see above).
- `git.fetch()`
- `git.pull()` - pulls, and in case of merge, uses default author/committer (see above).
- `git.pull('refspec')` - pulls a specific refspec (e.g. `origin master`), and in case of merge, uses the name and email of the last committer as author and committer.
- `git.pull('refspec', 'Author', '[email protected]')`
- `git.merge('develop', 'Author', '[email protected]')`
- `git.merge('develop')` - uses default author/committer (see above).
- `git.mergeFastForwardOnly('master')`
- `git.push('origin master')` - pushes origin. Note: This always prepends `origin` if not present, for historical reasons (see #44). That is, right now it is impossible to push other remotes. This will change in the next major version of ces-build-lib. This limitation does not apply to other remote-related operations such as `pull()`, `fetch()` and `pushAndPullOnFailure()`. So it's recommended to explicitly mention the origin and not just the refspec:
  - Do: `git.push('origin master')`
  - Don't: `git.push('master')` because this will no longer work in the next major version.
- `git.pushAndPullOnFailure('refspec')` - pushes and pulls if push failed, e.g. because local and remote have diverged, then tries pushing again.
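As an illustration, an automated commit and push could look roughly like this (a sketch assuming the Git object was constructed with credentials; the branch name and commit message are placeholders):

git.checkoutOrCreate('feature/automated-changes')
// ... the build modifies some files here ...
git.add('.')
if (git.areChangesStagedForCommit()) {
    git.commit('Automated update')
    git.push('origin feature/automated-changes')
}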
The `Docker` class provides the default methods of the global docker variable provided by the docker plugin:

- `withRegistry(url, credentialsId = null, Closure body)`: Specifies a registry URL such as `https://docker.mycorp.com/`, plus an optional credentials ID to connect to it.
  Example:
  def dockerImage = docker.build("image/name:1.0", "folderOfDockerfile")
  docker.withRegistry("https://your.registry", 'credentialsId') {
      dockerImage.push()
  }
- `withServer(uri, credentialsId = null, Closure body)`: Specifies a server URI such as `tcp://swarm.mycorp.com:2376`, plus an optional credentials ID to connect to it.
- `withTool(toolName, Closure body)`: Specifies the name of a Docker installation to use, if any are defined in Jenkins global configuration. If unspecified, docker is assumed to be in the `$PATH` of the Jenkins agent.
- `image(id)`: Creates an Image object with a specified name or ID.
  Example:
  docker.image('google/cloud-sdk:164.0.0').inside("-e HOME=${pwd()}") { sh "echo something" }
  The image returned by the `Docker` class has additional features, see below.
- `build(image, args)`: Runs docker build to create and tag the specified image from a Dockerfile in the current directory. Additional args may be added, such as `'-f Dockerfile.other --pull --build-arg http_proxy=http://192.168.1.1:3128 .'`. Like docker build, args must end with the build context.
  Example:
  def dockerContainer = docker.build("image/name:1.0", "folderOfDockerfile").run("-e HOME=${pwd()}")
The `Docker` class provides additional convenience features:

- `String findIp(container)` returns the IP address for a docker container instance
- `String findIp()` returns the IP address in the current context: the docker host IP (when outside of a container) or the IP of the container this is running in
- `String findDockerHostIp()` returns the IP address of the docker host. Should work both if running inside or outside a container
- `String findEnv(container)` returns the environment variables set within the docker container as a string
- `boolean isRunningInsideOfContainer()` returns `true` if this step is executed inside a container, otherwise `false`
- `boolean isRunning(container)` returns `true` if the container is in state running, otherwise `false`
Example from Jenkinsfile:
Docker docker = new Docker(this)
def dockerContainer = docker.build("image/name:1.0").run()
waitUntil {
sleep(time: 10, unit: 'SECONDS')
return docker.isRunning(dockerContainer)
}
echo docker.findIp(dockerContainer)
echo docker.findEnv(dockerContainer)
- `id`: The image name with optional tag (mycorp/myapp, mycorp/myapp:latest) or ID (hexadecimal hash).
- `inside(String args = '', Closure body)`: Like `withRun` this starts a container for the duration of the body, but all external commands (sh) launched by the body run inside the container rather than on the host. These commands run in the same working directory (normally a Jenkins agent workspace), which means that the Docker server must be on localhost.
- `pull`: Runs docker pull. Not necessary before `run`, `withRun`, or `inside`.
- `run(String args = '', String command = "")`: Uses `docker run` to run the image, and returns a Container which you could stop later. Additional args may be added, such as `'-p 8080:8080 --memory-swap=-1'`. Optional command is equivalent to the Docker command specified after the `image()`. Records a run fingerprint in the build.
- `withRun(String args = '', String command = "", Closure body)`: Like `run` but stops the container as soon as its body exits, so you do not need a try-finally block.
- `tag(String tagName = image().parsedId.tag, boolean force = true)`: Runs docker tag to record a tag of this image (defaulting to the tag it already has). Will rewrite an existing tag if one exists.
- `push(String tagName = image().parsedId.tag, boolean force = true)`: Pushes an image to the registry after tagging it as with the tag method. For example, you can use `image().push 'latest'` to publish it as the latest version in its repository.
- `repoDigests()`: Returns the repo digests, a content addressable unique digest of an image that was pushed to or pulled from repositories.
  If the image was built locally and not pushed, returns an empty list.
  If the image was pulled from or pushed to a repo, returns a list containing one item.
  If the image was pulled from or pushed to multiple repos, might also contain more than one digest.
- `mountJenkinsUser()`: Setting this to `true` provides the user that executes the build within the docker container's `/etc/passwd`. This is necessary for some commands such as npm, ansible, git, id, etc. Those might exit with errors without a user present.
  Why? Note that Jenkins starts Docker containers in the pipeline with the -u parameter (e.g. `-u 1042:1043`). That is, the container does not run as root (which is a good thing from a security point of view). However, the userID/UID (e.g. `1042`) and the groupID/GID (e.g. `1043`) will most likely not be present within the container, which causes errors in some executables.
  How? Setting this will cause the creation of a `passwd` file that is mounted into a container started from this `image()` (triggered by the `run()`, `withRun()` and `inside()` methods). This `passwd` file contains the username, UID and GID of the user that executes the build and also sets the current workspace as `HOME` within the docker container.
- `mountDockerSocket()`: Setting this to `true` mounts the docker socket into the container.
  This allows the container to start other containers "next to" itself, that is "sibling" containers. Note that this is similar but not the same as "Docker In Docker".
  Note that this will make the docker host socket accessible from within the container. Use this wisely. Some people say you should not do this at all. On the other hand, the alternative would be to run a real docker host in a docker container, aka "docker in docker" or "dind" (which is possible). On this, however, other people say you should not do this at all. So let's stick to mounting the socket, which seems to cause fewer problems.
  This is also used by `MavenInDocker`.
- `installDockerClient(String version)`: Installs the docker client with the specified version inside the container. If no version parameter is passed, the lib tries to query the server version by calling `docker version`.
  This can be called in addition to `mountDockerSocket()`, when the "docker" CLI is required on the PATH.
  For available versions see here.
Examples:
Docker Container that uses its own docker client:
new Docker(this).image('docker') // contains the docker client binary
.mountJenkinsUser()
.mountDockerSocket()
.inside() {
sh 'whoami' // Would fail without mountJenkinsUser = true
sh 'id' // Would fail without mountJenkinsUser = true
// Start a "sibling" container and wait for it to return
sh 'docker run hello-world' // Would fail without mountDockerSocket = true
}
Docker container that does not have its own docker client
new Docker(this).image('kkarczmarczyk/node-yarn:8.0-wheezy')
.mountJenkinsUser()
.mountDockerSocket()
.installDockerClient('17.12.1')
.inside() {
// Start a "sibling" container and wait for it to return
sh 'docker run hello-world' // Would fail without mountDockerSocket = true & installDockerClient()
}
- If you should need to add additional arguments to `docker run`, you can do so globally by setting `ADDITIONAL_DOCKER_RUN_ARGS` as a global property at `https://your-jenkins/manage/configure#global-properties`.
  This can be used to globally fix certain bugs in Jenkins agents or their docker config.
The `Dockerfile` class provides functions to lint Dockerfiles. For example:
stage('Lint') {
Dockerfile dockerfile = new Dockerfile(this)
dockerfile.lint() // Lint with default configuration
dockerfile.lintWithConfig() // Use your own hadolint configuration with a .hadolint.yaml configuration file
}
The tool hadolint is used for linting. It has a lot of configuration parameters which can be set by creating a `.hadolint.yaml` file in your working directory.
See https://github.com/hadolint/hadolint#configure
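For illustration, such a configuration file could look like this (a sketch; the ignored rule IDs are arbitrary examples from hadolint's rule set, see the hadolint documentation for all options):

# Example .hadolint.yaml - ignore selected rules (rule IDs chosen arbitrarily for illustration)
ignored:
  - DL3018   # "Pin versions in apk add"
  - DL3059   # "Multiple consecutive RUN instructions"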
When analyzing code with SonarQube there are a couple of challenges that are solved using ces-build-lib's `SonarQube` class:
- Setting the branch name (note that this only works in Jenkins multi-branch pipeline builds, regular pipelines don't have information about branches - see #11)
- Analysis for Pull Requests
- Commenting on Pull Requests
- Updating commit status in GitHub for Pull Requests
- Using the SonarQube branch plugin (SonarQube 6.x, developer edition and sonarcloud.io)
In general, you can analyse with or without the SonarQube Plugin for Jenkins:

- `new SonarQube(this, [sonarQubeEnv: 'sonarQubeServerSetupInJenkins'])` requires the SonarQube plugin and the SonarQube server `sonarQubeServerSetupInJenkins` set up in your Jenkins instance. You can do this here: `https://yourJenkinsInstance/configure`.
- `new SonarQube(this, [token: 'secretTextCred', sonarHostUrl: 'https://ces/sonar'])` does not require the plugin and uses an access token, stored as secret text credential `secretTextCred` in your Jenkins instance.
- `new SonarQube(this, [usernamePassword: 'usrPwCred', sonarHostUrl: 'https://ces/sonar'])` does not require the plugin and uses a SonarQube user account, stored as username with password credential `usrPwCred` in your Jenkins instance.
With the `SonarQube` instance you can now analyze your code. When using the plugin (i.e. `sonarQubeEnv`) you can also wait for the quality gate status that is computed by SonarQube asynchronously. Note that this does not work for `token` and `usernamePassword`.
stage('Static Code Analysis') {
def sonarQube = new SonarQube(this, [sonarQubeEnv: 'sonarQubeServerSetupInJenkins'])
sonarQube.analyzeWith(new MavenInDocker(this, "3.5.0-jdk-8"))
sonarQube.timeoutInMinutes = 4
if (!sonarQube.waitForQualityGateWebhookToBeCalled()) {
unstable("Pipeline unstable due to SonarQube quality gate failure")
}
}
Note that

- Calling `waitForQualityGateWebhookToBeCalled()` requires a WebHook to be set up in your SonarQube server (globally or per project), that notifies Jenkins (url: `https://yourJenkinsInstance/sonarqube-webhook/`).
  See SonarQube Scanner for Jenkins.
- Jenkins will wait for the webhook with a default timeout of 2 minutes. For big projects this might be too short and can be configured with the `timeoutInMinutes` property.
- Calling `waitForQualityGateWebhookToBeCalled()` will only work when an analysis has been performed in the current job, i.e. `analyzeWith()` has been called, and in conjunction with `sonarQubeEnv`.
- When used in conjunction with SonarQubeCommunity/sonar-build-breaker, `waitForQualityGateWebhookToBeCalled()` will fail your build if the quality gate is not passed.
- For now, `SonarQube` can only analyze using `Maven`. Extending this to use the plain SonarQube Runner in the future should be easy, however.
By default, the `SonarQube` class uses the legacy logic of creating one SonarQube project per branch in a Jenkins Multibranch Pipeline project.
A more convenient alternative is the paid-version-only Branch Plugin or the sonarqube-community-branch-plugin, which has similar features but is difficult to install, not supported officially and does not allow for migration to the official branch plugin later on.
You can enable either branch plugin like so:
sonarQube.isUsingBranchPlugin = true
sonarQube.analyzeWith(mvn)
The branch plugin uses `master` as the integration branch. If you want to use a different branch than `master`, you have to use the `integrationBranch` parameter, e.g.:
def sonarQube = new SonarQube(this, [sonarQubeEnv: 'sonarQubeServerSetupInJenkins', integrationBranch: 'develop'])
sonarQube.isUsingBranchPlugin = true
sonarQube.analyzeWith(mvn)
Note that using the branch plugin requires a first analysis without branches.
You can do this on Jenkins or locally.
On Jenkins, you can achieve this by setting the following for the first run:
sonarQube.isIgnoringBranches = true
sonarQube.analyzeWith(mvn)
Recommendation: Use Jenkins' replay feature for this. Then commit the `Jenkinsfile` with `isUsingBranchPlugin`.
An alternative is running the first analysis locally, e.g. with maven
mvn clean install sonar:sonar -Dsonar.host.url=https://sonarcloud.io -Dsonar.organization=YOUR-ORG -Dsonar.login=YOUR-TOKEN
SonarCloud is a public SonarQube instance that has some extra features, such as pull request decoration for GitHub, BitBucket, etc.
ces-build-lib encapsulates the setup in the `SonarCloud` class.
It works just like `SonarQube`, i.e. you can create it using `sonarQubeEnv`, `token`, etc. and it provides the `analyzeWith()` and `waitForQualityGateWebhookToBeCalled()` methods.
The only difference: You either have to pass your organization ID using the `sonarOrganization: 'YOUR_ID'` parameter during construction, or configure it under `https://yourJenkinsInstance/configure` as "Additional analysis properties" (hit the "Advanced..." button to get there): `sonar.organization=YOUR_ID`.
Example using SonarCloud:
def sonarQube = new SonarCloud(this, [sonarQubeEnv: 'sonarcloud.io', sonarOrganization: 'YOUR_ID'])
sonarQube.analyzeWith(new MavenInDocker(this, "3.5.0-jdk-8"))
if (!sonarQube.waitForQualityGateWebhookToBeCalled()) {
unstable("Pipeline unstable due to SonarCloud quality gate failure")
}
Just like for ordinary SonarQube, you have to set up a webhook in SonarCloud for `waitForQualityGateWebhookToBeCalled()` to work (see above).
If you want SonarCloud to decorate your Pull Requests, you will have to
- GitHub: Install the SonarCloud Application for GitHub into your GitHub organization or account.
- BitBucket: Install the SonarCloud add-on for Bitbucket Cloud into your BitBucket team or account.
Note that ces-build-lib supports only Git repos for now. No mercurial/hg, sorry.
See also Pull Request analysis.
Note that SonarCloud uses the Branch Plugin, so the first analysis has to be done differently, as described in Branches.
As described above, SonarCloud can annotate PullRequests using the SonarCloud Application for GitHub. It is no longer possible to do this from a regular community edition SonarQube, as the GitHub Plugin for SonarQube is deprecated.
So a PR build is treated just like any other. That is,

- without branch plugin: A new project using the `BRANCH_NAME` from env is created.
- with Branch Plugin: A new branch is analysed using the `BRANCH_NAME` from env.

The Jenkins GitHub Plugin sets `BRANCH_NAME` to the PR name, e.g. `PR-42`.
Provides the functionality to read changes of a specific version in a changelog that is based on the changelog format on https://keepachangelog.com/.
Note: The changelog will automatically be formatted. Characters like `"`, `'`, `\` will be removed. A `\n` will be replaced with `\\n`. This is done to make it possible to pass this string to a JSON struct as a value.
Example:
Changelog changelog = new Changelog(this)
stage('Changelog') {
String changes = changelog.getChangesForVersion('v1.0.0')
// ...
}
You can optionally pass the path to the changelog file if it is located somewhere else than in the root path or if the file name is not `CHANGELOG.md`.
Example:
Changelog changelog = new Changelog(this, 'myNewChangelog.md')
stage('Changelog') {
String changes = changelog.getChangesForVersion('v1.0.0')
// ...
}
Provides the functionality to make changes to a GitHub repository, such as creating a new release.
Example:
Git git = new Git(this)
GitHub github = new GitHub(this, git)
stage('Github') {
github.createRelease('v1.1.1', 'Changes for version v1.1.1')
}
- `github.createRelease(releaseVersion, changes [, productionBranch])` - Creates a release on GitHub. Returns the GitHub Release-ID.
  - Use the `releaseVersion` (String) as name and tag.
  - Use the `changes` (String) as body of the release.
  - Optionally, use `productionBranch` (String) as the name of the production release branch. This defaults to `master`.
- `github.createReleaseWithChangelog(releaseVersion, changelog [, productionBranch])` - Creates a release on GitHub. Returns the GitHub Release-ID.
  - Use the `releaseVersion` (String) as name and tag.
  - Use the `changelog` (Changelog) to extract the changes out of a changelog and add them to the body of the release.
  - Optionally, use `productionBranch` (String) as the name of the production release branch. This defaults to `master`.
- `github.addReleaseAsset(releaseId, filePath)`
  - The `releaseId` (String) is the unique identifier of a release in the GitHub API. Can be obtained as return value of `createReleaseWithChangelog` or `createRelease`.
  - The `filePath` specifies the path to the file which should be uploaded.
- `pushPagesBranch('folderToPush', 'commit Message')` - Commits and pushes a folder to the `gh-pages` branch of the current repo. Can be used to conveniently deliver websites. See https://pages.github.com. Note:
  - Uses the name and email of the last committer as author and committer.
  - The `gh-pages` branch is temporarily checked out to the `.gh-pages` folder.
  - Don't forget to create a git object with credentials.
  - Optional: You can deploy to a sub folder of your GitHub Pages branch using a third parameter.
  - Examples:
    - See also Cloudogu Blog: Continuous Delivery with reveal.js
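To illustrate how these methods fit together, a release stage could look roughly like this (a sketch; the credential ID, version and asset path are placeholders):

Git git = new Git(this, 'myGitHubCredentials')
GitHub github = new GitHub(this, git)
Changelog changelog = new Changelog(this)

// Create the release from the changelog, then attach a build artifact to it
String releaseId = github.createReleaseWithChangelog('v1.2.3', changelog)
github.addReleaseAsset(releaseId, 'target/my-app-1.2.3.jar')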
A wrapper class around the Git class to simplify the use of the git flow branching model.
Example:
Git git = new Git(this)
git.committerName = 'jenkins'
git.committerEmail = '[email protected]'
GitFlow gitflow = new GitFlow(this, git)
stage('Gitflow') {
if (gitflow.isReleaseBranch()){
gitflow.finishRelease(git.getSimpleBranchName())
}
}
- `gitflow.isReleaseBranch()` - Checks if the currently checked out branch is a gitflow release branch.
- `gitflow.finishRelease(releaseVersion [, productionBranch])` - Finishes a git release by merging into develop and the production release branch (default: "master").
  - Use the `releaseVersion` (String) as the name of the new git release.
  - Optionally, use `productionBranch` (String) as the name of the production release branch. This defaults to `master`.
Provides the functionality to handle pull requests on an SCM-Manager repository.
You need to pass `usernamePassword` (i.e. a String containing the ID that refers to the Jenkins credentials) to `SCMManager` during construction. These are then used for handling the pull requests.
SCMManager scmm = new SCMManager(this, 'ourCredentials')
Set the repository URL through the `repositoryUrl` property like so:
SCMManager scmm = new SCMManager(this, 'https://hostname/scm', 'ourCredentials')
Each method requires a `repository` parameter, a String containing namespace and name, e.g. `cloudogu/ces-build-lib`.
- `scmm.searchPullRequestIdByTitle(repository, title)` - Returns a pull request ID by title, or empty, if not present.
  - Use the `repository` (String) as the GitOps repository.
  - Use the `title` (String) as the title of the pull request in question.
  - This method requires the `readJSON()` step from the Pipeline Utility Steps plugin.
- `scmm.createPullRequest(repository, source, target, title, description)` - Creates a pull request and returns its ID (see the example below).
  - Use the `repository` (String) as the GitOps repository.
  - Use the `source` (String) as the source branch of the pull request.
  - Use the `target` (String) as the target branch of the pull request.
  - Use the `title` (String) as the title of the pull request.
  - Use the `description` (String) as the description of the pull request.
- `scmm.updatePullRequest(repository, pullRequestId, title, description)` - Updates the pull request.
  - Use the `repository` (String) as the GitOps repository.
  - Use the `pullRequestId` (String) as the ID of the pull request.
  - Use the `title` (String) as the title of the pull request.
  - Use the `description` (String) as the description of the pull request.
- `scmm.createOrUpdatePullRequest(repository, source, target, title, description)` - Creates a pull request if no PR is found or updates the existing one.
  - Use the `repository` (String) as the GitOps repository.
  - Use the `source` (String) as the source branch of the pull request.
  - Use the `target` (String) as the target branch of the pull request.
  - Use the `title` (String) as the title of the pull request.
  - Use the `description` (String) as the description of the pull request.
- `scmm.addComment(repository, pullRequestId, comment)` - Adds a comment to a pull request.
  - Use the `repository` (String) as the GitOps repository.
  - Use the `pullRequestId` (String) as the ID of the pull request.
  - Use the `comment` (String) as the comment to add to the pull request.
Example:
def scmm = new SCMManager(this, 'https://your.ecosystem.com/scm', scmManagerCredentials)
def pullRequestId = scmm.createPullRequest('cloudogu/ces-build-lib', 'feature/abc', 'develop', 'My title', 'My description')
pullRequestId = scmm.searchPullRequestIdByTitle('cloudogu/ces-build-lib', 'My title')
scmm.updatePullRequest('cloudogu/ces-build-lib', pullRequestId, 'My new title', 'My new description')
scmm.addComment('cloudogu/ces-build-lib', pullRequestId, 'A comment')
`HttpClient` provides a simple `curl` frontend for Groovy.

- Not surprisingly, it requires `curl` on the Jenkins agents.
- If you need to authenticate, you can create an `HttpClient` with an optional credentials ID (`usernamePassword` credentials).
- `HttpClient` provides `get()`, `put()` and `post()` methods.
- All methods have the same signature, e.g. `http.get(url, contentType = '', data = '')`
  - `url` (String)
  - optional `contentType` (String) - set as accept header in the request
  - optional `data` (Object) - sent in the body of the request
- If successful, all methods return the same data structure, a map of
  - `httpCode` - a string containing the HTTP status code
  - `headers` - a map containing the response headers, e.g. `[ location: 'https://url' ]`
  - `body` - an optional string containing the body of the response
- In case of an error (connection refused, could not resolve host, etc.) an exception is thrown which fails the build right away. If you don't want the build to fail, wrap the call in a `try`/`catch` block.
Example:
HttpClient http = new HttpClient(this, 'myCredentialID')

// Simplest example
echo http.get('https://url')

// POSTing data
def dataJson = JsonOutput.toJson([
    comment: comment
])
def response = http.post('https://url/comments', 'application/json', dataJson)

if (response.httpCode == '201' && response.headers['content-type'] == 'application/json') {
    def json = readJSON text: response.body
    echo "${json.count}"
}
`K3d` provides functions to set up and administer a local k3s cluster in Docker.
Example:
K3d k3d = new K3d(this, env.WORKSPACE, env.PATH)
try {
stage('Set up k3d cluster') {
k3d.startK3d()
}
stage('Do something with your cluster') {
k3d.kubectl("get nodes")
}
stage('Apply your Helm chart') {
k3d.helm("install path/to/your/chart")
}
stage('build and push development artefact') {
String myCurrentArtefactVersion = "yourTag-1.2.3-dev"
imageName = k3d.buildAndPushToLocalRegistry("your/image", myCurrentArtefactVersion)
// your image name may look like this: k3d-citest-123456/your/image:yourTag-1.2.3-dev
// the image name can be applied to your cluster as usual, f. i. with k3d.kubectl() with a customized K8s resource
}
stage('configure components'){
// add additional components
k3d.configureComponents(["k8s-minio" : ["version": "latest", "helmRepositoryNamespace": "k8s"],
"k8s-loki" : ["version": "latest", "helmRepositoryNamespace": "k8s"],
"k8s-promtail" : ["version": "latest", "helmRepositoryNamespace": "k8s"],
"k8s-blueprint-operator": null, // null values will delete components from the config
])
}
stage('execute k8s-ces-setup') {
k3d.setup('0.20.0')
}
stage('install resources and wait for them') {
imageName = "registry.cloudogu.com/official/my-dogu-name:1.0.0"
k3d.installDogu("my-dogu-name", imageName, myDoguResourceYamlFile)
k3d.waitForDeploymentRollout("my-dogu-name", 300, 5)
}
stage('install a dependent dogu by applying a dogu resource') {
k3d.applyDoguResource("my-dependency", "nyNamespace", "10.0.0-1")
k3d.waitForDeploymentRollout("my-dependency", 300, 5)
}
} catch (Exception e) {
// in case of a failed build collect dogus, resources and pod logs and archive them as log file on the build.
k3d.collectAndArchiveLogs()
throw e
} finally {
stage('Remove k3d cluster') {
k3d.deleteK3d()
}
}
`DoguRegistry` provides functions to easily push dogus and k8s components to a configured registry.
Example:
DoguRegistry registry = new DoguRegistry(this)
// push dogu
registry.pushDogu()
// push k8s component
registry.pushK8sYaml("pathToMyK8sYaml.yaml", "k8s-dogu-operator", "mynamespace", "0.9.0")
`Bats` provides functions to easily execute existing bats tests for a project.
Example:
Docker docker = new Docker(this)
stage('Bats Tests') {
Bats bats = new Bats(this, docker)
bats.checkAndExecuteTests()
}
`Makefile` provides functions regarding the `Makefile` from the current directory.
Example:
Makefile makefile = new Makefile(this)
String currentVersion = makefile.getVersion()
`Markdown` provides functions regarding the Markdown files from the project's docs directory.
Markdown markdown = new Markdown(this)
markdown.check()
`markdown.check()` executes the check defined in `Markdown`: it runs a container with the latest https://github.com/tcort/markdown-link-check image and verifies that the links in the defined project directory are alive.
Additionally, the markdown link checker can be used with a specific version (default: stable).
Markdown markdown = new Markdown(this, "3.11.0")
markdown.check()
Use Dockerfile.lint() instead of lintDockerfile()! See Dockerfile
lintDockerfile() // uses Dockerfile as default; optional parameter
See lintDockerFile
shellCheck() // search for all .sh files in folder and runs shellcheck
shellCheck(fileList) // fileList="a.sh b.sh" execute shellcheck on a custom list
See shellCheck
Provides the functionality of the Jenkins Post-build Action "E-mail Notification" known from freestyle projects.
catchError {
// Stages and steps
}
mailIfStatusChanged('[email protected],[email protected]')
Returns `true` if the current build is a pull request (when the `CHANGE_ID` environment variable is set).
Tested with GitHub.
stage('SomethingToSkipWhenInPR') {
if (!isPullRequest()) {
// ...
}
}
Determines the email recipients: For branches that are considered unstable (all except for 'master' and 'develop') only the Git author is returned (if present). Otherwise, the default recipients (passed as parameter) and git author are returned.
catchError {
// Stages and steps
}
mailIfStatusChanged(findEmailRecipients('[email protected],[email protected]'))
The example sends status-change emails to '[email protected],[email protected]' plus the git author for stable branches, and only to the git author for unstable branches.
Returns the hostname of the current Jenkins instance.
For example, if running on `http(s)://server:port/jenkins`, `server` is returned.
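A minimal usage sketch:

echo "This build runs on Jenkins host ${findHostName()}"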
Returns true if the build is successful, i.e. not failed or unstable (yet).
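For example, this can be used to guard steps that should only run while the build is still healthy (a sketch; the deployment call is just an illustration and assumes an `mvn` instance as shown above):

stage('Deploy') {
    if (isBuildSuccessful()) {
        mvn.deployToNexusRepository()
    }
}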
Returns a list of vulnerabilities or an empty list if there are no vulnerabilities for the given severity.
findVulnerabilitiesWithTrivy(trivyConfig as Map)
trivyConfig = [
imageName: 'alpine:3.17.2',
severity: [ 'HIGH', 'CRITICAL' ],
trivyVersion: '0.41.0',
additionalFlags: '--ignore-unfixed'
]
Here, the only mandatory field is `imageName`. If no imageName is passed, the function returns an empty list.
- imageName (string): The name of the image to be scanned
- severity (list of strings): If left blank all severities will be shown. If one or more are specified only these will be shown i.e. if 'HIGH' is passed then only vulnerabilities with the 'HIGH' score are shown
- trivyVersion (string): The version of the trivy image
- additionalFlags (string): Additional flags for trivy, e.g. `--ignore-unfixed`
node {
stage('Scan Vulns') {
def vulns = findVulnerabilitiesWithTrivy(imageName: 'alpine:3.17.2')
if (vulns.size() > 0) {
archiveArtifacts artifacts: '.trivy/trivyOutput.json'
unstable "Found ${vulns.size()} vulnerabilities in image. See vulns.json"
}
}
}
If you want to ignore / allow certain vulnerabilities, please use a `.trivyignore` file. Provide the file in your repo / directory where you run your job, e.g.:
.gitignore
Jenkinsfile
.trivyignore
# Accept the risk
CVE-2018-14618
# Accept the risk until 2023-01-01
CVE-2019-14697 exp:2023-01-01
# No impact in our settings
CVE-2019-1543
# Ignore misconfigurations
AVD-DS-0002
# Ignore secrets
generic-unwanted-rule
aws-account-id
If there are vulnerabilities the output looks as follows.
{
"SchemaVersion": 2,
"ArtifactName": "alpine:3.17.2",
"ArtifactType": "container_image",
"Metadata": {
"OS": {
"Family": "alpine",
"Name": "3.17.2"
},
"ImageID": "sha256:b2aa39c304c27b96c1fef0c06bee651ac9241d49c4fe34381cab8453f9a89c7d",
"DiffIDs": [
"sha256:7cd52847ad775a5ddc4b58326cf884beee34544296402c6292ed76474c686d39"
],
"RepoTags": [
"alpine:3.17.2"
],
"RepoDigests": [
"alpine@sha256:ff6bdca1701f3a8a67e328815ff2346b0e4067d32ec36b7992c1fdc001dc8517"
],
"ImageConfig": {
"architecture": "amd64",
"container": "4ad3f57821a165b2174de22a9710123f0d35e5884dca772295c6ebe85f74fe57",
"created": "2023-02-11T04:46:42.558343068Z",
"docker_version": "20.10.12",
"history": [
{
"created": "2023-02-11T04:46:42.449083344Z",
"created_by": "/bin/sh -c #(nop) ADD file:40887ab7c06977737e63c215c9bd297c0c74de8d12d16ebdf1c3d40ac392f62d in / "
},
{
"created": "2023-02-11T04:46:42.558343068Z",
"created_by": "/bin/sh -c #(nop) CMD [\"/bin/sh\"]",
"empty_layer": true
}
],
"os": "linux",
"rootfs": {
"type": "layers",
"diff_ids": [
"sha256:7cd52847ad775a5ddc4b58326cf884beee34544296402c6292ed76474c686d39"
]
},
"config": {
"Cmd": [
"/bin/sh"
],
"Env": [
"PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
],
"Image": "sha256:ba2beca50019d79fb31b12c08f3786c5a0621017a3e95a72f2f8b832f894a427"
}
}
},
"Results": [
{
"Target": "alpine:3.17.2 (alpine 3.17.2)",
"Class": "os-pkgs",
"Type": "alpine",
"Vulnerabilities": [
{
"VulnerabilityID": "CVE-2023-0464",
"PkgID": "[email protected]",
"PkgName": "libcrypto3",
"InstalledVersion": "3.0.8-r0",
"FixedVersion": "3.0.8-r1",
"Layer": {
"DiffID": "sha256:7cd52847ad775a5ddc4b58326cf884beee34544296402c6292ed76474c686d39"
},
"SeveritySource": "nvd",
"PrimaryURL": "https://avd.aquasec.com/nvd/cve-2023-0464",
"DataSource": {
"ID": "alpine",
"Name": "Alpine Secdb",
"URL": "https://secdb.alpinelinux.org/"
},
"Title": "Denial of service by excessive resource usage in verifying X509 policy constraints",
"Description": "A security vulnerability has been identified in all supported versions of OpenSSL related to the verification of X.509 certificate chains that include policy constraints. Attackers may be able to exploit this vulnerability by creating a malicious certificate chain that triggers exponential use of computational resources, leading to a denial-of-service (DoS) attack on affected systems. Policy processing is disabled by default but can be enabled by passing the `-policy' argument to the command line utilities or by calling the `X509_VERIFY_PARAM_set1_policies()' function.",
"Severity": "HIGH",
"CweIDs": [
"CWE-295"
],
"CVSS": {
"nvd": {
"V3Vector": "CVSS:3.1/AV:N/AC:L/PR:N/UI:N/S:U/C:N/I:N/A:H",
"V3Score": 7.5
},
"redhat": {
"V3Vector": "CVSS:3.1/AV:N/AC:H/PR:N/UI:N/S:U/C:N/I:N/A:H",
"V3Score": 5.9
}
},
"References": [
"https://access.redhat.com/security/cve/CVE-2023-0464",
"https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2023-0464",
"https://git.openssl.org/gitweb/?p=openssl.git;a=commitdiff;h=2017771e2db3e2b96f89bbe8766c3209f6a99545",
"https://git.openssl.org/gitweb/?p=openssl.git;a=commitdiff;h=2dcd4f1e3115f38cefa43e3efbe9b801c27e642e",
"https://git.openssl.org/gitweb/?p=openssl.git;a=commitdiff;h=879f7080d7e141f415c79eaa3a8ac4a3dad0348b",
"https://git.openssl.org/gitweb/?p=openssl.git;a=commitdiff;h=959c59c7a0164117e7f8366466a32bb1f8d77ff1",
"https://nvd.nist.gov/vuln/detail/CVE-2023-0464",
"https://ubuntu.com/security/notices/USN-6039-1",
"https://www.cve.org/CVERecord?id=CVE-2023-0464",
"https://www.openssl.org/news/secadv/20230322.txt"
],
"PublishedDate": "2023-03-22T17:15:00Z",
"LastModifiedDate": "2023-03-29T19:37:00Z"
}
]
}
]
}
- This library is built using itself! See Jenkinsfile
- cloudogu/cas
- cloudogu/command-bus
- cloudogu/continuous-delivery-slides-example
The Cloudogu EcoSystem is an open platform, which lets you choose how and where your team creates great software. Each service or tool is delivered as a Dogu, a Docker container. Each Dogu can easily be integrated in your environment just by pulling it from our registry.
We have a growing number of ready-to-use Dogus, e.g. SCM-Manager, Jenkins, Nexus Repository, SonarQube, Redmine and many more. Every Dogu can be tailored to your specific needs. Take advantage of a central authentication service, a dynamic navigation, that lets you easily switch between the web UIs and a smart configuration magic, which automatically detects and responds to dependencies between Dogus.
The Cloudogu EcoSystem is open source and it runs either on-premises or in the cloud. The Cloudogu EcoSystem is developed by Cloudogu GmbH under AGPL-3.0-only.
Copyright © 2020 - present Cloudogu GmbH This program is free software: you can redistribute it and/or modify it under the terms of the GNU Affero General Public License as published by the Free Software Foundation, version 3. This program is distributed in the hope that it will be useful, but WITHOUT ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU Affero General Public License for more details. You should have received a copy of the GNU Affero General Public License along with this program. If not, see https://www.gnu.org/licenses/. See LICENSE for details.
MADE WITH ❤️ FOR DEV ADDICTS. Legal notice / Imprint