11.02.2020

Docs As Code – Continuous Delivery of Presentations with reveal.js and Jenkins – Part 2

The first part of this series demonstrated the use cases and benefits of delivering presentations with reveal.js. Treated as Docs as Code, presentations can be put under version control and, of course, delivered via Continuous Delivery. It also showed a concrete implementation of a Jenkins pipeline that deploys to GitHub Pages. This article demonstrates additional deployment targets (Sonatype Nexus and Kubernetes), while the general structure of the Jenkinsfile remains the same.

Deployment to Nexus

Maven mvn = new MavenInDocker(this, "3.5.0-jdk-8")
String versionName = createVersion(mvn)

stage('Deploy Nexus') {
    mvn.useDeploymentRepository([
            id: 'ecosystem.cloudogu.com',
            credentialsId: 'ces-nexus'
    ])

    mvn.deploySiteToNexus("-Dartifact=${env.BRANCH_NAME} ")
}

In non-public contexts (such as in-house company presentations), public deployment to GitHub is not an option. If a Nexus repository is already available in-house, the Maven Site mechanism can be used to upload the presentation there.

To do this, you need:

  • A Nexus repository in raw format (Cloudogu-Docs in the example),
  • A pom.xml, which essentially configures where the site is deployed to and
  • A user account that has write access to the Nexus repository.

Generally speaking, a Maven site is deployed using mvn site:deploy, with the user account and password defined in settings.xml. Experience has shown that the latter is cumbersome to set up on CI servers. Here, too, the implementation details are left to ces-build-lib: the step Maven.deploySiteToNexus() is simply called. As before, Maven runs in Docker via the MavenInDocker class from ces-build-lib.
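For comparison, without ces-build-lib the credentials would classically live in the settings.xml on the CI server. A minimal sketch of such an entry (username and password are placeholders; the server id must match the repository id from the pom.xml shown below):

```xml
<!-- settings.xml: classic credential configuration for mvn site:deploy -->
<settings>
  <servers>
    <server>
      <!-- Must match <distributionManagement><site><id> in the pom.xml -->
      <id>ecosystem.cloudogu.com</id>
      <username>nexus-user</username>
      <password>changeit</password>
    </server>
  </servers>
</settings>
```

Maintaining such a file (and keeping the password out of version control) on every build agent is exactly the chore that ces-build-lib's useDeploymentRepository() avoids by reading the credentials from Jenkins instead.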

For this to work, the following must first be passed via Maven.useDeploymentRepository():

  • The ID of the repository (which refers to the definition in pom.xml; see below) and
  • the ID of the "Username with password" credentials maintained in Jenkins (ces-nexus in the example, i.e. credentialsId: 'ces-nexus'). These belong to a user in Nexus that is authorized by a role to write to the repository. In Nexus 3, the role needs the privileges nx-repository-view-raw-<RepoName>-add and -edit (e.g., nx-repository-view-raw-Cloudogu-Docs-add).

The following are the essential points in pom.xml (see GitHub for the complete example):

<groupId>com.cloudogu.slides</groupId>
<artifactId>${artifact}</artifactId>
<version>${revision}</version>
<packaging>pom</packaging>

<url>https://ecosystem.cloudogu.com/nexus/repository/Cloudogu-Docs/${project.groupId}/${project.artifactId}/${project.version}/</url>

<distributionManagement>
    <site>
        <id>ecosystem.cloudogu.com</id>
        <name>site repository ecosystem.cloudogu.com</name>
        <url>dav:https://ecosystem.cloudogu.com/nexus/repository/Cloudogu-Docs/${project.groupId}/${project.artifactId}/${project.version}/</url>
    </site>
</distributionManagement>

<properties>
    <revision>-SNAPSHOT</revision>
    <artifact>template</artifact>
</properties>

The Maven coordinates (groupId, artifactId and version) are used here to define the URL of the presentation in the Nexus repository. For example, the URL of a version deployed from the master branch looks like this: https://ecosystem.cloudogu.com/nexus/repository/Cloudogu-Docs/com.cloudogu.slides/master/201904291351-dd1df3d7/

The Maven feature "CI Friendly Versions" is used to set the coordinates dynamically during the build via system properties (e.g., -Dartifact=abc on the command line):

  • artifactId is used to represent the name of the current branch in the URL (thereby giving each branch its own URL) and
  • revision determines the version, which is recalculated in each build. The value is already passed in createVersion() (see the first part of this series): mvn.additionalArgs = "-Drevision=${versionName} "
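Putting the two properties together, the call that the pipeline effectively assembles looks roughly like this (version and branch name are example values; in the pipeline, revision comes from createVersion() and artifact from env.BRANCH_NAME):

```shell
# Effective deployment call assembled by the pipeline (example values)
mvn site:deploy -Drevision=201904291351-dd1df3d7 -Dartifact=master
```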

Deployment to Kubernetes

Another alternative is deployment as a container to Kubernetes. Anyone who has already configured a cluster can easily deploy the presentation as an additional application. For everyone else, this example can serve as the first, simple example application for continuous delivery with Jenkins and Kubernetes.

Since the build is already defined in the Jenkinsfile, the Dockerfile (the build plan for the Docker image) is manageable:

FROM bitnami/nginx:1.14.2
COPY . /app/

Nginx, a tried and tested web server that serves a large share of all websites, is used here. However, the official image is not used as the base image. Instead, we use the Bitnami image, which puts a focus on security. Unlike the official image, it does not run as the root user and was therefore not affected by, for example, the serious container-breakout vulnerability in runc (CVE-2019-5736) that hit many Kubernetes installations.

A caveat with this Dockerfile: COPY . /app/ copies the entire workspace into the image, which Nginx would then serve at runtime. The Jenkinsfile and the k8s.yaml, for example, would thus be available for download, which is a security risk! Therefore, a .dockerignore file is additionally maintained:

**
!index.html
#...
  • Start with ** (ignore everything),
  • then re-include the desired files and folders using negations (!).
  • This is effectively a whitelisting approach.

In order for the image on Kubernetes to be able to start a container that can be accessed from the outside, the following is also required:

  • A deployment (a template for pods in which containers are executed),
  • A service (fixed endpoint (IP address/DNS name) for potentially short-lived pods) and
  • An ingress (mapping of host name to service for incoming requests) – This, of course, requires a configured ingress controller (such as Træfik).

These are all defined in the file k8s.yaml. The interesting part: a placeholder is used as the image name in the deployment (image: $IMAGE_NAME), which the pipeline uses to insert the current version of the image.
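A minimal sketch of what such a k8s.yaml could look like (resource names, ports, and the host name are assumptions for illustration; the complete file is in the example repository; bitnami/nginx listens on port 8080 because it does not run as root):

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: slides
spec:
  replicas: 1
  selector:
    matchLabels:
      app: slides
  template:
    metadata:
      labels:
        app: slides
    spec:
      containers:
        - name: slides
          # Placeholder, replaced by the pipeline via enableConfigSubstitution
          image: $IMAGE_NAME
          ports:
            - containerPort: 8080
---
apiVersion: v1
kind: Service
metadata:
  name: slides
spec:
  selector:
    app: slides
  ports:
    - port: 80
      targetPort: 8080
---
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: slides
spec:
  rules:
    - host: slides.example.com
      http:
        paths:
          - backend:
              serviceName: slides
              servicePort: 80
```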

String versionName = createVersion(mvn)
//...
stage('Deploy Kubernetes') {
    deployToKubernetes(versionName)
}

// ...
void deployToKubernetes(String versionName) {

    String imageName = "cloudogu/continuous-delivery-slides-example:${versionName}"
    def image = docker.build imageName
    docker.withRegistry('', 'hub.docker.com-cesmarvin') {
        image.push()
        image.push('latest')
    }

    withCredentials([file(credentialsId: 'kubeconfig-oss-deployer', variable: 'kubeconfig')]) {

        withEnv(["IMAGE_NAME=$imageName"]) {

            kubernetesDeploy(
                    credentialsType: 'KubeConfig',
                    kubeConfig: [path: kubeconfig],

                    configs: 'k8s.yaml',
                    enableConfigSubstitution: true
            )
        }
    }
}

In the Jenkinsfile, the image is first built using Jenkins' Docker tooling and then pushed to a registry (Docker Hub in the example). The finished image can be viewed here. The current version name is set both as a Docker tag and as latest. The latter is not mandatory, but it is good practice in Docker registries. The necessary user account is created in Jenkins as "Username with password" credentials, and its ID (hub.docker.com-cesmarvin in the example) is passed to the docker.withRegistry() step. The user account requires write permission for the image in the Docker registry (cloudogu/continuous-delivery-slides-example in the example).

Now the image name has to be inserted into the Kubernetes deployment, which is then passed to the cluster. Both steps are carried out by the kubernetesDeploy() step, which is provided by the Kubernetes Continuous Deploy plugin. enableConfigSubstitution specifies that all entries with the $VARIABLE syntax in the YAML files are replaced by corresponding environment variables from the Jenkins pipeline (IMAGE_NAME in the example). Authentication against the cluster is also required. Here, the kubeconfig file known from the CLI tool kubectl is used; it is stored in Jenkins as "Secret file" credentials (with the ID kubeconfig-oss-deployer in the example).
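Conceptually, the kubernetesDeploy() step does something similar to the following shell commands (a simplified sketch only; the plugin performs the variable substitution itself rather than relying on envsubst):

```shell
# Simplified sketch of what the plugin does internally:
# substitute $IMAGE_NAME in k8s.yaml, then apply it using the given kubeconfig
export IMAGE_NAME=cloudogu/continuous-delivery-slides-example:201904291351-dd1df3d7
envsubst < k8s.yaml | kubectl --kubeconfig "$kubeconfig" apply -f -
```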

More details about how to create the credentials for the Container registry and Kubernetes in Jenkins can be found in the 4th part of our series of articles about Jenkins Pipelines.

With just a few lines of Jenkinsfile code, we have thus automated supposedly complicated tasks such as Docker image creation and Kubernetes deployment.

Conclusion

To conclude this series of articles, in this part we showed more examples of target deployments of reveal.js presentations using the Jenkins Continuous Delivery Pipeline. Specifically, this part describes how to deploy to Sonatype Nexus and Kubernetes.

If you look beyond its immediate benefits as a browser-based presentation solution, you will see how this solution allows you to achieve Continuous Delivery for web applications. The article offers a selection of options that you can also use in productive systems depending on the application case and available infrastructure.

  • If you simply want to deploy a static website that is only available in-house in an enterprise environment, you can easily use Nexus here.
  • If static content can or should be made public, deployment to GitHub Pages is the way to go.
  • Kubernetes offers the greatest flexibility. It can be used to host internal or external, static or dynamic content. However, operating the cluster is more complex. Anyone who wants to learn more about this solution can do so in our training area.
