Blogpost

21.04.2023

DevOps

Coding Continuous Delivery: CIOps vs. GitOps with Jenkins

Continuous delivery (CD) is an agile software development approach that has proven to be a suitable way to reliably and repeatably produce high-quality software in short cycles. The use of containers and the cloud, e.g., on platforms such as Kubernetes (K8s), offers many opportunities to make CD processes more robust and simpler. One such option is GitOps. This article provides some concrete examples to illustrate the differences between classic CD pipelines (CIOps) and GitOps processes.

CD automation is done using pipelines on continuous integration (CI) servers, such as Jenkins. There are two use cases where the use of containers provides advantages:

  1. When executing the pipeline, tools can be executed in containers without the need for any further CI server configuration. Applications can also be run in isolation in containers for testing to avoid port conflicts, for instance.
  2. Container images are a standardized artifact in which the application is packaged through the pipeline.

These images can be deployed in many different operating environments, since the Open Container Initiative (OCI) has standardized both Docker containers and images as well as the registry API. In recent years, container orchestration platforms have proven to be a flexible tool for deploying OCI images, especially in the DevOps environment. K8s has emerged as a de facto standard, which is why this article focuses on the example of deployment to K8s.

Classic CD pipelines – CIOps

In the case of a “classic” CD pipeline, the CI server actively performs the deployment to the operating environment (see Figure 1). To distinguish it from more recently developed methods, such as GitOps (see below), this procedure is also referred to as “CIOps.” It is sometimes portrayed as an [anti-pattern](https://www.weave.works/blog/kubernetes-anti-patterns-let-s-do-gitops-not-ciops "Kubernetes anti-pattern, Weaveworks blog"). However, the procedure has proven its worth in practice for many years, and, in general, there is nothing wrong with it.

Classic CD pipelines (CIOps) Figure 1: Classic CD pipelines (CIOps)

An easy-to-implement logic for automating deployment in a CIOps pipeline with staging and production environments is the use of branches in Git. In adopting this approach, many teams use feature branches or Git Flow, where the integrated development level flows together in the develop branch, and the main (or master) branch contains the productive versions. It can be used as a basis for a CD strategy: every push to develop leads to a deployment in the staging environment, every push to main goes into production. Thus, the last integrated version in staging is always available for functional or manual testing. A pull request (PR) or merge to main then initiates the deployment to production. Furthermore, it is possible to have a deployment per feature branch.

The disadvantage of this approach is that each deployment requires a build in the CI server. This slows down the process unnecessarily: since the same artifact is supposed to be deployed to all stages, no new build, test, or even version name would actually be necessary. The deployment logic itself is easy to implement with Jenkins Pipeline, because the branch name can be queried from the environment in multibranch builds. A detailed example showing a full implementation with Jenkins is described in this post; the full Jenkinsfile can be viewed on GitHub.
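A minimal sketch of such branch-based deployment logic in a Jenkinsfile might look like this (stage names, paths, and deploy commands are illustrative assumptions, not the exact pipeline from the linked example):

```groovy
// Jenkinsfile for a multibranch pipeline – illustrative sketch
pipeline {
    agent any
    stages {
        stage('Build & Test') {
            steps {
                sh './mvnw verify'           // build and test the application
            }
        }
        stage('Deploy Staging') {
            when { branch 'develop' }        // every push to develop → staging
            steps {
                sh 'kubectl apply -f k8s/ --namespace=staging'
            }
        }
        stage('Deploy Production') {
            when { branch 'main' }           // every merge to main → production
            steps {
                sh 'kubectl apply -f k8s/ --namespace=production'
            }
        }
    }
}
```

The `when { branch '...' }` conditions are what ties the Git Flow branches to the stages: the same pipeline definition serves all branches of the multibranch project.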

GitOps vs. CIOps

There is now an alternative to CIOps in the K8s environment: GitOps. Here, a cloud-native application running in the K8s cluster (the “GitOps operator”) continuously compares the actual state of the cluster with the desired state described in a Git repository. Deployments are triggered by a push to this repository, such as by accepting a PR (see Figure 2). There are a few advantages to GitOps:

  • Less write access to the cluster from outside, because the GitOps operator performs deployments from within the cluster
  • No credentials in the CI server, because access to the cluster is not necessary
  • Infrastructure as Code (IaC) offers advantages for auditing and reproducibility; furthermore, the cluster and Git are automatically kept in sync
  • From an organizational point of view, it is often easier to gain access to Git than to the API server; there may be no need to open ports in the firewall
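To illustrate the declarative desired state, with ArgoCD it can be expressed roughly like this (repository URL, paths, and names are placeholders, not taken from the examples in this article):

```yaml
# ArgoCD Application – illustrative sketch; the GitOps operator keeps
# the cluster in sync with the Git repository declared here
apiVersion: argoproj.io/v1alpha1
kind: Application
metadata:
  name: my-app
  namespace: argocd
spec:
  project: default
  source:
    repoURL: https://example.com/org/gitops-repo.git   # GitOps repository
    targetRevision: main
    path: staging/my-app
  destination:
    server: https://kubernetes.default.svc             # same cluster
    namespace: staging
  syncPolicy:
    automated: {}    # automatically apply changes pushed to the repo
```

A push to `staging/my-app` in the GitOps repository is then enough to trigger a deployment; no external tool ever writes to the cluster.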

Simple deployment using GitOps Figure 2: Simple deployment using GitOps

Role of the CI server in GitOps

A CI server is no longer necessary to deploy third-party applications (not developed in-house). Applications written in-house still must be built, tested, etc. This is still done using a CI server, just like pushing the image to a registry (see Figure 3). Moreover, the CI server can be used to solve some of the challenges of GitOps:

  • Local development with GitOps is less efficient, since running the operator locally makes deployment and debugging more cumbersome.
  • It can be cumbersome to manually implement the staging (a PR must be created for each stage).
  • GitOps often provides a central repository for storing infrastructure code. The advantage of this is that the entire state of the cluster is stored in one place. The downside: Separating application and infrastructure code into two repositories is more complex to maintain when it comes to reviewing, versioning, and local development, for example.

GitOps deployment of in-house developed images Figure 3: GitOps deployment of in-house developed images

It is possible to use the CI server to keep both in the application’s repository (hereinafter referred to as the app repository). The CI server can be used to push the infrastructure code to the GitOps repository (see Figure 4).

GitOps as an example

The implementation of a GitOps flow, as shown in Figure 4, may seem very easy to implement at first glance. But, as is so often the case, the devil is in the details: On the one hand, implementation challenges need to be addressed, and, on the other hand, other points quickly crop up that can be automated by the pipeline. In the end, the initially simple implementation of such a pipeline can quickly become costly.

Deployment with app repository and GitOps repository Figure 4: Deployment with app repository and GitOps repository

The biggest challenge is that multiple builds that write to the same GitOps repository can run concurrently. Reliably handling such concurrency issues is a surprisingly complex task. Once the pipeline is basically functioning, further automation can make the development process more efficient. Examples of such extensions will follow later.

Concrete examples of GitOps flows are provided by the GitOps Playground, which allows you to try out various GitOps operators, such as Flux (GitOps Toolkit) and ArgoCD (GitOps Engine), in a locally executable cluster in conjunction with Jenkins.

It also includes a Jenkins Pipeline (Before extracting to library; after extracting to library) that is similar to the CIOps example that was already mentioned. However, the pipelines differ fundamentally from each other in two respects:

  • The YAMLs are pushed to the GitOps repository instead of being applied to the cluster.
  • The different stages are implemented completely in the GitOps repository (not in the app repository). As a result, a CI server is no longer necessary for promotion to production, and the go-live is faster.

Figure 5 shows the basic folder structure of the GitOps repository: at the top level, there is one folder per stage, each of which contains one folder per application. The deployment then differs slightly depending on how the stages are implemented:

  • Staging namespaces in the same cluster (see GitOps Playground): There is only one GitOps operator, which deploys the K8s resources contained in the GitOps repository to the cluster. However, it must be ensured that the correct namespace is specified in the K8s resources. This is how the GitOps Playground solves it.
  • Alternatively, staging clusters can be used. The GitOps operator is configured in these clusters to deploy everything from the respective folder according to its stage. Example: The GitOps operator in the staging cluster deploys only K8s resources from the staging folder.
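Such a structure could look like this (application names are examples):

```
gitops-repo/
├── staging/
│   ├── my-app/
│   │   ├── deployment.yaml
│   │   └── service.yaml
│   └── nginx/
│       └── helmRelease.yaml
└── production/
    ├── my-app/
    │   ├── deployment.yaml
    │   └── service.yaml
    └── nginx/
        └── helmRelease.yaml
```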

Possible folder structure of a GitOps repository Figure 5: Possible folder structure of a GitOps repository

The flow of the mentioned pipeline from the GitOps Playground is as follows:

  1. A push to the main branch of the app repository triggers the GitOps process
  2. Clone the GitOps repository
  3. Staging: Update the image version in the deployment YAML, copy to the application folder of the stage, and push it to the main branch of the GitOps repository
  4. Production: Like in step 3, except here the changes are made in the “Production” folder and are pushed to a branch created specifically for the application in the GitOps repository. Finally, a PR to the main branch is opened.
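Condensed to its essence, the flow above might be sketched in a Jenkins scripted pipeline like this (repository URLs, paths, and shell commands are illustrative, not the actual GitOps Playground code):

```groovy
// Illustrative sketch of the GitOps flow triggered by a push to main
node {
    checkout scm                                            // 1. app repository
    def imageTag = "registry.example.com/my-app:${env.GIT_COMMIT}"

    dir('gitops') {
        git url: 'https://example.com/org/gitops-repo.git'  // 2. clone GitOps repo

        // 3. staging: update image version, copy to stage folder, push to main
        sh "sed -i 's|image:.*|image: ${imageTag}|' ../k8s/deployment.yaml"
        sh 'cp ../k8s/*.yaml staging/my-app/'
        sh 'git add . && git commit -m "Deploy my-app to staging" && git push origin main'

        // 4. production: same change on an app-specific branch, then open a PR
        sh 'git checkout -b my-app-production'
        sh 'cp ../k8s/*.yaml production/my-app/'
        sh 'git add . && git commit -m "Deploy my-app to production" && git push origin my-app-production'
        // opening the PR depends on the SCM, e.g. via its REST API
    }
}
```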

After this pipeline has been completed, the application is deployed to staging by the GitOps operator for review. Moreover, there is a PR that leads to direct deployment in production (without a CI server) when accepted.

The concurrency issues described above can occur between cloning the repository and pushing: if the remote repository has changed in the meantime, the push fails, causing the build to fail. Because this complicates and slows down development, the pipeline in the example has a simple retry mechanism: if the push fails, the pipeline pulls and pushes again. This solution is not perfect, because the build still fails in the event of conflicts. Under certain circumstances, an inconsistency may even occur: the pull could result in a merge that combines the changes from the build with those from another one. It would therefore be safer not to push after the pull, but to reset to the remote version and reapply the changes.

In other respects, the pipeline is already quite sophisticated. For example, the commits the job made to the GitOps repository are displayed in the Jenkins job description for more transparency. To make reviewing the PR more efficient, the following is written in the commit message in the GitOps repository (Figure 6 shows this with SCM-Manager as an example):

  • Author of the original commit in the app repository: the author is retained, but “Jenkins” becomes the committer. This makes it clear who originated the change, but also that the commit was created automatically.
  • Link to the issue ID, parsed from the original commit message. This provides a direct link to issues in the issue tracker.
  • Link to the original commit in the app repository. This allows you to switch to the application’s source code at the click of a button.
  • A staging commit is marked accordingly (not shown in the illustration).

Example of a commit created by the CI server in the GitOps repository Figure 6: Example of a commit created by the CI server in the GitOps repository
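The simple push-retry mechanism described above could be sketched like this in a Jenkins pipeline (simplified; the function name and structure are illustrative):

```groovy
// Push with a simple retry in case another build pushed in the meantime
def pushWithRetry(String branch, int retries = 3) {
    for (int i = 0; i < retries; i++) {
        try {
            sh "git push origin ${branch}"
            return
        } catch (Exception e) {
            // remote has changed: pull and try again
            // (safer: git fetch + git reset --hard origin/<branch>,
            //  then reapply the changes, to avoid inconsistent merges)
            sh "git pull --no-rebase origin ${branch}"
        }
    }
    error "Pushing to ${branch} failed after ${retries} attempts"
}
```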

There are other features currently being developed in the GitOps Playground that make the process more efficient. They may already be available at the time this article is published:

  • Fail Early: static YAML analysis by the CI server. To avoid the complex task of troubleshooting in the GitOps operator log, the YAML files can be checked for syntactic correctness, for example using the yamllint tool. Another step is to check the K8s resources against the K8s schema, which can be done using the kubeval tool. If the Helm charts have their own values schema, it can also be validated (using helm lint).
  • Automatically created PRs can be enriched with more information. For example, an existing PR can be supplemented if further commits are made to it. Moreover, it is possible to add a link to the Jenkins job in the comments.
  • One way to bring configuration files or scripts into the cluster is to package them as inline YAML, such as in a ConfigMap. The disadvantage to this method is that there is no syntax highlighting or linting in this form. This frequently leads to avoidable errors or inefficient “copying back and forth.” Automation can correct this disadvantage: The CI server handles the packaging of a “real” file in YAML. This provides the opportunity to work on this file in development and do the usual highlighting and linting there.
  • Often, before a feature is fully developed, it is necessary to manually test the feature in the staging environment. The pipeline can be used to perform this testing without (premature) merging into the main branch and without a PR for production. This can be implemented using build parameters in Jenkins. Such a parameter can be set when manually initiating a build. The pipeline can respond to the parameter by pushing into staging but not creating a PR for production.
  • A larger number of stages can be implemented using additional branches and PRs.
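The fail-early checks from the first point could, for example, be implemented as an additional pipeline stage (the paths and tool invocations shown here are plausible defaults, not the exact GitOps Playground configuration):

```groovy
stage('Validate YAML') {
    steps {
        // check all YAML files for syntactic correctness
        sh 'yamllint k8s/'
        // validate K8s resources against the K8s schema
        sh 'kubeval k8s/*.yaml'
        // lint the Helm chart, including its values schema if present
        sh 'helm lint chart/'
    }
}
```

Running these checks before pushing to the GitOps repository means errors surface in the Jenkins build rather than later in the operator log.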

Templating tools

The presented CIOps and GitOps examples show how simple K8s resources can be applied to the cluster using the respective method. The disadvantage of this is that the K8s resources must be stored completely redundantly for each stage. Therefore, in practice, templating tools are often used to allow a single source (without redundancy) to be configured. Helm, the official package manager for K8s, is a common solution. Helm can be used to do more than deploy third-party packages. Its templating function can also be used for local development.

There are some alternatives to Helm for local development, such as the “template-free” tool Kustomize, which works with overlays that are applied to a base file using the patching mechanism.

CIOps makes it relatively easy to use templating tools. The tools are available through a command-line interface that can be called in the pipeline. An example of this is available on GitHub. Here, the Helm binary is executed as a container, which means that there is no need to configure the Jenkins controller. Some other valuable practical findings:

  • Using helm upgrade --install eliminates the need to make a complex distinction between the initial installation and upgrades.
  • By convention, the values.yaml contained in all Helm packages (which are called charts) describes default values; another values file per stage sets the stage-specific values. This file must be passed to the Helm command using the --values parameter; the default values.yaml is always used implicitly.
  • It is easy to configure the name and version of the image using parameters, such as --set 'image.tag=...'
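Put together, a CIOps deployment step with Helm might look like this (chart path, release name, and values file are placeholders):

```groovy
stage('Deploy Staging') {
    steps {
        // helm upgrade --install covers both first installation and upgrades;
        // values-staging.yaml overrides the chart's implicit default values.yaml
        sh """
            helm upgrade --install my-app chart/ \\
                --values chart/values-staging.yaml \\
                --set 'image.tag=${env.GIT_COMMIT}' \\
                --namespace staging
        """
    }
}
```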

When you start using templating tools with GitOps, you face one daunting challenge: How can the imperative call (for example, helm upgrade) be put into a declarative form that can be placed in the GitOps repository? The solution: by using additional operators in K8s. For the widely used tools Helm and Kustomize, such operators already exist, but this is not necessarily the case for other templating tools. Here, too, there is a practical example that you can find in the GitOps Playground (Before extracting to library; after extracting to library). It delivers a static HTML page using the NGINX web server. This example would also work without a pipeline, but with the disadvantages mentioned above:

  • The HTML file would have to be maintained inline in a YAML file.
  • A Helm operator would be necessary for local development.
  • The values.yaml files would have to be described in fully redundant HelmRelease YAMLs for each stage.
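With the Flux Helm operator, for example, such a declarative Helm deployment looks roughly like this (the chart source and values are illustrative):

```yaml
# HelmRelease for the Flux Helm operator – the imperative 'helm upgrade'
# becomes a declarative resource stored in the GitOps repository
apiVersion: helm.fluxcd.io/v1
kind: HelmRelease
metadata:
  name: nginx
  namespace: staging
spec:
  releaseName: nginx
  chart:
    repository: https://charts.bitnami.com/bitnami   # chart source as IaC
    name: nginx
    version: 9.5.0
  values:
    # stage-specific overrides of the chart's default values.yaml
    replicaCount: 1
```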

In this respect, the use of a pipeline is also advantageous for the GitOps deployment of third-party applications. Looking at the two Jenkinsfiles in the GitOps Playground, it is striking that the pipelines for the two different use cases “K8s resources” and Helm are largely the same. Here, extraction into a Jenkins shared library allows for reuse and less maintenance in Jenkins pipelines. This led to the development of the GitOps-build-lib, which is now the home of the pipeline logic described in this article.

Finally, it should be noted that using a Helm operator can have advantages even without GitOps: The source and version of the chart are declared as IaC (in YAML) instead of within a Jenkinsfile. This is simply applied to the cluster. The pipeline no longer requires a Helm binary. The same procedure also works for local development.

Conclusion

There is no question that CD provides added value. This article uses CD implementation examples with K8s and Helm deployment to show that it is quite possible to use Jenkins for implementation with both CIOps and GitOps. So the answer to the “CIOps or GitOps” question lies in the implementation details: both can work well in practice. If you already have existing CD processes, you should only switch if the advantages of GitOps provide significant added value in your individual use case. Do not underestimate the migration effort: if you have a lot of pipelines, you will also have to migrate a lot of pipelines. For newcomers, it is a good idea to start directly with GitOps due to its numerous advantages, although this makes the already steep learning curve even steeper.

The full examples can be found on GitHub in the repositories for CIOps and GitOps. This article did not consider the differences between various GitOps operators, which is a topic in its own right. A first step to approaching this in practice can be to visit the GitOps Playground or our post about GitOps tools, which also compares Flux and ArgoCD.
