
Running GoCD on OpenShift – An Alternative to Jenkins

July 15, 2021

At the recent DevOps Days, there was a break-out session titled something along the lines of "anything other than Jenkins". Curious about the title, I jumped in to see what other tools people had experience with and how they were using them in their Kubernetes clusters. Maybe this shouldn’t have been a surprise, but everyone in the room was primarily using Jenkins, and most of the conversation was about how they were using it. While this isn’t a bad conversation, it highlighted one point: we often stick with what we know.

As someone who works on a lot of OpenShift environments, I’m intimately familiar with the challenges that Jenkins brings (security updates, anyone?), but I’m also a fan of how OpenShift made it usable for developers, primarily through the custom integrations that Red Hat pre-packages inside of its Jenkins instance to support:

  • A multi-tenant deployment model
  • Ephemeral or persistent instances
  • Auto Provisioning of instances
  • OAuth integration for simple access control management
  • Pipeline as code integration / synchronization with the OpenShift buildConfigs
  • Parameterized plugin installations

However, even though it’s easy to deploy, it doesn’t often get users up and running without a little more effort (do you want a scripted or a declarative pipeline? Plugin or shell? Exactly how many resources do I need to give this thing?). And from what I often hear, teams are looking for a simpler alternative. With one of our customers, a small team is deploying multiple CI & CD tools to perform a bit of a show & tell for the community, solicit feedback, and determine whether we can land on a better option. One of the tools I’m evaluating is GoCD by Thoughtworks.

I’ve been playing with @goforcd for the last day or so to learn how it’s deployed and whether it’s a viable alternative to the way many OpenShift customers might be using Jenkins today. Here is what I’ve learned so far…

Ok… So How?

First of all, I will say that I’m a big fan of the documentation for GoCD. In the short time I’ve spent with it, I’ve found it straight to the point and thorough enough to put the tool to use.

From a very high-level architecture standpoint, I don’t see a huge difference in how it operates compared to Jenkins. In general, there’s a main server component that farms work out to agents as desired. It supports dynamic agents and auto-registration, which is similar to the cloud capabilities of Jenkins. If you dig deeper though, it appears that the database can be separated and highly-available deployments are possible. I’m sure there are plenty of other deeper architectural differences that I haven’t had time to explore yet.


The goal of this blog is to see whether each team can run their own "ephemeral" instance in a codified manner, which is definitely possible. Also of note, this blog does not use the Helm deployment, which deserves a discussion of its own.

Extending the Images to Support OpenShift

The standard Docker images will not run on OpenShift for the same reasons that many "off-the-dockerhub-repo" images won’t run: they often aren’t authored to support the security constraints that OpenShift appropriately imposes. In general, this can be summed up as support for arbitrary UIDs (with fine-grained RBAC controls) and, as a result, ensuring that those UIDs have the proper permissions on the application directories.

Note: All code for this blog is stored here.

In the above referenced repository, separate folders exist for the server and agent components:

  • The server Dockerfile starts from the latest GoCD server image and
    • Adds a few file/folder permission changes
    • Downloads the Kubernetes elastic agent plugin (which isn’t required right now)
    • Adds a new entrypoint that:
      • Adds the current UID to the /etc/passwd file
      • Copies a configMap from a temp location into the necessary directory
      • Runs the startup shell script from GoCD
```dockerfile
FROM gocd/gocd-server:v19.5.0

# Get Plugins
RUN mkdir -p /go-server/plugins/external/
RUN curl -o /go-server/plugins/external/kubernetes-elastic-agent.jar -s -L

# Set permissions for OpenShift
RUN chgrp -R 0 /go-server && \
    chmod -R g+rwX /go-server && \
    chmod 664 /etc/passwd

ADD /

ENTRYPOINT ["/"]
```
```bash
#!/bin/bash

if ! whoami &> /dev/null; then
  if [ -w /etc/passwd ]; then
    echo "${USER_NAME:-default}:x:$(id -u):0:${USER_NAME:-default} user:${HOME}:/sbin/nologin" >> /etc/passwd
  fi
fi

CONFIG=/go-server/temp/cruise-config.xml

if test -f "$CONFIG"; then
    cp $CONFIG /go-server/config/cruise-config.xml
fi

/bin/bash
```

Custom Agents with Automatic Registration

Agents can be either manually admitted to the server, or automatically registered. In this environment, automatic registration is key so that agents can scale appropriately without the requirement of human interaction.

The agent is deployed in a similar manner to the server, but has been customized based on the needs of our environment. It is based on an existing GoCD agent image and adds:

  • Ansible
  • The OpenShift Client
  • A configuration file with auto-registration details
```dockerfile
FROM gocd/gocd-agent-centos-7:v19.5.0

# Install Ansible
RUN yum install -y epel-release && \
    yum install -y ansible

# Obtain oc client
RUN curl -o oc.tar.gz -s -L && \
    tar -xvf oc.tar.gz --strip-components=1 && \
    mv oc /usr/local/bin/oc

# Set permissions for OpenShift
RUN mkdir /go && \
    chgrp -R 0 /go && \
    chmod -R g+rwX /go && \
    chgrp -R 0 /go-agent && \
    chmod -R g+rwX /go-agent && \
    mkdir /.ansible && \
    chgrp -R 0 /.ansible && \
    chmod -R g+rwX /.ansible && \
    chmod 664 /etc/passwd

ADD /

ENTRYPOINT ["/"]
```
```bash
#!/bin/bash

# Give our arbitrary UID a name
if ! whoami &> /dev/null; then
  if [ -w /etc/passwd ]; then
    echo "${USER_NAME:-default}:x:$(id -u):0:${USER_NAME:-default} user:${HOME}:/sbin/nologin" >> /etc/passwd
  fi
fi

CONFIG=/go-agent/temp/

if test -f "$CONFIG"; then
    mkdir -p /go/config/
    cp $CONFIG /go/config/
fi

# Run the existing entrypoint
/bin/bash /
```
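For reference, the auto-registration details live in a simple properties file that ends up in the agent’s config directory. A minimal sketch, with illustrative values (the key must match the server’s agentAutoRegisterKey; the environment and hostname are assumptions for this example):

```properties
# autoregister.properties -- illustrative values only
agent.auto.register.key=29b2a287-da81-4593-a3e3-e253136ff7b3
agent.auto.register.environments=dev
agent.auto.register.hostname=gocd-agent
```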

Note: If redeploying the server component, the agents also need to be redeployed since the registration process happens upon startup.

Agent Scaling

With at least one server and one agent in place, how does this scale?

The easiest answer for our environment, in which each team runs their own instance, is to add a Horizontal Pod Autoscaler to the agent deployment configuration. This was a straightforward process and worked easily in the lab. One element to note is that GoCD has an option to download materials within each stage. This should be set to true if autoscaling is turned on, to ensure the agent downloads the appropriate git content when running a stage.
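As a sketch, the autoscaler for the agents can look like the following. The resource names and thresholds here are assumptions for illustration; on OpenShift the scale target would be the agent DeploymentConfig:

```yaml
# Illustrative HPA for the GoCD agents -- names and thresholds are assumptions
apiVersion: autoscaling/v1
kind: HorizontalPodAutoscaler
metadata:
  name: gocd-agent
spec:
  scaleTargetRef:
    apiVersion: apps.openshift.io/v1
    kind: DeploymentConfig
    name: gocd-agent
  minReplicas: 1
  maxReplicas: 5
  targetCPUUtilizationPercentage: 75
```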

Elastic Agent Plugins

The Elastic Agent Plugin is a great tool and decent for single-tenant environments. For multi-tenant environments, however, it appears to require cluster-level permissions for accessing node and event information. This appears too permissive in a multi-tenant environment and would require a refactor of the plugin to make it functional for our use case.

Configuration as Code

One of the things that most excites me about GoCD is the promise of configuration as code. I struggle with multiple aspects of Jenkins that just don’t seem to be easily configured this way. While Jenkins X looks like it’s on this path, we’re already looking for alternatives. The example here is very simple, but it supports our ephemeral environments and provides an auto-registration key for agents. This may support a good portion of our teams’ needs at this point.

Note: All sensitive data is for demo purposes only. Please store sensitive data appropriately. This is an example only.

```xml
<?xml version="1.0" encoding="utf-8"?>
<cruise xmlns:xsi=""
    xsi:noNamespaceSchemaLocation="cruise-config.xsd" schemaVersion="124">
  <server artifactsdir="artifacts" agentAutoRegisterKey="29b2a287-da81-4593-a3e3-e253136ff7b3" webhookSecret="e65213d8-8055-4339-812e-e88e356da9bd" commandRepositoryLocation="default" serverId="190df8f2-9669-4864-bf61-5d63b4c6fa6e" tokenGenerationKey="356ad894-83d5-4078-b681-0cbe92dfe50e">
    <backup emailOnSuccess="true" emailOnFailure="true" />
  </server>
  <config-repos>
    <config-repo pluginId="yaml.config.plugin" id="statuspage">
      <git url="" branch="status-page" />
      <configuration>
        <property>
          <key>file_pattern</key>
          <value>apps/statuspage/.gocd/pipeline.yml</value>
        </property>
      </configuration>
    </config-repo>
  </config-repos>
  <pipelines group="defaultGroup" />
</cruise>
```

The configuration above also adds a repository to scan for pipeline jobs. It can easily be extended within the configMap, though I’m not yet sure whether the server supports hot-reload.
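For completeness, this configuration reaches the server as a ConfigMap mounted at the temp path that the server entrypoint checks on startup. A minimal sketch, with assumed object names:

```yaml
# Illustrative ConfigMap wiring -- object and key names are assumptions
apiVersion: v1
kind: ConfigMap
metadata:
  name: gocd-server-config
data:
  cruise-config.xml: |
    <!-- the full cruise-config.xml shown above -->
```

In the server pod spec, this ConfigMap would be mounted at /go-server/temp so the entrypoint can copy cruise-config.xml into /go-server/config at startup.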

Pipelines as Code

Yes, they exist, and they’re seemingly easier than Jenkins pipelines (though likely less flexible). The following example is the pipeline code I’m playing with, referenced in the config-repo outlined above. In this instance, the custom agent was leveraged, and simple shell scripts were called from our .gocd directory that authenticate into OpenShift with the oc utility and then run the Ansible playbook. This was an easy workaround since there aren’t a whole lot of plugins for GoCD (which is likely a good thing).

```yaml
format_version: 3
pipelines:
  test:
    group: defaultGroup
    label_template: ${COUNT}
    lock_behavior: none
    display_order: -1
    materials:
      git:
        git:
        shallow_clone: true
        auto_update: true
        branch: status-page
    stages:
    - uninstall-dev:
        fetch_materials: true
        keep_artifacts: false
        clean_workspace: false
        approval:
          type: success
        jobs:
          cleanDev:
            timeout: 0
            tasks:
            - exec:
                command: /bin/bash
                arguments:
                - -c
                - apps/statuspage/.gocd/
                run_if: passed
    - install-dev:
        fetch_materials: true
        keep_artifacts: false
        clean_workspace: false
        approval:
          type: success
        jobs:
          deployDev:
            timeout: 0
            tasks:
            - exec:
                command: /bin/bash
                arguments:
                - -c
                - apps/statuspage/.gocd/
                run_if: passed
    - configure-dev:
        fetch_materials: true
        keep_artifacts: false
        clean_workspace: false
        approval:
          type: success
        jobs:
          deployDev:
            timeout: 0
            tasks:
            - exec:
                command: /bin/bash
                arguments:
                - -c
                - apps/statuspage/.gocd/
                run_if: passed
```

Plugins in General

What’s there to say here… there aren’t a lot of them, but they do exist. It was easy enough to customize the agent with the tools needed and write the scripts necessary to execute them. This didn’t take much time at all and helps keep the team away from the "plugin update hell" that many Jenkins users experience. Our choice to augment our deployment patterns with Ansible gave us a lot of flexibility when it comes to secret management, chat integration, etc. Teams will need to determine some middle ground of "interconnect" to link GoCD into their existing systems.

That said, the plugins that do exist are simple jar files dropped into a directory of the container. No fussing about here.

OpenShift OAuth Configuration

At first glance, the auth plugins for GoCD are pretty limited. There was a generic OAuth plugin that was replaced by three provider-specific plugins (GitHub, Google, GitLab) plus a couple of others. The Jenkins + OpenShift experience is pretty slick, so this code attempts to mimic that experience (within my one-day testing time limit). It’s close, but not entirely the same experience.

The below code leverages the OpenShift OAuth proxy to verify that a user has appropriate access to the namespace, and if so, grants access to GoCD. This is a basic form of authentication and should be further developed for production use. It does NOT:

  • Create or synchronize accounts into GoCD
  • Provide any decent or meaningful audit trail outside of the auth request
  • Provide RBAC within GoCD

With that said, it’s functional and does meet our teams’ need to only allow their own members access to the environment, though ultimately they all share the same access level (which is a-ok with them).

```yaml
## OAuth sidecar container
- args:
  - --http-address=:8080
  - --https-address=
  - --openshift-service-account=gocd
  - --upstream=http://localhost:8153
  - --provider=openshift
  - --cookie-secret=SECRET         # This can/should be pulled from a file with a secure secret
  - --bypass-auth-except-for=/go   # This seemed to help get rid of redirect loops
  - --pass-basic-auth=false
  - '-openshift-sar={"namespace": "${NAMESPACE}", "verb": "list", "resource": "services"}'
  image:
  imagePullPolicy: IfNotPresent
  name: oauth-proxy
  ports:
  - containerPort: 8080
    name: oauth-proxy
    protocol: TCP
  resources: {}
  terminationMessagePath: /dev/termination-log
  terminationMessagePolicy: File
```
  • GoCD does provide a number of skeleton frameworks to help build plugins, and the existing OAuth plugins could be easily copied and modified to integrate deeper into OpenShift.

What does it Look Like?

  • While the server is starting up… this takes some time in an ephemeral environment due to init time. Persistence may be a benefit here.


  • Once fired up, the OAuth sidecar steps in. If a user has access to the namespace, they gain access to GoCD (as an anonymous user)


  • Immediately the config-repo will start to sync and pipelines will show up


  • And the agents are visible


  • With a decent amount of visibility into the pipeline stages


So… the [current] verdict?

This was no SaaS CI & CD experience. It took a little bit of learning and effort to get running, but that was also to closely mimic the experience of Jenkins in an on-prem manner with teams owning their own instances. With the little time I’ve spent with the tool, it seems to be flexible and stable, and most of my time was spent on the integration and not actually learning the tool itself. This is a pretty big win.

I’m not sure that the architecture lends itself to a large scale "central" platform just yet, but I’d like to play with that idea more in the future.

From a resource perspective, the memory utilization seemed a bit on the high side (there’s quite a bit of Java in all of this), but the CPU utilization dropped very low after the initial startup of the server, and the agents can easily be autoscaled.


The big question: Is this a viable alternative to Jenkins running in OpenShift? I’d say yes.

Look out for more CI & CD tooling blogs.
