How to Set Up a Simple DevOps Multi-Hybrid Cloud System

So far in my blog series I’ve written about the business benefits, strategy, and terms and theories for DevOps. Now it’s time to get to something more practical: Implementation of a simple DevOps system sitting on top of a hybrid cloud—in this case using Jenkins and Docker Swarm*. Hopefully this process will give you a better understanding of all the steps that go into creating a DevOps system. Keep in mind that our ultimate goal is to deliver applications and their services in a repeatable, reliable, and cost-efficient manner.

To simplify things, let’s start with a small DevOps configuration across multiple clouds using open-source software. We’ll first look at the high-level flow of our build and test processes across those clouds (Fig. 1).

Fig. 1

Conceptual Architecture

Next, we’ll look at what systems are needed to implement the use cases for building, testing, and deploying software through pipelines. Those key elements should include:

  • Continuous Integration and Delivery (CICD)—pipelining automation and workflow management
  • Provision Engine—deploying and configuring software
  • Artifact Repository—where to store built images and libraries
  • Environment Management—setting up and managing environments across the hybrid multi-cloud
  • Storage—common storage for temporary storage space across build agents

In the following diagram (Fig. 2), you can see how these elements are related and interact with one another.

Fig. 2

Now that we’ve mapped the elements and how they relate, it’s time to pick some tools. Below is a list of tools that are well known in the industry, each with a good-sized community and plenty of blog posts about how to integrate them together. You can see how each tool maps to its corresponding element below (Fig. 3):

  • CICD—Jenkins*
  • Provision Engine—Docker Swarm
  • Artifact Repository—Docker Registry* and Docker Hub*
  • Environment Management—integration of Jenkins and Docker Swarm (Docker Swarm Plugin)
  • Storage—S3* on AWS and Gluster* for on-prem clusters

Fig. 3

Putting It All Together

With our tools mapped out, we need to look at how to configure them so they can talk to each other. I chose Docker Swarm because it is simple to install and configure.

Installation is straightforward; it took me about five minutes to get a swarm up and running. I also made an architectural decision to run a separate swarm for each of my environments: build, local, dev, test, and production.

Here are the basic steps to install and set up a Docker swarm:
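A minimal sketch of that setup, assuming Docker is already installed on every host (repeat this once per environment; the manager address and join token below are placeholders):

# on the node that will be the swarm manager for this environment
docker swarm init --advertise-addr <manager-ip>

# "docker swarm init" prints a "docker swarm join" command with a token;
# run that command on every host you want in this environment's swarm
docker swarm join --token <worker-token> <manager-ip>:2377

# back on the manager, confirm that all nodes have joined
docker node ls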

Now that we have individual swarms set up, we need to come up with services to run in each of the environments. For simplicity’s sake, we are going to focus on the build environment and then make all of the other environments run Jenkins build agents.

In the build environment we need to run the Jenkins build server, Docker Registry, a build agent, and the storage for each of those services (Fig. 4).

Fig. 4

Here is an example of a Docker compose file for the build environment with all of the services defined:

version: '3.1'
services:
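  # Jenkins master; the web UI (container port 8080) is published on host port 8000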
  build:
    image: jenkins
    ports:
      - "8000:8080"
      - "50000:50000"
    volumes:
      - /mnt/jenkins:/var/jenkins_home
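  # Jenkins build agent; global mode runs one agent on every node in the swarm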
  agent:
    image: madajaju/jenkins-swarm-node-agent
    deploy:
      mode: global
    volumes:
      - /var/run/docker.sock:/var/run/docker.sock
      - /tmp:/tmp
    secrets:
      - source: jenkins-v1
        target: jenkins
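  # private Docker registry, used as the staging repository for images built by the pipeline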
  registry:
    restart: always
    image: registry:2
    environment:
      - REGISTRY_HTTP_TLS_CERTIFICATE=/var/lib/registry/registry-certs/domain.cert
      - REGISTRY_HTTP_TLS_KEY=/var/lib/registry/registry-certs/domain.key
    ports:
      - 5000:5000
    secrets:
      - registry-v1-cert
      - registry-v1-key
    volumes:
      - /mnt/registry:/var/lib/registry
secrets:
  jenkins-v1:
    file: ./jenkins-secret.txt
  registry-v1-cert:
    file: ./registry_certs/domain.cert
  registry-v1-key:
    file: ./registry_certs/domain.key
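Before deploying this stack, the bind-mounted host paths in the volumes sections need to exist on the nodes where these services will run. A quick sketch (the uid/gid of 1000 is an assumption that matches the default user of the official jenkins image):

sudo mkdir -p /mnt/jenkins /mnt/registry
sudo chown 1000:1000 /mnt/jenkins   # the official jenkins image runs as uid 1000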

OK, there are a couple of things here that you might not be familiar with: secrets and volumes. Let's dive into those quickly.

Secrets

The secrets tag in the YAML file lets you store keys, certificates, and other credentials in a secure location that can be accessed by the swarm orchestrator and the services it launches.

(For more information on Secrets check out the Docker documentation.)

We create a "jenkins-v1" secret by importing a file that contains the login information for the Jenkins agents running in the different environments. When an agent container starts, the swarm automatically makes the "jenkins-v1" secret available inside the container, which allows the Jenkins agents to connect to the Jenkins master securely.

(See the jenkins-swarm-node-agent image description on Docker Hub for more information.)
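By default, Swarm exposes each secret to the container as a file under /run/secrets/, named after the secret's target. A quick way to check it from the build swarm's manager (a sketch; the container ID is a placeholder, and the name filter assumes the stack is deployed as "build" with the "agent" service from the compose file above):

# list the running agent container(s) for the build stack
docker ps --filter name=build_agent --format '{{.ID}} {{.Names}}'
# print the secret exactly as the container sees it
docker exec <container-id> cat /run/secrets/jenkins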

Don't forget to create a "jenkins-secret.txt" file in the directory from which you launch the Docker stack, before you deploy it. Let's look at what I put in my jenkins-secret.txt file.

To start, before launching our build environment, we need to take note of where the build server is located and how to connect to it from the different environments. Make sure you put the build swarm on a host that is accessible to all environments. In this case I named the build host "build", and that name resolves correctly from all of the environments.

-master http://build:8000 -password admin -username admin -retry 100

Second, we create two more secrets, registry-v1-cert and registry-v1-key. These credentials are used to access the Docker registry that we run in the build environment to store the temporary images created during the execution of the code pipeline. Remember our picture (Fig. 1), where we have two repositories: one for production (aka gold builds) and one for staging. This registry is the staging repository.
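If you do not already have a certificate and key for the registry, a self-signed pair is enough for a staging registry like this one. A minimal sketch, assuming the registry will be reached by the hostname "registry" (the common name in the certificate must match that hostname):

mkdir -p registry_certs
openssl req -newkey rsa:4096 -nodes -sha256 \
  -keyout registry_certs/domain.key \
  -x509 -days 365 -subj "/CN=registry" \
  -out registry_certs/domain.cert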

It is important that all of the environments are created with access to the registry's credentials. If not, the build agents will not be able to push images to the registry. So make sure that when you deploy the stacks for the environments, the registry certificate is available under the "/etc/docker/certs.d/<registry address>/" directory on every Docker host. In this simple example you would copy the certificate into the "/etc/docker/certs.d/registry:5000" directory, where registry is a hostname that all of the hosts in your swarms know how to reach, as shown in the sketch below.
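Here is a sketch of what that looks like on each Docker host. The IP address is purely an assumption for illustration; skip the /etc/hosts step if DNS already resolves "build" and "registry" in your environments.

# map the names on every host if DNS does not already handle it
echo "10.0.0.10  build registry" | sudo tee -a /etc/hosts

# let the local Docker daemon trust the registry's certificate
sudo mkdir -p /etc/docker/certs.d/registry:5000
sudo cp registry_certs/domain.cert /etc/docker/certs.d/registry:5000/ca.crt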

(For more information on this process, check out this blog about secure registry setup.)

Volumes

The volumes tag specifies which data volumes are mounted into the container. This gives you the ability to share information across containers and to persist data between different instances of a running service. In this case we want a persistent registry, so we mount a volume from the host machine into the container using the volumes tag.

This approach has a limitation: the container must run on the machine where the host volume is available, which is typically not what you want in a swarm. To avoid it, use a distributed storage solution. For cloud deployments, any of the software-defined storage options should be good enough; for on-prem clusters, Gluster works well (Fig. 3). Just make sure that all of the hosts in your swarm have access to the same volume and that it is mounted at the same mount point.
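For example, with Gluster on the on-prem clusters, every node in a swarm could mount the same volume at the same path. A sketch, assuming the GlusterFS client is installed and a volume named "jenkins-data" is served by a host called "gfs1" (both names are placeholders):

# run on every node in this environment's swarm
sudo mkdir -p /mnt/jenkins
sudo mount -t glusterfs gfs1:/jenkins-data /mnt/jenkins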

(You can find more information about volumes from the Docker documentation.)

Deploying the Environments

Now that we've covered secrets and volumes, it's time to deploy the environments. As mentioned, we use a YAML file to describe the build stack that sets up the build environment. To launch this stack, go to the swarm master of the build environment and deploy the stack:

# docker stack deploy --compose-file docker-compose.yaml build

The build environment should now be up and running. To confirm, test that everything is accessible: open a browser and go to the Jenkins server at http://build:8000
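From the command line on the build swarm's manager, you can also check that all of the services came up (service and task names are prefixed with the stack name, build in this case):

docker stack services build   # each service with its replica count
docker stack ps build         # where each task is scheduled and its current state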

Remember that the other environments are pretty simple and only require one service: the Jenkins build agent. There is really only one change to the Docker compose file for each environment; you just need to set the LABELS environment variable to designate the appropriate environment.

version: '3.1'
services:
  agent:
    environment:
      - LABELS=production
    image: madajaju/caade-swarm-node-agent
    deploy:
      mode: global
    volumes:
      - /var/run/docker.sock:/var/run/docker.sock
    secrets:
      - source: jenkins-v1
        target: jenkins
secrets:
  jenkins-v1:
    file: ./jenkins-secret.txt

For each of the environments, we simply change the LABELS environment variable to the name of the environment. This label must match the environment name you specify in the agent tags of your code pipeline definition. Then, on the swarm master of each environment, launch that environment's stack (here, production, as in the compose file above):

# docker stack deploy --compose-file docker-compose.yaml production

Code Pipeline Definition

Now that we have all of the environments up and running, we need to actually run a build. The best way to do this is to configure Jenkins to import code from a Git repository. In this case, I created a simple program on GitHub, so all we need to do is go to Jenkins and import the GitHub repository.

(For more information go to Jenkins documentation.)

If you have a Jenkinsfile in the root of your GitHub repository, then Jenkins will use that file as your code pipeline definition and automatically create all of the stages and steps for you. Here is an example of a simple pipeline that uses our different environments to build and release a Node.js application:

pipeline {
  // no global agent; each stage selects its environment with an agent label
  agent none
  stages {
    stage('Build') {
      parallel {
        stage('Build Docs') {
          agent {
            label 'dev'
          }
          steps {
            sh 'npm run build-doc'
          }
        }
        stage('Build Services') {
          agent {
            label 'dev'
          }
          steps {
            sh 'npm run-script build'
            sh 'npm run-script deploy-apps'
          }
        }
      }
    }
    stage('Test') {
      agent {
        label 'test'
      }
      steps {
        sh 'npm run deploy-test'
        sh 'npm run test'
        sh 'npm run teardown-test'
      }
      post {
        always {
          junit "report.xml"
        }
      }
    }
    stage('Production') {
      agent {
        label 'production'
      }
      steps {
        sh 'npm run deploy-prod'
      }
    }
  }
}

As you can see in the pipeline, we specify which environment each stage's steps run in with the "agent" directive.

      agent {
        label 'test'
      }

Remember, we need this to match what we specified in the LABELS environment variable when we launched the environment stack.

(If you would like a more complete description of this setup you can find more at the CAADE Docker Solution.)

With this step finished, we’ve completed our implementation of a simple DevOps system sitting on top of a hybrid cloud. Congrats!

Projects to Watch

It turns out that our “simple” example isn’t all that simple. Though lengthy, hopefully this gives you a better understanding of what goes into a DevOps system.

But this isn't the only way; there are many recent projects looking to simplify the process and incorporate Kubernetes into the ecosystem. Here's a list of some of the projects I'm keeping an eye on that might help you, too:

  • Jenkins X* – Jenkins has brought this into the mainstream and has a Kubernetes integration for dynamic deployment
  • Circle CI* – hosted solution for micro-service development
  • Red Hat OpenShift* – integrated Jenkins, K8s and Cloud Foundry
  • IBM Cloud Private* – multi-cloud integrated build, test, K8s and software registry
  • GitOps* – framework and best practices for using Git as an operations repository for stateful operations management

If you have any questions about setting up a DevOps system or any of the tools mentioned, feel free to send me a message on LinkedIn or on Twitter at @darrenpulsipher. You can also watch my webinar Solving the Gordian Knot of DevOps in the Hybrid Cloud for further insights. Thanks for reading.


About Darren Pulsipher

Darren is an Enterprise Solution Architect in the Data Center Group at Intel. He works directly with governments and enterprise organizations to help them understand the cloud components and elements they need to consider to achieve their goals. He addresses workload migration, developing in the cloud, hyperconverged infrastructure, and modern data center architectures. Prior to joining Intel, Darren was the CIO of a juice company called Xango. His research has resulted in patents in cloud and grid computing infrastructures and his technology has been used in companies to decrease product development lifecycle time through build, test and deployment optimization and virtualization. Darren is a published author with 3 books on technology and technology management, and over 50 articles published in several different industry trade magazines.