Five DevOps Terms You Need to Know

DevOps is responsible for delivering applications and their services in a repeatable, reliable, and cost-efficient manner. Nothing is worse in DevOps than a changing process that never seems to settle down and become repeatable. Luckily, over the last couple of decades the DevOps community has adopted some universal tools and techniques.

CI/CD (continuous integration and continuous delivery) has matured to the point that solid tools are available from both commercial vendors and open-source communities. TeamCity, CircleCI, Artifactory, Jenkins, and TravisCI should be common names in any DevOps toolbox. But to truly understand how these tools can be best utilized, let's take a look at five common terms.

1. Pipeline

Code pipelining describes how code flows through a process, from a software engineer checking it in to its deployment in a production environment. A code pipeline can include several steps in which the code is validated, verified, and compiled into something that can run in production.

There are two major approaches to code pipelining. “Pipeline as code” is checked in with the code itself, typically as a YAML file in the same GitHub repository. This lets development teams change the build process for an individual application or service very easily. The disadvantage is that developers can introduce too much variability into the build process. Each of the CI/CD tools has some flavor of “pipeline as code,” which should be accounted for when developing.
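To make this concrete, here is a minimal sketch of pipeline-as-code in a GitHub Actions-style YAML file; the file path, job name, and make targets are hypothetical placeholders, not a prescription:

    # .github/workflows/ci.yml -- checked into the repository with the code
    name: ci
    on: [push]
    jobs:
      build:
        runs-on: ubuntu-latest
        steps:
          - uses: actions/checkout@v4   # fetch the code being built
          - run: make build             # hypothetical build command
          - run: make test              # hypothetical test command

Because this file lives next to the code, a developer can change the build process in the same pull request that changes the application.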

The other flavor of code pipelining is a de-coupled approach, where the pipeline is defined in the CI/CD tool through a web interface. The code and the build process are bound together only in the CI/CD tool. This gives DevOps complete control: modifying the pipeline can be locked down to roles defined in the tool, and changes happen in one place. The downside is that the de-coupling can create a bifurcated build process across multiple environments. As I admitted in an earlier blog, developers (myself included) tend to create their own processes if they feel a lack of access is hampering their creative flow.

Each technique has its pros and cons, and understanding them is important to being successful. In many cases a mixed approach works well. Just make sure you document why you chose the direction you did; otherwise you'll find yourself relitigating the same decision over and over again.

2. Environment

The concept of an environment has been around since the birth of DevOps: we've always created different environments for different kinds of work when building new applications and services. Setting up several test environments for performance, capacity, memory-leak testing, and debugging can be necessary. Each environment has its own policies and setup scripts.

A typical pipeline uses environments to push the code for an application or service through a battery of steps to build, package, test, validate, and deploy it. Understanding the relationship between your environments and your code pipeline is key to creating a generalized process that many projects can follow.
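As a sketch of that relationship, GitHub Actions-style YAML lets a job declare which environment it targets; the environment names and deploy script here are assumptions for illustration:

    jobs:
      deploy-test:
        runs-on: ubuntu-latest
        environment: test            # test environment with its own policies
        steps:
          - run: ./deploy.sh test    # hypothetical deploy script
      deploy-production:
        needs: deploy-test           # production waits for the test deploy
        runs-on: ubuntu-latest
        environment: production
        steps:
          - run: ./deploy.sh production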

3. Stages

A stage is the state of the software. Easy enough, right? Each stage typically works in a single environment and performs several steps on the code to create artifacts that can be packaged up and stored in a registry or repository. An example of common stages is: Build > Test > Publish > Deploy. Each stage can be broken into additional stages, if necessary. Stages are high-level concepts used to communicate what state the software is in to all of the developers working on a service or application, and to potential consumers who might be anticipating a release date.
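In pipeline-as-code terms, that Build > Test > Publish > Deploy progression might look like the following sketch, where each job stands in for a stage and needs: enforces the order (job names and make targets are illustrative):

    jobs:
      build:
        runs-on: ubuntu-latest
        steps:
          - run: make build      # hypothetical
      test:
        needs: build             # Test stage follows Build
        runs-on: ubuntu-latest
        steps:
          - run: make test
      publish:
        needs: test              # Publish stage follows Test
        runs-on: ubuntu-latest
        steps:
          - run: make publish
      deploy:
        needs: publish           # Deploy stage follows Publish
        runs-on: ubuntu-latest
        steps:
          - run: make deploy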

4. Steps

Steps are where the actual work gets done. Steps are executed in parallel or in sequence. Examples of steps include compiling code, running smoke tests, running unit tests, building a container image, and installing or configuring software. Anything you can do on the command line you can do in a step.
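For instance, in a GitHub Actions-style pipeline the steps within a job run in sequence, while jobs with no dependency between them run in parallel; the job names and commands below are illustrative:

    jobs:
      unit-tests:                       # runs in parallel with smoke-tests
        runs-on: ubuntu-latest
        steps:                          # these steps run in sequence
          - uses: actions/checkout@v4
          - run: make unit-tests        # hypothetical
      smoke-tests:
        runs-on: ubuntu-latest
        steps:
          - uses: actions/checkout@v4
          - run: make smoke-tests       # hypothetical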

DevOps engineers sometimes face a dilemma about whether to put commands in steps or in scripts. Typically, steps should be simple. Any logic should go into scripts, even though many code pipeline languages now have conditionals and loops. Putting logical statements in scripts makes your application more portable across build systems, preventing vendor lock-in and brittle processes.
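A sketch of that principle, with a hypothetical script path: keep the step a one-liner and push the branching logic into a script that any build system can call:

    # Avoid: logic embedded in the pipeline definition
    steps:
      - run: |
          if [ "$TARGET" = "smoke" ]; then make smoke-tests; else make unit-tests; fi

    # Prefer: a thin step that delegates to a version-controlled script
    steps:
      - run: ./ci/run-tests.sh   # hypothetical script holding the same logic

If you later move from one CI/CD tool to another, the script moves with the code and only the thin step needs rewriting.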

5. Registry

Registries are a key repository for storing and distributing “derived objects” from the build process. There are two types of registries that DevOps teams utilize: staging and golden registries (Fig. 1). The idea is that when you first produce container images, libraries, or other binaries (derived objects), you place them in a staging registry. Then, when running tests, you pull those derived objects from the staging registry into the test environment.

When all tests pass (or pass within a specified threshold), you promote the derived objects to a golden registry, which is used to update production environments. This is a simplistic view; sometimes there are several layers of promotion registries. I've been in organizations where we had bronze, silver, and gold registries, depending on the progression of containers through a long test cycle. Rather than focus on the number of potential registries, the key here is understanding the overall concept.
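A sketch of a promotion step, assuming hypothetical staging and golden registry hostnames and an example image tag: the pipeline pulls the tested image from staging, retags it, and pushes it to the golden registry:

    jobs:
      promote:
        runs-on: ubuntu-latest
        steps:
          - run: |
              # Pull the derived object that passed its tests from staging
              docker pull staging.example.com/myapp:1.2.3
              # Retag it for the golden registry used by production
              docker tag staging.example.com/myapp:1.2.3 golden.example.com/myapp:1.2.3
              docker push golden.example.com/myapp:1.2.3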

Fig. 1. Promoting derived objects from a staging registry to a golden registry.

Your Plumber's License

Now that you know the different technologies and methods used in building, testing, and deploying applications, you can try these things out yourself. As I mentioned at the start of this blog, several tools support these concepts, including Jenkins, TeamCity, CircleCI, and TravisCI. But these tools aren't perfect by themselves; you need a plan and an architecture for how you are going to use them.

In the next blog, I'll cover an implementation of a simple DevOps system sitting on top of a hybrid cloud using Jenkins and Docker Swarm. If you have any questions about this blog or my larger DevOps blog series, please send me a message on LinkedIn or on Twitter at @darrenpulsipher. I also encourage you to check out my webinar Solving the Gordian Knot of DevOps in the Hybrid Cloud, which can be streamed online at your convenience.


About Darren Pulsipher

Darren is an Enterprise Solution Architect in the Data Center Group at Intel. He works directly with governments and enterprise organizations to help them understand the cloud components and elements they need to consider to achieve their goals. He addresses workload migration, developing in the cloud, hyperconverged infrastructure, and modern data center architectures. Prior to joining Intel, Darren was the CIO of a juice company called Xango. His research has resulted in patents in cloud and grid computing infrastructures, and his technology has been used by companies to decrease product development lifecycle time through build, test, and deployment optimization and virtualization. Darren is a published author with three books on technology and technology management, and over 50 articles published in several different industry trade magazines.