Increase DevOps Reliability by Decoupling Services

In my last blog, I wrote about some modern DevOps processes that are driving new development models and bringing a fair amount of complexity into enterprise software and services. Because the goal of any DevOps process is to deliver dependable releases on a shorter cycle, I want to dig into one tactic for reducing some of this new complexity: decoupling services, deployment, and infrastructure.

Developing and deploying applications in the cloud can seem pretty easy on the surface. Many cloud service providers have tools that let developers spin up services and deploy software quickly. It has never been easier to distribute software to the world, but relying solely on one cloud service provider can mean vendor lock-in, causing rigidity when developing and deploying your applications. Instead, consider architecting, deploying, and maintaining your build, test, and release infrastructure as a hybrid cloud solution, which gives you the flexibility to change cloud strategies or partners as business and technical needs evolve.

Fig. 1: The Gordian knot of DevOps services and hardware.

AWS, Azure, Google, and IBM spent years creating cloud services marketed as a “one-click” decision, but that is not the complete picture. You need to deconstruct the complex system above (Fig. 1) into something more manageable. First, group infrastructure components together into subsystems (Fig. 2), and you will quickly find toolsets that look familiar. The integration of these toolsets is key to producing a flexible DevOps platform.

Fig. 2: High-level DevOps architecture.
  • CI/CD (Continuous Integration and Delivery system/build system)—There are many tools to choose from, including Jenkins*, TeamCity*, TravisCI*, CircleCI*, and Bamboo*. Just make sure you have the cloud plugins for the tool you choose.
  • SCM (Software Configuration Management)—There are a handful of real options here too, including GitHub*, GitLab*, and Bitbucket*.
  • Artifact Repository—This is where you will store images, service definitions, secrets (e.g., passwords, SSH keys), and so on. You will need to look at local repositories as well as public ones. Consider a Docker* registry for your container images, Helm* charts for service definitions, and any of the SCM systems (GitHub, GitLab, or Bitbucket) as a good general-purpose repository.
  • Multi-Hybrid Cloud—This is a new area with many interesting options, but it will require some integration on your part. You will need a Cloud Management Platform that includes a cloud broker, an automation framework for installing and configuring software, and a service orchestrator (a lightweight PaaS) for deploying applications and services in the hybrid cloud. A sketch of the automation piece follows this list.
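
To make the automation-framework piece concrete, here is a minimal sketch of an Ansible* playbook that installs and configures NGINX* on a group of hosts. The "web" host group and the template path are hypothetical, not taken from any particular setup.

```yaml
# Minimal Ansible playbook sketch: install and configure NGINX on the
# hosts in the (hypothetical) "web" group.
- name: Configure web tier
  hosts: web
  become: true
  tasks:
    - name: Install NGINX
      ansible.builtin.package:
        name: nginx
        state: present
    - name: Deploy site configuration
      ansible.builtin.template:
        src: templates/site.conf.j2   # hypothetical template path
        dest: /etc/nginx/conf.d/site.conf
      notify: Restart NGINX
  handlers:
    - name: Restart NGINX
      ansible.builtin.service:
        name: nginx
        state: restarted
```

The same playbook can run unchanged against VMs in a public cloud or bare metal in your own data center, which is exactly the kind of decoupling we are after.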

Now that you have identified the subsystems, you need to look at the key artifacts that you will be working with—namely services, images, deployments, and pipelines.

Service Definitions (Service Orchestration Layer)

In the cloud-aware development world, people talk about applications being made up of services, micro-services, and nano-services. A service is a construct that “provides the act of doing work for someone or something.” A service does not imply an implementation of that service; the implementation, a container or a virtual machine (VM), for example, is a separate concern. To define a service, you need to describe what the service does, how to access it, and what resources the service requires. Some of the ways you can do that include Docker stack definitions, Cloud Foundry* tiles, and Helm charts.

These service definitions can be simple or complex. A simple service might be a load balancer like NGINX*; when instantiated, it runs in one container, VM, or bare metal machine. You can also create a hierarchy of services that lets you bundle simple services into reusable, complex services, like a “LAMP stack” service comprising four or more simple services along with the storage and network that connect them. Another option is to define an application by combining these services: for example, a WordPress* application built from the LAMP stack complex service and an NGINX complex service.
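
As a hedged illustration of that WordPress example, here is a minimal Docker stack definition composing three simple services plus a shared network and a volume. The image tags and the network and volume names are illustrative assumptions.

```yaml
# Minimal Docker stack sketch: a WordPress application composed of
# simple services (NGINX proxy, WordPress, MySQL). Use real secrets
# management for passwords in any actual deployment.
version: "3.8"
services:
  proxy:
    image: nginx:1.25
    ports:
      - "80:80"
    networks:
      - wp_net
  app:
    image: wordpress:6
    environment:
      WORDPRESS_DB_HOST: db
    networks:
      - wp_net
  db:
    image: mysql:8
    environment:
      MYSQL_ROOT_PASSWORD: example   # placeholder only
    volumes:
      - db_data:/var/lib/mysql
    networks:
      - wp_net
networks:
  wp_net:
volumes:
  db_data:
```

Deploying the whole composed application is then a single command: docker stack deploy -c stack.yml wordpress.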

To orchestrate services, we need to understand these key concepts of service definitions so we can coordinate the spin-up and tear-down of services or groups of services (complex services).

Images (Service Orchestration Layer and CI/CD)

Service definitions contain an important field in their descriptions: the image. An image is a snapshot of a container, VM, or bare metal compute environment. Each compute ecosystem has its own flavor of image, but they all do the same thing: they give an instance of a service an operating environment in which to execute its code.
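
To make that concrete, here is a small sketch of how a container-based service definition references images. The registry host, repository names, and tags are illustrative assumptions, not real values.

```yaml
# Sketch of image references in a service definition; each service
# names a snapshot rather than embedding one.
services:
  api:
    image: registry.example.com/myteam/api:1.4.2                # pinned by tag
  worker:
    image: registry.example.com/myteam/worker@sha256:<digest>   # pinned by digest
```

A tag is a movable pointer, while a digest pins the exact snapshot. Either way, the service definition only names an image, which is what keeps the service decoupled from its operating environment.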

Deployment Manifest (Service Orchestration Layer)

The deployment manifest describes how to instantiate a service from the image (defined above). The manifest can include the number of replicas of the service, software configuration, installation of additional software, storage mount points, and network configuration. This lets you use a set of common images and get different capabilities through configuration in the deployment manifest. The manifest also describes the cardinality (scalability) of the service. Is it running one instance? Five instances? Or does it scale based on some external metric or event?
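
One common form of deployment manifest is a Kubernetes* Deployment; the same ideas appear in Docker stack files and VM orchestrators. Here is a minimal sketch for the NGINX service above, where the names, the replica count, and the "site-config" ConfigMap are illustrative assumptions.

```yaml
# Minimal Kubernetes Deployment manifest sketch: three replicas of an
# NGINX-based service with configuration mounted from a ConfigMap.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: web
spec:
  replicas: 3                   # cardinality: fixed at three instances here
  selector:
    matchLabels:
      app: web
  template:
    metadata:
      labels:
        app: web
    spec:
      containers:
        - name: nginx
          image: nginx:1.25     # the image reference described above
          volumeMounts:
            - name: config
              mountPath: /etc/nginx/conf.d
      volumes:
        - name: config
          configMap:
            name: site-config   # hypothetical ConfigMap name
```

For metric-driven cardinality, a HorizontalPodAutoscaler can replace the fixed replicas value, scaling the service on CPU load or a custom metric.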

Code Pipelines (CI/CD)

Code pipelines are a great construct that every DevOps engineer should understand. A code pipeline pushes the code you have written through a code factory that builds, tests, and deploys it. In a hybrid cloud model, this gives you the flexibility to control multiple environments and clouds from the same pipeline. All of the major CI/CD tools have some flavor of code pipeline definition language.

One technique I strongly recommend is to have the code pipeline build images (container, VM, or bare metal ISOs) and then deploy them into the different environments for testing or production. I also typically define my pipeline right alongside my code, so everything is bundled together in one SCM repository (e.g., GitHub). Examples of code pipeline definitions include Jenkinsfiles* (Jenkins), workflows (CircleCI), and build pipelines (TeamCity).
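
As a sketch of that technique, here is a minimal CircleCI-style pipeline definition that builds and pushes an image, tests it, and deploys it to a dev environment. The registry URL and the run-tests.sh and deploy.sh scripts are hypothetical, and registry authentication is omitted for brevity.

```yaml
# Minimal CircleCI pipeline sketch: build an image, test, then deploy.
# Assumes registry credentials are already configured in the project.
version: 2.1
jobs:
  build:
    docker:
      - image: cimg/base:stable
    steps:
      - checkout
      - setup_remote_docker
      - run: docker build -t registry.example.com/myapp:${CIRCLE_SHA1} .
      - run: docker push registry.example.com/myapp:${CIRCLE_SHA1}
  test:
    docker:
      - image: cimg/base:stable
    steps:
      - checkout
      - run: ./run-tests.sh                    # hypothetical test script
  deploy-dev:
    docker:
      - image: cimg/base:stable
    steps:
      - checkout
      - run: ./deploy.sh dev ${CIRCLE_SHA1}    # hypothetical deploy script
workflows:
  build-test-deploy:
    jobs:
      - build
      - test:
          requires: [build]
      - deploy-dev:
          requires: [test]
```

Note that the pipeline deploys the exact image it built, identified by the commit SHA, so what you test is what you ship.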

Environments

Environments are pretty easy to understand; you probably already have the concept of dev, test, and production environments. The key to using environments is to make sure that your hybrid cloud layer and your CI/CD system share the same names and concepts. If there is a mismatch, things can get very confusing, so make sure you understand how everything is connected. Another benefit of environments is that an application stack (a set of services) can behave differently in different environments, giving you the flexibility you need when building out the services in your application.
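
One hedged way to keep those names aligned is to derive everything from a single environment identifier. For example, per-environment Helm values files can mirror the names of the CI/CD deploy jobs and the target namespaces; the keys below assume a chart that exposes replicaCount and ingress.host, which are illustrative.

```yaml
# Sketch of per-environment overrides: the same environment name
# ("dev", "prod") appears in the file name, the namespace, and the
# CI/CD deploy job. Shown as two YAML documents for brevity.

# values-dev.yaml
replicaCount: 1
ingress:
  host: app.dev.example.com
---
# values-prod.yaml
replicaCount: 5
ingress:
  host: app.example.com
```

A dev deploy job would then run something like helm upgrade --install app ./chart -f values-dev.yaml --namespace dev, with "dev" appearing identically in the pipeline, the values file, and the cluster.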

It can seem somewhat overwhelming when you start looking at all of the moving parts of a hybrid cloud DevOps solution, but if you group things into subsystem toolsets and understand how they work together, it can be a powerful way to accelerate the development and deployment of applications. Don't forget about the hybrid cloud layer in the middle, as it gives you the flexibility to use the best of public and private cloud services. Decreasing the dependencies between the different layers of your DevOps solution means each layer can change independently of the others, increasing your reliability and scalability.

In my next blog, I’ll explain why planning for hybrid cloud is essential for reducing complexity in your DevOps processes and go into greater detail about the differences between hybrid cloud and multi-cloud models. Until then, you can watch my recent webinar Solving the Gordian Knot of DevOps in the Hybrid Cloud.

If you want to learn more about hybrid cloud strategies and workload optimization, visit intel.com/cloud for a wide range of solution briefs, videos, and more. If you have any questions about this blog or would like me to cover specific insights in my larger DevOps blog series, please send me a message on LinkedIn or Twitter at @darrenpulsipher.


About Darren Pulsipher

Darren is an Enterprise Solution Architect in the Data Center Group at Intel. He works directly with governments and enterprise organizations to help them understand the cloud components and elements they need to consider to achieve their goals. He addresses workload migration, developing in the cloud, hyperconverged infrastructure, and modern data center architectures. Prior to joining Intel, Darren was the CIO of a juice company called Xango. His research has resulted in patents in cloud and grid computing infrastructures, and his technology has been used by companies to decrease product development lifecycle time through build, test, and deployment optimization and virtualization. Darren is a published author with three books on technology and technology management and over 50 articles published in several industry trade magazines.