As described in the recently published white paper, “Securing the Cloud for Enterprise Workloads: The Journey Continues,” Intel IT is using cloud computing, guided by a multi-cloud strategy, to accelerate innovation and application development. We recognize the power of containerization technology—containers are agile, fast, and flexible, but from an enterprise IT perspective, container security can be challenging.
Because containers in a service are typically built using a common OS and kernel, all the containers of a service can potentially be compromised simultaneously. We have developed a plan that will enable us to take advantage of containers while still meeting our stringent information security requirements. We are very close to putting the plan into production.
Risk Assessment and Solution Choice
As with any new technology that we are considering adopting, our initial task with containers was to pinpoint the potential risks the technology poses to the enterprise. A thorough risk assessment of container security provided us with a clearer picture of what needed to be done to adopt the technology in a secure manner. Once we identified the potential risks, we determined whether we had a solution that would meet our minimum security requirements. Examples of such requirements include securing the registries and being able to perform static scanning on code and dynamic scanning during run time.
In addition to evaluating solutions that were already in use at Intel, we also examined what was going on in the industry—who the leaders were and what they had already done. From a technical debt perspective, it makes sense to use an existing solution instead of a new one; however, if the existing solution is not adequate, adopting a new solution is necessary. We chose a third-party product that can perform a vulnerability assessment of containers and that can integrate with our existing cloud security solutions to help reduce operational overhead while aligning with Intel’s technical debt-reduction goals.
Container Security Checkpoints
As Intel’s developers begin to use containers to help the business accelerate, we want to be sure that no vulnerabilities are introduced into the builds. That means we need visibility into the container images and the ability to control which images are allowed to be pushed to registries. We are developing end-to-end reference architectures that define how containers are consumed in the public cloud environment. These reference architectures will define specific checkpoints in the lifecycle of an application.
Stage 1: Making the container image lean
We will avoid “container bloating” by assigning the container only the resources it needs. This is important because bloating potentially expands a container’s attack surface. At this stage we need visibility into the base image (source code and libraries). We will perform full scanning and pass or fail the image based on compliance with our policies. Scanning includes sending both internally developed and open source code through a filter. If the base image has a medium-high or critical vulnerability, it is not allowed.
We intend to provide repositories of code sections that have a minimum security certification to make it easy to build on someone else’s work in a secure manner—again making it easier to develop applications and services faster without compromising security.
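The Stage 1 gate described above boils down to a severity threshold: an image passes only if its scan report contains no finding at or above the blocking level. A minimal sketch of that logic follows; the severity names and the shape of the scan report are illustrative assumptions, not the API of any particular scanner.

```python
# Hypothetical Stage 1 image gate: block an image if any finding in its
# scan report meets the blocking severity threshold. The report format
# here is an assumption for illustration.

BLOCKING_SEVERITIES = {"medium-high", "high", "critical"}

def image_allowed(scan_findings):
    """Return True only if no finding reaches the blocking threshold."""
    return not any(f["severity"] in BLOCKING_SEVERITIES for f in scan_findings)

# A base image with one critical finding is rejected.
findings = [
    {"id": "finding-1", "severity": "low"},
    {"id": "finding-2", "severity": "critical"},
]
print(image_allowed(findings))  # False
```

In practice this check would run in the build pipeline, before the image is allowed to be pushed to a registry.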
Stage 2: Securing the test/QA environment
Once we know we have a secure image in the container, development can move to the next stage, QA testing. At this stage we will implement additional controls to ensure images in the registries are vulnerability-free. This means more than a one-time scan, because we know new vulnerabilities are discovered every day. Therefore, we plan to continually scan the images in the registries. If a vulnerability is found in a trusted image, we will redeploy a previous version that is known to be secure.
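The fallback policy above can be sketched as a simple selection rule: walk the image versions from newest to oldest and deploy the newest one with no known vulnerabilities. The registry and scanner interfaces here are hypothetical; only the selection logic is shown.

```python
# Illustrative Stage 2 policy: when continual rescanning finds a new
# vulnerability in a trusted image, fall back to the most recent version
# still known to be secure. Tags and the vulnerability check are assumptions.

def select_deployable(versions, is_vulnerable):
    """versions: image tags, newest first; is_vulnerable: tag -> bool.
    Returns the newest version with no known vulnerabilities, or None."""
    for tag in versions:
        if not is_vulnerable(tag):
            return tag
    return None

vulnerable = {"app:1.3"}  # a new CVE was just found in the latest image
print(select_deployable(["app:1.3", "app:1.2", "app:1.1"],
                        lambda t: t in vulnerable))  # app:1.2
```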
At this point, dynamic scanning can help detect run-time vulnerabilities. We also will identify processes that are common and use enforced whitelisting to keep the number of processes to a minimum. Visibility into the underlying infrastructure helps us further protect against security incidents. We have established traceability of who owns an application, so when someone deploys a container that is associated with a particular application, these controls will automatically be activated.
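The enforced whitelisting idea can be sketched in a few lines: anything running in the container that is not on the application's approved list is flagged. The process names below are illustrative, not a real application's whitelist.

```python
# Minimal sketch of enforced process whitelisting: flag any running
# process that is not on the per-application whitelist. Names are
# illustrative assumptions.

def unexpected_processes(running, whitelist):
    """Return the set of running processes that are not whitelisted."""
    return set(running) - set(whitelist)

allowed = {"nginx", "gunicorn"}
print(unexpected_processes(["nginx", "gunicorn", "cryptominer"], allowed))
# {'cryptominer'}
```

A real enforcement agent would kill or quarantine flagged processes and raise an alert; this sketch only shows the detection step.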
Stage 3: Securing containers during production
We will perform run-time scanning of containers and the host on a regular basis. Because we will have already defined what compliance means for these applications, findings in those applications can be mapped to Common Vulnerabilities and Exposures (CVEs). By integrating that visibility with CVE data and enforcing policies and run-time controls, we can verify that the application and the information within it remain secure.
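One way to picture the Stage 3 check is as a join between run-time findings and CVE records, evaluated against the application's compliance policy. The CVE identifiers, score field, and threshold below are hypothetical placeholders, not real CVE data.

```python
# Sketch of a Stage 3 compliance check: map run-time findings to CVE
# records and flag any that exceed the application's allowed severity
# score. All identifiers and scores here are made-up placeholders.

def compliance_report(findings, cve_db, max_allowed_score):
    """Return compliance status plus the findings that violate policy."""
    violations = [
        f for f in findings
        if cve_db.get(f["cve"], {}).get("score", 0) > max_allowed_score
    ]
    return {"compliant": not violations, "violations": violations}

cve_db = {"CVE-A": {"score": 9.8}, "CVE-B": {"score": 3.1}}
report = compliance_report([{"cve": "CVE-A"}, {"cve": "CVE-B"}], cve_db, 7.0)
print(report["compliant"])  # False
```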
Stage 4: EOLing an application
We maintain a list of applications that are live in our environment. If an application owner wants to take their application out of production, they will follow a well-defined set of steps, so that we do not continue scanning resources that no longer exist.
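One plausible ordering of those EOL steps can be expressed as a simple checklist; the step names below are illustrative assumptions, not Intel's actual procedure.

```python
# Hypothetical EOL checklist for decommissioning a containerized
# application; the steps and their order are illustrative.

EOL_STEPS = [
    "remove the application from the live-application inventory",
    "stop and delete its running containers",
    "remove its images from the registries",
    "disable its scanning jobs",
]

def next_step(completed):
    """Return the first EOL step not yet completed, or None when done."""
    for step in EOL_STEPS:
        if step not in completed:
            return step
    return None

print(next_step([]))  # first step in the checklist
```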
The Container Security Journey Isn’t Complete
We have already implemented much of our plan and have a good idea of where we’re going. But technology never stands still, and there are still options to be explored and more to learn. Share in our journey by reading the IT@Intel White Paper, “Securing the Cloud for Enterprise Workloads: The Journey Continues.”