Confidential Computing—the emerging paradigm for protecting data in-use

As computing moves to span multiple environments—from on-premises data centers to public clouds to the edge—organizations need security controls that safeguard sensitive intellectual property (IP) and workload data wherever the data resides. Highly regulated and mission-critical applications need data protection in all three states—at rest, in transit, and in use—before companies can migrate that data to the cloud, where they have less control and visibility in a multitenant environment. As an industry, we have largely figured out how to protect data at rest and in transit. Confidential Computing (CC) is an emerging industry paradigm focused on securing data in use.

Today, it is universal practice in cloud and enterprise environments to protect data at rest with strong encryption in local and/or network-attached storage. However, when that same data is being processed by the central processing unit (CPU), it sits in memory as plain text, unprotected by encryption. Memory holds high-value assets such as storage encryption keys, session keys for communication protocols, IP, personally identifiable information (PII), and credentials. With containers, virtualization, and cloud computing, multi-tenancy adds another dimension: virtual machines (VMs) or containers from two different customers can run on the same machine. For a certain set of sensitive and regulated workloads, there is a desire to protect data in use from the underlying privileged system stack. It is therefore critical that data in memory have protection comparable to data at rest on storage devices. This is the focus of Confidential Computing—protecting data in use on compute devices using hardware-based techniques.
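As a concrete (toy) illustration of this gap, the sketch below encrypts a record "at rest" and then decrypts it for processing; the decrypted bytes sit in ordinary process memory, visible to anything privileged enough to read that memory. The hash-based XOR cipher here is purely illustrative, not production cryptography.

```python
import hashlib
import secrets

# Toy sketch of the gap Confidential Computing addresses: data is encrypted
# at rest, but must be decrypted into ordinary memory before the CPU can
# process it. The XOR keystream cipher is illustrative only.

def keystream(key: bytes, length: int) -> bytes:
    out = b""
    counter = 0
    while len(out) < length:
        out += hashlib.sha256(key + counter.to_bytes(8, "big")).digest()
        counter += 1
    return out[:length]

def xor(data: bytes, key: bytes) -> bytes:
    return bytes(a ^ b for a, b in zip(data, keystream(key, len(data))))

key = secrets.token_bytes(16)
record = b"SSN=123-45-6789"            # sensitive PII
at_rest = xor(record, key)             # protected on disk
in_use = xor(at_rest, key)             # plaintext again, in process memory
print(in_use == record)                # True
```

Anything that can read the process's memory (a compromised OS, hypervisor, or admin) sees `in_use` in the clear; a TEE narrows who can do that read.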

Enabling Confidential Computing

Industry is converging on two primary ways of enabling Confidential Computing: 1) using Trusted Execution Environments (TEEs), and 2) using an emerging mode of encryption called Homomorphic Encryption (HE). TEEs provide a hardware-enforced memory partition where the data and code execute; they are the focus of this blog. HE is a technique where the data stays encrypted at all times and the code operates directly on the encrypted data. Full HE is not yet practical, and because HE is computationally expensive, partial HE is used only selectively by a few early adopters. The expectation is that as the technology evolves and HE standards are established, full HE will become practical for broad use.
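To make the idea of partial HE concrete, here is a toy sketch of the Paillier cryptosystem, which is additively homomorphic: multiplying two ciphertexts yields a ciphertext of the sum of the plaintexts, so a server can add values it never sees in the clear. The tiny key sizes are for illustration only; real deployments use 2048-bit or larger moduli and vetted libraries.

```python
import math
import secrets

# Toy Paillier cryptosystem (additively homomorphic). Illustrative only.
p, q = 1789, 1907                      # small demo primes
n, n2 = p * q, (p * q) ** 2
g = n + 1
lam = math.lcm(p - 1, q - 1)
mu = pow(lam, -1, n)                   # valid shortcut because g = n + 1

def encrypt(m: int) -> int:
    r = secrets.randbelow(n - 2) + 2   # random blinding factor
    while math.gcd(r, n) != 1:
        r = secrets.randbelow(n - 2) + 2
    return (pow(g, m, n2) * pow(r, n, n2)) % n2

def decrypt(c: int) -> int:
    L = (pow(c, lam, n2) - 1) // n     # L(x) = (x - 1) / n
    return (L * mu) % n

a, b = 123, 456
c_sum = (encrypt(a) * encrypt(b)) % n2 # multiply ciphertexts...
print(decrypt(c_sum))                  # ...to add plaintexts: 579
```

The cost hinted at above is visible even here: each homomorphic operation involves large modular exponentiations, which is why partial HE is today reserved for narrow, high-value computations.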

Let us start with a brief summary of what defines a TEE, popular examples of hardware TEEs, current and emerging deployment models for TEEs, and key customer usage scenarios that leverage TEEs.

What is a TEE?

Software stack showing outline of TEE.

A Trusted Execution Environment (TEE) is an area in memory protected by the processor in a computing device. It provides hardware enforcement to ensure that the code and data inside it have confidentiality and integrity protections. The code that runs in the TEE is authorized by the operator/owner, and it is attested and verified. The system is designed so that data inside a TEE cannot be read or modified from outside the TEE, even by privileged system processes. The only time the data is in the clear is when it is in the CPU caches during execution. The advantage of a TEE is that you do not have to trust all the layers of firmware and software on which your software and data are being processed. The trusted compute base (TCB)—everything that you have to trust—is very small. In most TEEs, the TCB is the CPU (hardware and microcode) and the code that you, as an owner, control. In some cases, that code is just your application; in others, it might be a purpose-built micro OS plus the application.

Stack showing how apps work with VMs.

Having a TEE can protect you from the following threats:

  • Malicious/compromised admin—a bad actor at the Cloud Service Provider (CSP) cannot access the TEE memory even with physical access to the server.
  • Malicious/compromised tenant of a hypervisor—a rogue app or a compromised component of the system cannot access the TEE memory, even with a privilege escalation on the operating system (OS) or virtual machine manager (VMM).
  • Malicious/compromised network—a rogue app or actor on a compromised network cannot get access to the data/IP inside the TEE.
  • Compromised firmware/BIOS—tampered BIOS or firmware cannot access the TEE memory.

TEEs enable encrypted data to be decrypted and processed in the CPU while lowering the risk of exposing it to the rest of the system, thereby reducing the potential for sensitive data to be exposed while providing a higher degree of control and transparency for users. The industry has multiple examples of these, including TPMs, Intel® Software Guard Extensions (Intel® SGX), Intel® Trust Domain Extensions (Intel® TDX), AMD Secure Encrypted Virtualization (SEV), Arm TrustZone, Microsoft Virtualization Based Security (VBS), and others, each supporting different security properties and threat models. Intel SGX enables a general-purpose, ring-3 (application-level) TEE. Intel TDX and AMD SEV (and its variants) provide a VM-based TEE environment. As Confidential Computing gains momentum, the expectation is that more technology solutions will emerge from hardware and service providers.

Deployment models for Confidential Computing

With that introduction to TEE, let us look at the emerging deployment models for TEE. There are two viable models for deployment.

Software stack showing per-process TEE.

Model #1: Per-process TEE — This is the ‘TEE per executing process’ approach and is the one used in most current deployments. In this model, you refactor the application by separating it into trusted and untrusted portions. The unit of isolation is the granular piece of code and data that constitutes the trusted portion. As a developer, you use a TEE software development kit (SDK) to program the trusted functionality to run inside a TEE, and you invoke that functionality from the untrusted portion of your application. Each TEE provider has its own SDK today. For example, Intel SGX has the Intel® SGX SDK, and Microsoft has the Open Enclave SDK, which supports Intel SGX and VBS. The Confidential Computing Consortium (CCC) is working to provide an abstracted API—the CCC SDK. You write to this SDK and target it to a specific TEE at compile/build time. As the picture above shows, you can target Intel SGX as the TEE, or Microsoft VBS/VSM. Because this is a compile/build-time choice, the resulting executable is built for a specific TEE and can be deployed on machines that support that TEE. An application can have multiple TEEs if it needs to, and the TEEs can be launched at any time during the application's lifecycle. This is a good approach for new applications built to use a TEE, where you can clearly identify and isolate the trusted portions of your application and run them inside the TEE. In this model, you have to understand the TEE—how it works, its APIs, and the nuances of setting it up and using it.
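As a purely conceptual sketch (the class and method names below are hypothetical, not any real TEE SDK), the per-process model amounts to the untrusted host holding an opaque handle to the trusted portion and calling it through a narrow, "ecall"-style interface:

```python
import hashlib

# Conceptual model of Model #1: the app is split into an untrusted host and
# a trusted portion reached only through a narrow "ecall"-style interface.
# A Python class cannot enforce isolation; in a real TEE the hardware does.

class TrustedPortion:
    """Stands in for the code and data provisioned inside the enclave."""
    def __init__(self, secret_key: bytes):
        self._secret_key = secret_key          # conceptually never leaves the TEE

    def ecall_seal(self, message: bytes) -> str:
        # Only the derived tag crosses the trust boundary, never the key.
        return hashlib.sha256(self._secret_key + message).hexdigest()

# Untrusted portion: invokes the trusted functionality, never touches the key.
enclave = TrustedPortion(secret_key=b"provisioned-secret")
tag = enclave.ecall_seal(b"transfer $100")
print(len(tag))                                # 64 (hex digest length)
```

The refactoring work this model requires is exactly this partitioning exercise: deciding which code and data belong behind the boundary, and keeping the interface that crosses it as small as possible.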

Software stack showing app level TEE.

Model #2: App-level TEE — The second emerging model for building and delivering TEEs is the per-app model, also referred to as the “lift and shift” model. In this model you don’t refactor or rewrite the application. You build your application as you do today, and as part of deployment you specify the TEE you would like to use. From a security standpoint, the TCB for the data owner is different. You don’t need to use a TEE SDK or understand the intricacies of how the TEE functions. Middleware such as Enarx, an application deployment system—which does not itself need to be a trusted entity—initiates the launch of the application with the configured TEE. The runtime (which is trusted and is present on every compute node) manages the creation of the TEE and the launch of the application inside it. The industry is starting to introduce multiple solutions; let us look at a Red Hat example to illustrate the model. Enarx is a Red Hat project that supports this app-level TEE model. As you can see from the picture, you build your application as you normally do, with your favorite programming language and SDKs, and compile it to WebAssembly (Wasm). Wasm is a low-level, assembly-like language with a compact binary format that runs with near-native performance and gives languages like C++ and Rust a compilation target so they can run on the web and in cloud services. This WebAssembly binary can then be targeted to run with the configured TEE.

No changes to the binary are required; the runtime manages everything. For instance, if the chosen TEE is Intel SGX, the Enarx runtime creates an Intel SGX enclave with the application and launches it. If the choice is a VM-based TEE, the runtime creates a VM with the application and launches it. The runtime also provides the necessary plumbing for attestation of the TEEs, and of itself. Attestation is a critical requirement to truly support Confidential Computing: it gives the owner of the app a definitive mechanism to verify that the TEE is genuine, and proof that the app was not compromised during launch or execution.
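The essence of attestation can be sketched in a few lines: the hardware "measures" (hashes) what was loaded and signs that measurement with a key rooted in the CPU, and a relying party verifies the signature and compares the measurement against the expected one. The sketch below is a hypothetical simplification: real TEEs sign with asymmetric keys verified through certificate chains, and an HMAC with a shared key stands in for that here.

```python
import hashlib
import hmac

# Hypothetical attestation sketch. Real TEEs use asymmetric keys fused into
# the CPU and certificate chains; a shared-key HMAC stands in here.

HW_KEY = b"cpu-rooted-signing-key"     # stands in for a hardware-held key

def quote(loaded_code: bytes):
    """What the 'hardware' produces: a signed measurement of loaded code."""
    measurement = hashlib.sha256(loaded_code).digest()
    signature = hmac.new(HW_KEY, measurement, hashlib.sha256).digest()
    return measurement, signature

def verify(measurement: bytes, signature: bytes, expected_code: bytes) -> bool:
    """What the relying party checks: genuine signature, expected code."""
    expected_sig = hmac.new(HW_KEY, measurement, hashlib.sha256).digest()
    return (hmac.compare_digest(expected_sig, signature)
            and measurement == hashlib.sha256(expected_code).digest())

m, s = quote(b"my application")
print(verify(m, s, b"my application"))   # True
print(verify(m, s, b"tampered app"))     # False
```

Only after this check succeeds would the app owner release secrets (keys, data) into the TEE, which is why attestation is the linchpin of both deployment models.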

Fortanix, Graphene, and LKL are some of the other popular solutions that support this deployment model for Intel SGX-based TEEs using a library OS approach. No changes need to be made to the applications; the Graphene/LKL runtime links the library OS to the application dynamically and launches it. We are glossing over some details on attestation, file/network shields, etc., to illustrate the general workings of the model.

Even though model #1 is the one currently in use, we expect model #2 (the lift-and-shift model) to be the dominant one in the near term, both for new applications and for existing applications. Over the long run, however, as developers gain experience with security and CC SDKs mature, it is possible that model #1 could become the model of choice for some implementers of Confidential Computing.

Customer Usage Scenarios for Confidential Computing

The next question to address is the key customer usage scenarios that would leverage a TEE to deliver Confidential Computing.

With the ubiquity of virtualization, containers, and cloud computing, most applications and workloads either run as VMs or are packaged as containers (running on a bare-metal OS or in a VM). The critical set of usage scenarios is therefore to enable these VMs or containers to be protected using a TEE, which provides the isolation the applications require.

The table below highlights the usage scenarios for Confidential Computing (CC). Details for each of these scenarios will be the focus of the next set of blogs.

CC Usage Scenarios and Descriptions

1. Isolation of applications running in VMs — No changes to, or refactoring of, apps needed. This covers both legacy VMs and new VMs. The unit of isolation is the VM, meaning the entire VM launches inside a TEE. Existing end-to-end data protection principles help ensure security beyond isolation.
2. Isolation of bare-metal application containers — No changes or refactoring of apps needed. The unit of isolation is the container; every application container is launched inside a TEE. This should support both long-running containers, such as network function virtualization (NFV) workloads, and short-burst containers, which are sensitive to start-up and teardown times.
3. Isolation of application containers running in a VM — Multiple applications are launched as containers inside a virtual machine (VM), and isolation is at the VM level. The VM (and its containers) all execute inside the TEE; all the containers have the same level of isolation inside the VM.
4. Isolation of functions as computing units — Today, most function-as-a-service (FaaS) platforms use the container model to execute functions. The unit of isolation is the container: to support CC, the container is launched in a TEE, and functions are injected into it at runtime for execution. This model will evolve as function computing gains broader adoption with new technologies and approaches.

Confidential Computing is a promising paradigm for providing in-use protection for applications and data. This first blog has hopefully provided the background on the TEE-based approach to enabling Confidential Computing and the deployment models the industry is developing, and introduced some customer usage scenarios. Stay tuned for the next blog in this series, where we elaborate on the usage scenarios and highlight the reference architecture with the different solution stacks.

Written by Raghu Yeluri, Sr. Principal Engineer, DPG (Data Platforms group) at Intel Corporation, and Murugiah Souppaya, Computer Scientist, Computer Security Division, Information Technology Laboratory at the National Institute of Standards and Technology (NIST).

Notices & Disclaimers
Intel technologies may require enabled hardware, software or service activation.
No product or component can be absolutely secure.
Your costs and results may vary.
© Intel Corporation. Intel, the Intel logo, and other Intel marks are trademarks of Intel Corporation in the U.S. and/or other countries. Other names and brands may be claimed as the property of others.