Architecture

In this section, whenever containers are mentioned, serverless containers are always included. A much simpler architecture than with SGX can be used here, at the cost of a greatly increased trusted computing base. Here we can fully leverage the power of AMD SEV-SNP: because the VM is protected by confidential computing and only the cloud customer has access to it, the entire Kubernetes control plane can be deployed inside it. AMD SEV-SNP does not provide disk encryption out of the box, so a third-party solution must be used. Disk encryption with integrity protection is a strict requirement here, as some trust is placed in Kubernetes' etcd store. The cloud provider can mount a disk into the virtual machine, which can be used to persist data across VM restarts.
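
As a concrete illustration, a kubeadm ClusterConfiguration can point etcd's data directory at such an encrypted disk. This is a minimal sketch; the mount path /mnt/encrypted-disk is a placeholder, and it assumes the disk was already set up out of band with an encryption scheme that provides integrity protection (e.g., dm-crypt combined with dm-integrity) before the control plane starts.

```yaml
# Sketch: keep etcd's data on an encrypted, integrity-protected mount.
# The path is a placeholder; SEV-SNP does not encrypt disks itself, so
# the mount is assumed to come from a third-party disk encryption setup.
apiVersion: kubeadm.k8s.io/v1beta3
kind: ClusterConfiguration
etcd:
  local:
    # Default is /var/lib/etcd; redirect it onto the encrypted disk.
    dataDir: /mnt/encrypted-disk/etcd
```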

Remote attestation is used to differentiate between different cVMs and only measures the cVM at startup. Containers, on the other hand, are deployed inside the cVM at runtime and are therefore not covered by the measurement, so remote attestation cannot identify a single application. In addition, etcd stores its data, including secrets and configurations, on disk. The task is to inject secrets into a container at startup without leaking them to an adversary. The best practice is to pass them in as environment variables. However, since the configuration containing these environment variables is stored unencrypted in etcd, an attacker can read it unless the disk is encrypted. Hardcoding secrets into the container image is not a solution either; it merely moves the problem from the configuration to the image. Disk encryption is therefore a requirement for the cVM running the control plane.
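
A minimal sketch of the environment-variable approach, using a Kubernetes Secret as the source (all names and values here are hypothetical). Note that both objects end up in etcd, which is exactly why the disk backing etcd must be encrypted:

```yaml
# Hypothetical secret; stored in etcd, hence the disk-encryption requirement.
apiVersion: v1
kind: Secret
metadata:
  name: db-credentials
type: Opaque
stringData:
  password: s3cr3t   # example value
---
# Pod that receives the secret as an environment variable at startup.
apiVersion: v1
kind: Pod
metadata:
  name: example-app
spec:
  containers:
    - name: app
      image: registry.example.com/app:latest  # placeholder image
      env:
        - name: DB_PASSWORD
          valueFrom:
            secretKeyRef:
              name: db-credentials
              key: password
```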

Moreover, since etcd is encrypted, Kubernetes' secret storage can be used securely. This allows Knative's auto-tls to be enabled. If applications are startup-time sensitive, I'd recommend turning it off, because obtaining a certificate from Let's Encrypt takes a few seconds. Alternatively, a wildcard certificate can be stored in etcd and used by the Istio gateway. As mentioned in Confidential Knative, it is important that the Queue Proxy and the Activator are protected by confidential computing. If there are nodes in the cluster that are not confidential VMs, it must be ensured that the Activators, Queue Proxies, the KMS and the serverless containers are only scheduled on confidential VMs, for example via the nodeSelector field in the pod specification, as sketched below. Since Activators and revisions can be deployed on demand by the Autoscaler, the Autoscaler code would also need to be modified. Therefore, for simplicity, only trusted VMs should be in the cluster.
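
As a sketch of the nodeSelector approach: assuming the confidential nodes carry a label such as runtime=sev-snp (a hypothetical label name), a Deployment can be pinned to them as below. The same nodeSelector would have to be applied to the Activator, Queue Proxy, KMS and serverless container pods; for Knative-managed pods this means adjusting the corresponding Knative deployment configuration.

```yaml
# Sketch: pin a workload to confidential nodes. The label key/value
# (runtime: sev-snp) is an assumed convention; nodes must be labelled
# accordingly, e.g. `kubectl label node <node> runtime=sev-snp`.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: kms
spec:
  replicas: 1
  selector:
    matchLabels:
      app: kms
  template:
    metadata:
      labels:
        app: kms
    spec:
      nodeSelector:
        runtime: sev-snp   # only schedule on SEV-SNP nodes
      containers:
        - name: kms
          image: registry.example.com/kms:latest  # placeholder image
```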

With disk encryption enabled, one could argue whether a KMS is really necessary. However, disk encryption provides only data-at-rest encryption; it offers no access policies or per-secret permissions. Thus, anyone with access to the cVM's file system could in principle read all secrets at once. This could happen through vulnerabilities in the trusted computing base, such as in the serverless containers. To maximize security, a KMS should be used to control access to secrets.

Separation of containers

The Kubernetes control plane and the KMS should each run in their own confidential VM. It is important to ensure that nothing else runs in these two cVMs, to keep the TCB inside each cVM small. If the KMS is in its own cVM, disk encryption isn't strictly necessary there, because the KMS already provides data-at-rest confidentiality and integrity for the secrets it stores.
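
One way to keep these cVMs dedicated is to taint their nodes so that no other workloads can be scheduled onto them; only the intended pods carry the matching toleration. This is a sketch with a hypothetical taint key and value:

```yaml
# Sketch: reserve a node for the KMS only. First taint the node
# (hypothetical key/value), e.g.:
#   kubectl taint node <kms-node> dedicated=kms:NoSchedule
# Then only pods with the matching toleration can be scheduled there.
apiVersion: v1
kind: Pod
metadata:
  name: kms
spec:
  tolerations:
    - key: dedicated
      operator: Equal
      value: kms
      effect: NoSchedule
  nodeSelector:
    dedicated: kms   # assumes the node is also labelled dedicated=kms
  containers:
    - name: kms
      image: registry.example.com/kms:latest  # placeholder image
```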

In addition, one or more confidential VMs should be provided to deploy the serverless containers. Because of this separation, vulnerabilities within one cVM cannot affect other cVMs. Each trusted VM should not blindly trust any other trusted VM in the cluster. Because isolation based on containers alone is weak, Kata Containers can be used as the container runtime for the serverless containers (see the sketch below). Kata Containers runs each container in its own lightweight VM; since the guest is itself a VM, its hypervisor would have to create another VM inside it, a so-called nested VM. As mentioned above, most cloud providers do not allow nested virtualization, so a bare-metal machine is mandatory for this approach. Without a bare-metal machine, only containerization can be used.

However, AMD SEV-SNP VMPLs offer a way to protect the guest OS even then. In the cVMs where the serverless containers are deployed, the guest OS and the Kubelet should run at VMPL 1 and all serverless containers at VMPL 3. This provides additional hardware isolation to protect the guest OS from the serverless containers: the guest OS runs on its own vCPUs, and memory accesses from less privileged VMPLs are denied. Protecting the guest OS is important because it is more privileged software and can access all running serverless containers. For the KMS and control-plane cVMs, VMPLs are not as important, because the KMS and control plane are the only software running there and already have good isolation. Also, for further isolation from the guest OS, each serverless container should run under a Lib-OS.
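
A sketch of wiring up Kata Containers as the runtime for the serverless containers on the bare-metal nodes. It assumes a `kata` handler is already configured in containerd on those nodes; the handler and class names follow the common convention but are assumptions here:

```yaml
# Sketch: register Kata Containers as a runtime class and use it for a
# serverless container. Assumes the `kata` handler is configured in
# containerd on bare-metal nodes (otherwise nested virtualization would
# be required, which most cloud providers do not allow).
apiVersion: node.k8s.io/v1
kind: RuntimeClass
metadata:
  name: kata
handler: kata
---
apiVersion: v1
kind: Pod
metadata:
  name: serverless-app
spec:
  runtimeClassName: kata   # run this pod in a lightweight Kata VM
  containers:
    - name: app
      image: registry.example.com/app:latest  # placeholder image
```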
