Confidential Computing and Application Security

What Is Confidential Computing?

Confidential Computing is a security approach that protects data while it is actively being processed, not just when it is stored on disk or transmitted over a network. Traditional security models focus on encryption at rest and encryption in transit, but they leave a critical gap: once data is loaded into memory for computation, it typically exists in plaintext and can be accessed by privileged software.

In a conventional system without confidential computing:

  • Data at rest is protected with disk encryption
  • Data in transit is protected with TLS
  • Data in use is exposed in system memory, where it can be accessed by the operating system, hypervisor, or anyone with sufficient privileges

Confidential computing closes this gap by extending protection to data in use, ensuring sensitive workloads remain protected even while they are running.


How Confidential Computing Works

At the core of confidential computing are hardware-based Trusted Execution Environments (TEEs) built directly into modern CPUs. These TEEs create isolated execution contexts—often referred to as enclaves or confidential VMs—that are cryptographically protected from the rest of the system.

When a workload runs inside a TEE:

  • Memory contents are encrypted in RAM using hardware-managed keys
  • Encryption and decryption happen transparently within the CPU
  • The host operating system and hypervisor cannot read or tamper with the data

Even if the host is compromised, the confidentiality and integrity of the workload remain intact. This dramatically reduces the trust placed in infrastructure and privileged software.


Isolation and Threat Model

Confidential Computing is designed to protect against powerful insider and infrastructure-level threats, including:

  • Malicious or compromised operating systems
  • Cloud administrators with elevated privileges
  • Hypervisor-level attacks
  • Malware attempting memory inspection or injection

By shifting trust from software to hardware, confidential computing enforces strong isolation boundaries that are difficult to bypass without physical access to the CPU itself.


Remote Attestation: Verifying Trust Before Sharing Secrets

A critical capability of confidential computing is remote attestation. Attestation allows a workload to produce cryptographic proof that it is:

  • Running inside a genuine TEE
  • Executing expected code (measured by a cryptographic hash)
  • Enforcing specific security policies

Before secrets, credentials, or sensitive data are released, an external system can verify this proof. Only if the attestation succeeds are secrets injected into the environment. This enables zero-trust deployment models, where trust is established dynamically rather than assumed.
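The attestation handshake above can be sketched in a few lines of Python. This is a simplified illustration under loose assumptions, not a real attestation protocol: a genuine TEE quote is signed by a hardware-rooted key and verified against the CPU vendor's certificate chain, whereas here an HMAC over the measurement stands in for the hardware signature, and the names (`make_quote`, `verify_quote`) are hypothetical.

```python
import hashlib
import hmac

# The relying party only trusts workloads whose code hash matches this value.
EXPECTED_MEASUREMENT = hashlib.sha256(b"trusted-workload-v1.0").hexdigest()

def make_quote(workload: bytes, policy: str, signing_key: bytes) -> dict:
    """Simulate the TEE producing a signed measurement of its code."""
    measurement = hashlib.sha256(workload).hexdigest()
    payload = f"{measurement}|{policy}".encode()
    return {
        "measurement": measurement,
        "policy": policy,
        "signature": hmac.new(signing_key, payload, hashlib.sha256).hexdigest(),
    }

def verify_quote(quote: dict, signing_key: bytes) -> bool:
    """Relying party: check the signature, then compare the measurement."""
    payload = f"{quote['measurement']}|{quote['policy']}".encode()
    expected_sig = hmac.new(signing_key, payload, hashlib.sha256).hexdigest()
    if not hmac.compare_digest(expected_sig, quote["signature"]):
        return False                      # quote not produced by a genuine TEE
    return quote["measurement"] == EXPECTED_MEASUREMENT

key = b"hardware-root-key"                # stands in for the CPU vendor's key
good = make_quote(b"trusted-workload-v1.0", "no-debug", key)
bad = make_quote(b"tampered-workload", "no-debug", key)
print(verify_quote(good, key))  # True  -> release secrets
print(verify_quote(bad, key))   # False -> refuse
```

The key property is that the decision to release secrets rests on a verified code measurement, not on trust in the host that launched the workload.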


Why This Matters in Practice

Remote attestation allows organizations to:

  • Run sensitive workloads on shared cloud infrastructure
  • Use third-party services without exposing raw data
  • Limit access to secrets to verified execution contexts
  • Produce auditable evidence of how and where code ran

This moves security from a “trust the platform” model to a “verify the runtime” model.


Common TEE Technologies

Several major TEE implementations are used in confidential computing today:

  • Intel SGX – Enclave-based isolation at the application level, offering strong security but with memory and programming constraints
  • Intel TDX – VM-based confidential computing, designed for cloud-scale workloads
  • AMD SEV / SEV-SNP – VM-level memory encryption with integrity protection, widely adopted by cloud providers
  • ARM TrustZone – Secure and normal world separation, commonly used in mobile and embedded systems
  • NVIDIA Hopper – GPU-based isolation and memory encryption, used in conjunction with CPU-based TEE technologies

Each technology makes different tradeoffs between performance, isolation granularity, and operational complexity, but all share the same core goal: protecting data in use.


Summary

Confidential Computing fundamentally changes the cloud trust model by ensuring that sensitive code and data remain protected even during execution. By combining hardware-enforced isolation with cryptographic attestation, it enables secure processing of high-value workloads in environments that were previously considered untrusted. This makes it a foundational technology for modern security-sensitive systems, including application security, data analytics, and AI-driven workloads.


Confidential Computing and Application Security

We explored how Confidential Computing can be impactful for Application Security. Many of the workflows we analyzed require deep access to sensitive assets—source code, secrets, and internal context—creating risk when run on shared or third-party infrastructure. Confidential Computing reduces this risk by protecting workloads while they execute.


1. Secure access to sensitive source code

Modern application security tools often require access to:

  • Full proprietary source code
  • Configuration files and build artifacts
  • Internal architecture and dependency details

Confidential computing enables:

  • Analysis of code without exposing it outside the trusted execution environment
  • No plaintext source code visible to the host OS, hypervisor, or cloud operator
  • Reduced insider and infrastructure-level threat

This allows organizations to run advanced Application Security workflows without expanding their trust boundary.


2. Safe vulnerability scanning and remediation

Application Security systems may:

  • Scan closed-source or regulated codebases
  • Run SAST, DAST, or dependency analysis
  • Produce remediation guidance or patches

When executed inside a confidential environment:

  • Source code and intermediate findings never leave protected memory
  • Analysis results are tied to a verified execution context
  • Outputs can be cryptographically signed and audited

This increases confidence in both the findings and the process that produced them.
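Signing the findings inside the protected environment can be sketched as follows. This is a hedged illustration, not a production scheme: a real enclave would sign with an attested, hardware-bound key, while here a symmetric HMAC key stands in for it, and the helper names (`sign_findings`, `verify_findings`) are invented for the example.

```python
import hashlib
import hmac
import json

# Stand-in for a key derived inside the enclave and bound to the attested run.
RUN_KEY = b"key-derived-inside-the-enclave"

def sign_findings(findings: list, measurement: str) -> dict:
    """Bind scan results to the code measurement that produced them."""
    record = {"measurement": measurement, "findings": findings}
    body = json.dumps(record, sort_keys=True).encode()
    record["signature"] = hmac.new(RUN_KEY, body, hashlib.sha256).hexdigest()
    return record

def verify_findings(record: dict) -> bool:
    """Auditor: recompute the signature over everything but the signature."""
    unsigned = {k: v for k, v in record.items() if k != "signature"}
    body = json.dumps(unsigned, sort_keys=True).encode()
    expected = hmac.new(RUN_KEY, body, hashlib.sha256).hexdigest()
    return hmac.compare_digest(expected, record["signature"])

record = sign_findings(["sql-injection in /login"], measurement="abc123")
print(verify_findings(record))        # True: untampered
record["findings"].append("forged")
print(verify_findings(record))        # False: results were altered
```

Because the measurement is part of the signed record, an auditor can later check both that the findings are intact and which code version produced them.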


3. Protecting secrets used in workflows

Application Security tooling frequently requires:

  • Repository credentials
  • Cloud or artifact registry keys
  • CI/CD and scanning tokens

With Confidential Computing:

  • Secrets are released only after successful attestation
  • Credentials exist solely inside encrypted memory
  • Secrets cannot be dumped or reused outside the trusted environment

This significantly reduces the risk of credential theft and supply-chain compromise.
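The attestation-gated release pattern above can be sketched as a small secrets broker. Every name here (`SecretsBroker`, `release`, the allow-list) is hypothetical; real deployments would use a managed service such as a cloud KMS that checks an attestation document before decrypting.

```python
import hashlib

# Only environments whose attested code measurement is on this allow-list
# may receive credentials.
ALLOWED_MEASUREMENTS = {hashlib.sha256(b"scanner-image-v2").hexdigest()}

class SecretsBroker:
    """Toy broker: hands out a secret only after a measurement check."""

    def __init__(self, secrets: dict):
        self._secrets = secrets

    def release(self, measurement: str, name: str) -> str:
        if measurement not in ALLOWED_MEASUREMENTS:
            raise PermissionError("attestation failed: secret withheld")
        return self._secrets[name]

broker = SecretsBroker({"repo-token": "ghp-example-token"})

# A genuine scanner image attests successfully and receives the token.
attested = hashlib.sha256(b"scanner-image-v2").hexdigest()
print(broker.release(attested, "repo-token"))

# A modified image measures differently, so the broker refuses.
rogue = hashlib.sha256(b"patched-scanner").hexdigest()
try:
    broker.release(rogue, "repo-token")
except PermissionError as err:
    print(err)
```

The credential never exists outside the verified environment: a tampered image produces a different measurement and simply never receives it.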


4. Trusted execution in CI/CD pipelines

Confidential Computing enables zero-trust Application Security pipelines:

  1. CI/CD launches a confidential workload
  2. The environment attests its hardware, code, and policy
  3. Scoped secrets and repo access are injected
  4. Security tasks are performed
  5. Results are emitted as signed artifacts

This provides:

  • Non-repudiable audit trails
  • Strong guarantees about what ran and where
  • Compliance-friendly security automation
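The five pipeline steps above can be sketched end to end. This is a schematic under stated assumptions: every function name is hypothetical, the "findings" are placeholders, and an HMAC stands in for the enclave's hardware-backed signing key.

```python
import hashlib
import hmac
import json

SIGNING_KEY = b"enclave-sealed-key"   # in practice derived inside the TEE

def launch_confidential_workload(image: bytes) -> str:
    # 1. CI/CD launches the workload; the TEE measures the image.
    return hashlib.sha256(image).hexdigest()

def attest(measurement: str, allowed: set) -> bool:
    # 2. The environment proves its code matches an approved build.
    return measurement in allowed

def inject_secrets(ok: bool) -> dict:
    # 3. Scoped, short-lived credentials are released only on success.
    return {"repo_token": "scoped-token"} if ok else {}

def run_security_tasks(secrets: dict) -> dict:
    # 4. Scans run inside the enclave; placeholder findings here.
    assert "repo_token" in secrets, "no credentials: attestation failed"
    return {"findings": ["CVE-2024-0001 in libexample"]}

def emit_signed_artifact(results: dict) -> dict:
    # 5. Results leave the enclave as a signed, auditable artifact.
    body = json.dumps(results, sort_keys=True).encode()
    sig = hmac.new(SIGNING_KEY, body, hashlib.sha256).hexdigest()
    return {"results": results, "signature": sig}

m = launch_confidential_workload(b"scanner-build-42")
ok = attest(m, allowed={hashlib.sha256(b"scanner-build-42").hexdigest()})
artifact = emit_signed_artifact(run_security_tasks(inject_secrets(ok)))
print(artifact["results"], artifact["signature"][:16])
```

Each step gates the next: if attestation fails, no secrets are injected and the scan cannot run, which is what makes the resulting audit trail non-repudiable.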

Application Security use cases

  • Secure SAST analysis of closed-source applications
  • Automated vulnerability remediation with temporary, scoped access
  • Third-party Security tools operating on sensitive enterprise code
  • Bug bounty or coordinated disclosure workflows without source exposure
  • Protecting AI inputs and outputs, API keys, secrets and embeddings
  • Protecting AI models themselves – weights, fine-tuning data, and related assets
  • Verifiable AI execution with enhanced trust and security policies

Cloud Provider Support

We explored the Confidential Computing support across all our partner cloud providers – AWS, Microsoft Azure, and Google Cloud. Each provider takes a different approach to protecting applications.

AWS: Strong Isolation with Enclave-Based Security

AWS approaches confidential computing with a strong focus on isolation and least privilege, most notably through EC2 Confidential Instances (AMD SEV-SNP) and Nitro Enclaves. These technologies ensure that application code, source artifacts, and secrets remain encrypted in memory and inaccessible to the host OS, hypervisor, or AWS operators. This makes AWS well suited for Application Security workflows that process highly sensitive or regulated codebases.

A key differentiator is AWS’s attestation-first model. Using the Nitro Attestation Service, Security workloads can prove their integrity before being granted access to secrets stored in AWS KMS or Secrets Manager. This enables patterns where security scans or remediation tasks receive credentials only for the duration of a verified run, reducing blast radius and supply-chain risk. AWS integrates cleanly with existing CI/CD systems, making it a strong fit for organizations prioritizing enclave-style security boundaries.


Microsoft Azure: Enterprise-Grade Confidential Computing

Azure offers one of the most comprehensive confidential computing platforms, centered around Azure Confidential VMs (supporting both AMD SEV-SNP and Intel TDX) and Azure Confidential Containers. These services are designed to integrate deeply with enterprise identity, governance, and compliance frameworks, which is particularly valuable for Application Security teams operating in regulated environments.

Azure’s Attestation Service and Managed Identity + Key Vault combination allows Security workloads to securely prove their runtime integrity and receive secrets without embedding trust in the underlying infrastructure. This makes it easier to run security analysis, vulnerability scanning, or remediation pipelines in a zero-trust model. Azure is often favored where strong policy enforcement, auditability, and enterprise integration are as important as raw isolation.


Google Cloud: Confidential-by-Default Infrastructure

Google Cloud takes a more infrastructure-native approach with Confidential VMs, GKE Confidential Nodes, and a broader philosophy of “secure by default.” Memory encryption using AMD SEV protects Application Security workloads transparently, whether they run as VMs or containerized services, making Google Cloud especially attractive for Kubernetes-centric security platforms.

While Google Cloud’s attestation model is less enclave-centric than AWS or Azure, it integrates Shielded VMs, vTPM, Cloud KMS, and Secret Manager to provide strong guarantees around workload integrity and secret handling. This works well for large-scale, automated Application Security scanning and dependency analysis where workloads need to scale horizontally while still protecting proprietary code and findings.


When to choose which provider

Given these varying offerings, we believe each cloud provider is best suited to one or more of the categories below:

  • Maximum isolation / zero trust → Azure or AWS
  • Kubernetes-native scanning agents → Google Cloud or Azure
  • Enclave-style secret handling → AWS Nitro Enclaves
  • Enterprise compliance & governance → Azure

Conclusion

At Pervaziv AI, we have security experts who have spent many years working on Confidential Computing and other security/AI domains. We intend to bring our research work to fruition soon!
