For decades, cybersecurity leaders have repeated the mantra: “We must protect data at rest and in motion with good encryption.” But as attackers grow more sophisticated, insider threats rise, and AI systems are fed increasingly rich live data, this approach falls short.
Data is never more vulnerable than when it’s in use. That’s exactly where confidential computing steps in to strengthen defenses and change the game.
Most organizations focus on encrypting data at rest or in transit, but they often overlook what happens when systems actively process data in memory.
For example, a government agency may want to feed live sensitive data into an AI model to detect patterns in real time. Any application that reads from an encrypted database must at some point hold decryption keys and plaintext in memory, whether it is a banking system or a tax application processing sensitive customer records. Attackers understand this gap well: once inside, they can grab sensitive code, data, keys, and credentials, then move laterally undetected.
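To see why the in-use stage is exposed, consider a minimal sketch of this pattern in Python. The key source, helper names, and record handling are purely illustrative; the point is that even when records are encrypted at rest, both the decryption key and the plaintext must sit in the application's process memory while the data is in use, which is exactly what memory-scraping attacks target.

```python
import os
from cryptography.fernet import Fernet

# Illustrative only: in a real deployment the key would come from a KMS or HSM,
# but at decryption time it still has to be loaded into process memory.
key = os.environ["RECORDS_KEY"]          # hypothetical key source
cipher = Fernet(key)

def process_customer_record(encrypted_row: bytes) -> None:
    # The moment decrypt() runs, both the key material and the plaintext
    # customer record exist in this process's RAM; this is the in-use gap
    # that encryption at rest and in transit does not cover.
    plaintext = cipher.decrypt(encrypted_row)
    apply_business_rules(plaintext)       # hypothetical downstream processing

def apply_business_rules(record: bytes) -> None:
    ...  # banking or tax logic would operate on the decrypted record here
```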
A recent assessment by the CISA Red Team, a group that simulates real-world cyberattacks to uncover security gaps, highlighted this risk clearly: the team extracted a decryption key from system memory using KeeThief, an open-source attack tool, and used it to unlock an entire database.
Closing the gap that occurs when data is being processed is fundamental for organizations that want to prevent major breaches and maintain business continuity. That’s why more teams are turning to confidential computing.
Confidential computing protects code and data during processing by running workloads inside secure enclaves: isolated, hardware-protected environments within modern CPUs that stay confidential even from cloud operators and privileged insiders. This is made possible through:

- Hardware-enforced isolation, which walls the enclave off from the host operating system, the hypervisor, and other tenants
- Memory encryption, which keeps data encrypted in RAM so it cannot simply be scraped from system memory
- Remote attestation, which lets a workload prove cryptographically that it is running genuine, unmodified code inside a real enclave before secrets are released to it
Together, these mechanisms prevent unauthorized access even if other parts of the system are compromised.
This approach builds on Zero Trust principles, which assume no user or device should be trusted by default and require continuous verification. Confidential computing takes this further by providing hardware-backed evidence that workloads are running in a protected environment before code executes or sensitive data is exposed.
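As a concrete illustration of that verify-before-you-trust flow, the sketch below shows the general shape of attestation-gated key release. The report fields, expected measurement, and helper names are hypothetical stand-ins; real attestation uses hardware-signed evidence verified against the CPU vendor's certificates rather than a plain dictionary. The point is that the relying party checks hardware-backed proof of the workload's identity before handing over any secret.

```python
import hmac
import secrets

# Hypothetical value: a real attestation report is a hardware-signed structure
# (for example an SGX/TDX quote or an SEV-SNP report) verified against the CPU
# vendor's certificate chain.
EXPECTED_MEASUREMENT = "known-good-hash-of-approved-workload-build"

def verify_attestation(report: dict) -> bool:
    """Check that the workload's measured identity matches the approved build.

    A production verifier would also validate the report's signature, the
    platform's firmware/TCB versions, and a freshness nonce before trusting it.
    """
    measurement = str(report.get("measurement", ""))
    return hmac.compare_digest(measurement.encode(), EXPECTED_MEASUREMENT.encode())

def release_database_key(report: dict) -> bytes:
    # The decryption key is released only to a workload that has presented
    # hardware-backed evidence that it is the expected code running inside a
    # genuine enclave; otherwise the sensitive data is never exposed to it.
    if not verify_attestation(report):
        raise PermissionError("attestation failed: key withheld")
    return secrets.token_bytes(32)  # placeholder for a key fetched from a KMS
```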
Beyond preventing data leaks, confidential computing supports a stronger security posture that aligns with rising global expectations around privacy and sovereignty. As governments introduce stricter data residency and compliance mandates, organizations increasingly need verifiable assurance that sensitive workloads remain shielded at every stage, from storage to active use.
By ensuring data confidentiality throughout its entire lifecycle, organizations can strengthen their credibility, maintain data sovereignty, and build trust with customers, partners, and regulators. It also opens the door to new cross-border collaborations and secure multi-party data sharing initiatives, expanding business opportunities without compromising on control.
Confidential computing does not require exotic, special-purpose hardware either. Every major silicon vendor (e.g., Intel, AMD, ARM, NVIDIA) has introduced confidential computing capabilities in the last five years, all major cloud providers now offer confidential instances, and recent servers in your data center are likely already equipped with the essential features.
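As a rough way to check whether a Linux machine you already run exposes these capabilities, you can look for the relevant CPU feature flags. The sketch below scans /proc/cpuinfo for a few commonly seen flag names; flag names and how support is surfaced vary by vendor, CPU generation, and kernel version, so treat this as a heuristic rather than an authoritative inventory.

```python
# Heuristic check for confidential computing CPU flags on Linux.
# Flag names vary by vendor and kernel version; "sgx" (Intel SGX) and
# "sev"/"sev_es" (AMD SEV) are common examples, and newer technologies
# (Intel TDX, AMD SEV-SNP, ARM CCA) may be exposed differently.
CANDIDATE_FLAGS = {"sgx", "sev", "sev_es", "sev_snp", "tdx_guest"}

def detect_confidential_flags(cpuinfo_path: str = "/proc/cpuinfo") -> set[str]:
    found: set[str] = set()
    with open(cpuinfo_path) as f:
        for line in f:
            if line.startswith("flags"):
                found |= CANDIDATE_FLAGS & set(line.split(":", 1)[1].split())
    return found

if __name__ == "__main__":
    flags = detect_confidential_flags()
    if flags:
        print(f"Confidential computing related CPU flags found: {sorted(flags)}")
    else:
        print("No well-known confidential computing flags detected "
              "(this does not rule out support).")
```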
Today, confidential computing is being deployed across industries and critical systems, marking a new era in data security.