What Is Confidential Computing and How ChatLock Uses It
Confidential computing protects data while it's being processed — not just when it's stored or in transit. Learn how ChatLock leverages this technology to keep your AI conversations truly private.
The Missing Layer of Data Protection
Traditional security focuses on two states: data at rest (encrypted on disk) and data in transit (encrypted over the network). But there's a third state that's often overlooked — data in use.
When an AI model processes your prompt, the text must be decrypted in memory so the model can read it. In a conventional setup, anyone with access to the server — an engineer, an attacker, or even the cloud provider — could theoretically read that data during processing.
Confidential computing eliminates this window of exposure.
How Trusted Execution Environments Work
At the heart of confidential computing is the Trusted Execution Environment (TEE), a hardware-enforced secure zone inside the processor. Here's the simplified flow:
- Isolation — the TEE creates an encrypted memory region that the host operating system cannot access.
- Encrypted processing — your data enters the enclave already encrypted and is only decrypted inside the protected boundary.
- Attestation — the hardware produces a cryptographic proof that the enclave is running the expected code, unmodified. You can verify this proof independently.
Even if someone gains root access to the server, they cannot see or tamper with the data inside the TEE.
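In simplified terms, the attestation step boils down to checking that a hardware-produced measurement of the enclave's code matches a known-good value. The Python sketch below illustrates only that comparison; real attestation reports are signed by the CPU vendor's keys and verified against a certificate chain, and the measurement function, report values, and code strings here are illustrative stand-ins, not any vendor's actual API.

```python
import hashlib

# Illustrative only: the core of attestation is comparing a measurement
# (a hash of the code loaded into the enclave) against a known-good value
# published by the service operator.

def measure(enclave_code: bytes) -> str:
    """Compute a measurement (hash) of the code loaded into the enclave."""
    return hashlib.sha256(enclave_code).hexdigest()

def verify_attestation(report_measurement: str, expected_measurement: str) -> bool:
    """Accept the enclave only if it is running exactly the expected code."""
    return report_measurement == expected_measurement

# The service publishes the hash of the code it claims to run...
expected = measure(b"model-server v1.0")

# ...and the hardware reports what is actually loaded in the enclave.
honest_report = measure(b"model-server v1.0")
tampered_report = measure(b"model-server v1.0 + logging backdoor")

print(verify_attestation(honest_report, expected))    # True
print(verify_attestation(tampered_report, expected))  # False
```

Because the hash changes if even one byte of the enclave's code is modified, a tampered server cannot produce a matching measurement.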
Why This Matters for AI
Large language models are hungry for compute, which means they typically run on powerful cloud GPUs. Without confidential computing, you're trusting the cloud operator, the service provider, and every engineer with access to production systems to handle your data responsibly.
With confidential-computing GPUs, the trust model shrinks to the hardware itself. The data is protected by silicon, not by policy.
How ChatLock Implements It
ChatLock routes every AI inference request through confidential-computing GPUs:
- Your prompt is encrypted end-to-end from your browser to the TEE.
- The model processes your prompt inside the enclave, where even ChatLock's own infrastructure cannot observe it.
- The response is encrypted back to you before it leaves the enclave.
- No logs, no copies, no training data — once the response is delivered, the enclave's memory is wiped.
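The lifecycle above can be sketched in a few lines of Python. This is a toy model of the data flow only: the XOR keystream stands in for real authenticated encryption (such as AES-GCM keyed to the attested enclave), the `enclave_handle` function stands in for the actual TEE boundary, and `del` stands in for hardware memory wiping.

```python
import hashlib
import secrets

def keystream(key: bytes, length: int) -> bytes:
    """Derive a keystream from the session key (illustrative, not secure)."""
    out = b""
    counter = 0
    while len(out) < length:
        out += hashlib.sha256(key + counter.to_bytes(4, "big")).digest()
        counter += 1
    return out[:length]

def xor_crypt(key: bytes, data: bytes) -> bytes:
    """Symmetric toy cipher: the same call encrypts and decrypts."""
    return bytes(a ^ b for a, b in zip(data, keystream(key, len(data))))

# 1. The client encrypts the prompt under a key shared only with the enclave.
session_key = secrets.token_bytes(32)
ciphertext = xor_crypt(session_key, b"What is confidential computing?")

# 2. Inside the enclave: decrypt, run inference, encrypt the response.
def enclave_handle(key: bytes, encrypted_prompt: bytes) -> bytes:
    prompt = xor_crypt(key, encrypted_prompt)   # plaintext exists only in here
    response = b"Answer to: " + prompt          # stand-in for model inference
    encrypted_response = xor_crypt(key, response)
    del prompt, response                        # stand-in for memory wiping
    return encrypted_response

# 3. The client decrypts the response outside the enclave.
reply = xor_crypt(session_key, enclave_handle(session_key, ciphertext))
print(reply.decode())  # Answer to: What is confidential computing?
```

The point of the sketch is that plaintext appears only inside `enclave_handle`: everything crossing the boundary in either direction is ciphertext, and nothing persists after the response is returned.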
The cryptographic attestation for each session is available so you can verify these guarantees yourself. This is what we call Verifiable Privacy — you don't have to trust us; you can prove it.
The Bigger Picture
Confidential computing is still relatively new in the AI space, but it's rapidly becoming the standard for privacy-sensitive workloads in finance, healthcare, and government. ChatLock brings this same level of protection to everyday AI conversations.
As AI becomes more capable and more embedded in our lives, the question isn't whether your data should be protected during processing — it's why it wasn't protected from the start.