# Confidential Computing

Many cloud providers rely on operational assurance — policies and administrative controls — to protect customer data. Phoeniqs takes a different approach: technical assurance through cryptographic and hardware-enforced controls that make access to plaintext data technically infeasible, even for our own administrators.

Our philosophy is to protect data across its entire lifecycle: in transit, at rest, and in use.


# Operational vs. Technical Assurance


| Protection Layer | Description |
| --- | --- |
| Data in Transit | All network traffic is encrypted with TLS 1.3, protecting data from interception in motion. |
| Data at Rest | Physical and logical storage encryption with BYOK/KYOK. Phoeniqs cannot decrypt your data. |
| Data in Use | Workloads run inside hardware-based TEEs (AMD SEV, Intel TDX, IBM Hyper Protect). Administrators cannot access plaintext data during computation. |
| Tamper-Proof Attestation | Cryptographic attestation validates the execution environment, guaranteeing that only trusted code and hardware are used. |
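The attestation handshake in the last row can be sketched as follows. A verifier sends a fresh nonce, the TEE returns a report binding that nonce to a measurement (hash) of the code it runs, and the verifier checks the measurement against an allowlist of trusted builds. This is an illustrative toy, not a vendor API: real SEV-SNP and TDX reports are signed with hardware-rooted keys and verified against AMD/Intel certificate chains, which the HMAC below merely stands in for.

```python
import hashlib
import hmac
import os

# Stand-in for the TEE's hardware-rooted signing key (toy only; real TEEs
# use asymmetric keys anchored in vendor certificate chains).
HARDWARE_KEY = os.urandom(32)

# Allowlist of measurements (hashes) of workloads the verifier trusts.
TRUSTED_MEASUREMENTS = {
    hashlib.sha256(b"trusted-workload-v1").hexdigest(),
}

def tee_generate_report(nonce: bytes, workload: bytes) -> dict:
    """Inside the TEE: measure the workload and sign (nonce, measurement)."""
    measurement = hashlib.sha256(workload).hexdigest()
    mac = hmac.new(HARDWARE_KEY, nonce + measurement.encode(), hashlib.sha256)
    return {"nonce": nonce, "measurement": measurement, "sig": mac.hexdigest()}

def verifier_check(report: dict, expected_nonce: bytes) -> bool:
    """Verifier: check freshness, signature, and measurement allowlist."""
    mac = hmac.new(HARDWARE_KEY, report["nonce"] + report["measurement"].encode(),
                   hashlib.sha256)
    return (report["nonce"] == expected_nonce
            and hmac.compare_digest(mac.hexdigest(), report["sig"])
            and report["measurement"] in TRUSTED_MEASUREMENTS)

nonce = os.urandom(16)
good = tee_generate_report(nonce, b"trusted-workload-v1")
bad = tee_generate_report(nonce, b"tampered-workload")
print(verifier_check(good, nonce))  # True
print(verifier_check(bad, nonce))   # False
```

The nonce prevents replay of an old report, and the allowlist is what turns "some TEE is running something" into "the expected code is running in a genuine TEE."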

# Confidential Containers (CoCo) for AI

Confidential Containers (CoCo) leverage trusted execution environments (TEEs) to isolate sensitive workloads from the host OS, other tenants, and the cloud provider.
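From the workload's point of view, opting into CoCo is a scheduling decision: the pod selects a confidential runtime class, and the kubelet launches it inside a TEE-backed VM instead of a plain container. The sketch below shows the shape of such a pod spec as a Python structure; the runtime class name and image are examples, and actual names depend on the CoCo operator installation.

```python
import json

# Illustrative (not authoritative) sketch of a CoCo workload request in
# Kubernetes. The key line is runtimeClassName: it routes the pod to a
# Kata/CoCo runtime backed by a TEE (here an AMD SEV-SNP class, as an
# example name) rather than the default container runtime.
pod_spec = {
    "apiVersion": "v1",
    "kind": "Pod",
    "metadata": {
        "name": "confidential-inference",
    },
    "spec": {
        # Example CoCo runtime class; real names vary by installation.
        "runtimeClassName": "kata-qemu-snp",
        "containers": [{
            "name": "inference",
            # Hypothetical image reference.
            "image": "registry.example.com/llm-server:latest",
        }],
    },
}

print(json.dumps(pod_spec, indent=2))
```

Because isolation is enforced by the hardware-backed VM boundary rather than by namespaces and cgroups, the host OS, other tenants, and the provider's administrators sit outside the trust boundary.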

To address confidential AI, IBM Research has proposed a robust architecture for protecting LLM inference workflows end-to-end:

Confidential AI Workflow

| Component | Protection |
| --- | --- |
| Requestor-to-Proxy | TLS/SSL-encrypted input and output tokens |
| Proxy POD | Confidential Container with AMD SEV-SNP memory protection |
| Inference Server POD | Confidential Container + NVIDIA H100 GPU Confidential Mode |
| Proxy-to-Inference | POD-to-POD transparent encryption (upstreamed by IBM Research) |
| Model Storage | Proprietary models stored encrypted; decrypted only inside the vLLM POD |

# Confidential AI for Model as a Service

Phoeniqs extends confidential computing to GPU workloads — ensuring prompts, model weights, and inference data remain encrypted and isolated during execution, using IBM LinuxONE, Hyper Protect virtualization, and OpenShift AI.



# Architecture & Data Flow

1. **Client encrypts and sends request.** The client encrypts the request and sends it via TLS to the AI Gateway enclave.

2. **Request is processed inside the enclave.** The request is decrypted inside the enclave, policy-checked, and re-encrypted for the Model Serving enclave.

3. **Model executes and returns output.** The model executes inside the enclave; the output is re-encrypted and returned to the client.

4. **Minimal metadata stored.** Only minimal metadata leaves the enclaves, stored per the Data Processing Agreement (DPA).
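The four steps above can be sketched as a pipeline. Encryption is reduced to a reversible placeholder so the structure stays visible: the gateway enclave decrypts, policy-checks, and re-encrypts for the model-serving enclave, and only minimal metadata (never the prompt or output) is persisted. All function and field names here are hypothetical.

```python
import hashlib

def seal(key: str, text: str) -> str:
    """Placeholder for enclave-bound encryption (stands in for TLS/TEE sealing)."""
    return f"{key}:{text}"

def unseal(key: str, blob: str) -> str:
    prefix, _, text = blob.partition(":")
    assert prefix == key, "wrong recipient key"
    return text

metadata_log = []  # the only data persisted outside the enclaves

def gateway_enclave(request_blob: str) -> str:
    prompt = unseal("gateway", request_blob)   # step 2a: decrypt inside the enclave
    assert "forbidden" not in prompt           # step 2b: policy check (toy rule)
    return seal("model", prompt)               # step 2c: re-encrypt for the model enclave

def model_enclave(sealed_prompt: str) -> str:
    prompt = unseal("model", sealed_prompt)
    output = prompt.upper()                    # step 3: toy "inference"
    return seal("client", output)              # re-encrypt for the client

def handle(client_request: str) -> str:
    sealed = gateway_enclave(seal("gateway", client_request))  # step 1: client -> gateway
    response = model_enclave(sealed)
    # Step 4: persist only minimal metadata, never prompt or output text.
    metadata_log.append({
        "prompt_sha256": hashlib.sha256(client_request.encode()).hexdigest(),
        "tokens": len(client_request.split()),
    })
    return unseal("client", response)  # in reality the client decrypts locally

print(handle("summarize this contract"))  # SUMMARIZE THIS CONTRACT
```

Note that plaintext appears only inside the two enclave functions; everything crossing a boundary is sealed for exactly one recipient, mirroring the re-encryption hops in steps 1-3.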


# Security Summary

| Capability | Detail |
| --- | --- |
| Attestation | Remote attestation via Trustee — CPU, GPU, containers, and devices |
| Key Management | HSM-backed BYOK/KYOK — customers own and control all keys |
| Encryption in Transit | TLS 1.3 |
| Encryption at Rest | AES-256 |
| Encryption in Use | TEE memory encryption and runtime isolation |
| Data Residency | All processing and storage exclusively in Switzerland |
| Admin Access | Zero-access — platform admins cannot see plaintext |
| Compliance | ISO 27001, Swiss nFADP, GDPR |

# Phoeniqs vs. Hyperscalers

| Feature | Phoeniqs | Typical Hyperscaler |
| --- | --- | --- |
| Data Residency | Exclusively Switzerland | U.S./EU; subject to CLOUD Act |
| Encryption Keys | Client-controlled BYOK/KYOK | Provider-managed; often escrowed |
| Data in Use | TEEs + GPU confidential mode (H100/H200) | CPU-only enclaves; limited GPU |
| Attestation | CPU, GPU, containers, devices | Basic VM/CPU only |
| Operational Model | Zero-access | Admins retain potential access |
| AI Workload Support | LLMs, GPUs, encrypted models, pod-to-pod encryption | Limited AI/LLM optimization |
| Open-Source | Active CoCo contributor | Closed implementations |

# Notes