Result description
In CONFIDENTIAL6G, we developed a full Confidential Computing stack for secure AI and data processing across cloud environments. A key contribution is a modular Linux-based Hardware Abstraction Layer (HAL) on top of Intel TDX and AMD SEV-SNP, with a minimized Trusted Computing Base (TCB) built from a custom Buildroot Linux and integration of COCONUT-SVSM to isolate workloads inside Confidential VMs (CVMs). We implemented remote and local attestation agents for private and public clouds, supported by Linux IMA/IMS, and enabled secure boot via OVMF firmware to establish a full chain of trust. We contributed to the IETF RATS working group with prototype attestation exchanges and reference measurements, and aligned our work with the Linux Foundation Confidential Computing Consortium (LF CCC) to help define standards for confidential containers. These results underpin CUBE-AI by enabling confidential LLM inference, attested AI deployment, and runtime enforcement of security policies.
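As an illustration of the HAL's platform-detection layer, the sketch below probes for the guest device nodes exposed by the upstream Linux TDX and SEV-SNP guest drivers before dispatching to platform-specific attestation code. It is a minimal sketch under stated assumptions: the device paths are the mainline kernel defaults, and the helper name is hypothetical, not the project's actual API.

```python
import os

# Guest device nodes exposed by the upstream Linux TEE guest drivers.
# These paths reflect mainline defaults; a given distro or kernel
# configuration may differ, so treat them as an assumption of this sketch.
_TEE_DEVICES = {
    "tdx": "/dev/tdx_guest",      # Intel TDX guest driver
    "sev-snp": "/dev/sev-guest",  # AMD SEV-SNP guest driver
}

def detect_tee_platform() -> str | None:
    """Return the TEE platform this CVM runs on, or None outside a CVM.

    Hypothetical helper showing the kind of probe a HAL performs before
    selecting platform-specific attestation code.
    """
    for platform, device in _TEE_DEVICES.items():
        if os.path.exists(device):
            return platform
    return None

if __name__ == "__main__":
    platform = detect_tee_platform()
    if platform is None:
        print("No confidential-computing guest device found")
    else:
        print(f"Running inside a {platform} confidential VM")
```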
Addressing target audiences and expressing needs
Needs expressed:
- Grants and Subsidies
- Collaboration
- Venture Capital
CUBE-AI enables secure AI inference by running models inside Trusted Execution Environments (TEEs), ensuring that sensitive data and proprietary models remain confidential, even during execution, while sustaining high performance. Its uniqueness lies in combining confidential computing with scalable AI deployment, protecting both IP and user data; a sketch of this verify-before-infer flow follows the lists below. By sharing this result, we aim to accelerate trustworthy AI adoption in sectors such as healthcare, finance, and telecom, targeting cloud/edge providers and AI developers.
Target audiences:
- Public or private funding institutions
- Research and Technology Organisations
- Private Investors
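To make the inference flow concrete, here is a minimal sketch of the verify-before-infer pattern described above: the client obtains attestation evidence bound to a fresh nonce, checks the reported measurement against a trusted reference value, and only then releases its prompt. The Evidence structure, reference value, and helper names are all hypothetical stand-ins for hardware-signed evidence, not CUBE-AI's actual protocol.

```python
import hashlib
import os
from dataclasses import dataclass

# Reference measurement the client trusts (e.g. published by the service
# operator). The value here is a placeholder, not a real measurement.
TRUSTED_MEASUREMENT = hashlib.sha256(b"cube-ai-cvm-image-v1").hexdigest()

@dataclass
class Evidence:
    """Simplified stand-in for a TDX quote or SEV-SNP attestation report."""
    measurement: str  # launch measurement of the CVM image
    nonce: bytes      # freshness value chosen by the client

def request_evidence(nonce: bytes) -> Evidence:
    """Mock of the service's attestation endpoint.

    A real deployment would return hardware-signed evidence with the
    nonce embedded in its report data.
    """
    return Evidence(measurement=TRUSTED_MEASUREMENT, nonce=nonce)

def verified_inference(prompt: str) -> str:
    nonce = os.urandom(32)
    evidence = request_evidence(nonce)
    # Reject stale or replayed evidence.
    if evidence.nonce != nonce:
        raise RuntimeError("attestation nonce mismatch")
    # Release the prompt only to a workload with the expected measurement.
    if evidence.measurement != TRUSTED_MEASUREMENT:
        raise RuntimeError("untrusted CVM measurement")
    return f"(attested) response to: {prompt!r}"

if __name__ == "__main__":
    print(verified_inference("Summarise this patient record"))
```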
R&D, Technology and Innovation aspects
Building on CONFIDENTIAL6G results (TRL 3), CUBE-AI delivers a TRL 7+ confidential LLM platform built on Confidential VMs (CVMs), Confidential Containers (CoCo), vTPM, attested TLS, the Buildroot-based HAL, SVSM, and Linux IMA/IMS. It supports Intel TDX, AMD SEV-SNP, and NVIDIA H100 confidential GPUs, and features a secure AI proxy, guardrails, and policy enforcement for auditable, attested inference through a SaaS UI, targeting sensitive domains.
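As an illustration of the guardrail and policy-enforcement layer, the sketch below filters inference requests against a simple deny-pattern policy and redacts obvious PII before a prompt reaches the model, logging each decision for auditability. The policy rules, function names, and log format are hypothetical; they show the shape of the mechanism, not CUBE-AI's actual rule set.

```python
import logging
import re

logging.basicConfig(level=logging.INFO, format="%(levelname)s %(message)s")
log = logging.getLogger("guardrail")

# Hypothetical policy: patterns that must never reach the model, and
# PII patterns that are redacted rather than blocked outright.
DENY_PATTERNS = [re.compile(r"(?i)ignore previous instructions")]
PII_PATTERNS = {
    "EMAIL": re.compile(r"[\w.+-]+@[\w-]+\.[\w.]+"),
    "IBAN": re.compile(r"\b[A-Z]{2}\d{2}[A-Z0-9]{11,30}\b"),
}

def enforce_policy(prompt: str) -> str:
    """Return a sanitized prompt, or raise if policy forbids it."""
    for pattern in DENY_PATTERNS:
        if pattern.search(prompt):
            log.warning("blocked prompt matching %s", pattern.pattern)
            raise PermissionError("prompt violates inference policy")
    for label, pattern in PII_PATTERNS.items():
        prompt, hits = pattern.subn(f"[{label} REDACTED]", prompt)
        if hits:
            log.info("redacted %d %s value(s)", hits, label)
    return prompt

if __name__ == "__main__":
    print(enforce_policy(
        "Contact alice@example.com about invoice DE89370400440532013000"
    ))
```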
CUBE-AI scales across SaaS, private cloud, and on-prem environments. Its modular, confidential-by-design architecture—built on containers, TEEs, and attestation—supports GPU acceleration, secure multi-user access, and cost-efficient expansion, enabling strong unit economics and digital sovereignty.
Yes, the result is replicable. CUBE-AI is built from open-source components, cloud-native orchestration, and standardized Confidential Computing technologies (e.g., CoCo, SVSM, RATS), allowing it to be redeployed in public cloud, private cloud, and on-premise setups. Its modular design supports adaptation to new use cases and verticals without reengineering the full stack, enabling fast replication across domains; a deployment sketch follows below.
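As a concrete hint of what redeployment looks like, the sketch below assembles a Kubernetes pod manifest that pins a workload to a Confidential Containers runtime class. The runtime class names mirror those published by the CoCo project (e.g. kata-qemu-tdx, kata-qemu-snp), but the exact names depend on the installed CoCo release, and the image name and helper function are hypothetical.

```python
import json

# Runtime classes installed by the CoCo operator; verify the exact
# names against your cluster, as they vary by release and platform.
RUNTIME_CLASSES = {"tdx": "kata-qemu-tdx", "sev-snp": "kata-qemu-snp"}

def coco_pod_manifest(name: str, image: str, platform: str) -> dict:
    """Build a minimal pod spec pinned to a TEE-backed runtime class."""
    return {
        "apiVersion": "v1",
        "kind": "Pod",
        "metadata": {"name": name},
        "spec": {
            # Selecting the runtime class is what moves the pod into a CVM.
            "runtimeClassName": RUNTIME_CLASSES[platform],
            "containers": [{"name": name, "image": image}],
        },
    }

if __name__ == "__main__":
    manifest = coco_pod_manifest(
        "llm-inference", "registry.example/llm:latest", "tdx"
    )
    print(json.dumps(manifest, indent=2))
```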
Yes. The sustainability of CUBE-AI is ensured by its open architecture, modular Confidential Computing design, and flexible deployment models (SaaS, private cloud, on-prem). It aligns with long-term EU priorities such as digital sovereignty, green computing, and trusted AI. Its business model—based on subscriptions, enterprise services, and open innovation—allows continued growth while reducing vendor lock-in and promoting ecosystem reuse.
Result submitted to Horizon Results Platform by ULTRAVIOLET CONSULT DOO
