The EU AI Act's high-risk obligations become enforceable on 2 August 2026, with penalties of up to €15M or 3% of global annual turnover. VoltageGPU runs every confidential AI agent inside Intel TDX hardware enclaves with per-request ECDSA attestation, providing direct evidence for Article 12 logging, Article 14 human oversight, Article 15 robustness, and GDPR Article 32 confidentiality. EU controller (VOLTAGE EI, France, SIREN 943 808 824); a native RGPD Article 28 DPA is available without negotiation.
Annex III high-risk systems we cover
AI used in employment screening, recruitment and worker management
Credit scoring and creditworthiness assessments
Insurance underwriting (life and health)
Critical infrastructure operation
Justice administration and democratic processes
Law enforcement and migration management
Biometric identification and categorisation
How VoltageGPU compares to OpenAI and Mistral on EU AI Act readiness
OpenAI processes prompts on US infrastructure with limited per-request attestation, complicating Article 32 confidentiality and Article 12 evidence collection. Mistral hosts in the EU, but on standard VMs without hardware-sealed memory encryption, meaning operators can be compelled to read prompts under court order. VoltageGPU runs Qwen3-235B-TEE, DeepSeek-R1-TEE and other TEE models inside Intel TDX, where AES-256 memory-encryption keys never leave the CPU and even VoltageGPU operators cannot extract user data. This is the strictest defensible reading of GDPR Article 32 confidentiality available in 2026.
Trends driving 2026 compliance demand
Gartner expects 40% of enterprise applications to embed AI agents by end of 2026. 54% of IT leaders cite AI governance as a top risk in 2026 surveys. McKinsey calls 2026 the year of sovereign AI. Forrester projects €1.5T of EU sovereign AI spend over the decade. Regulated industries — legal, finance, healthcare, public sector — cannot deploy agentic AI without confidential compute evidence aligned with the EU AI Act Annex III timeline.
Enforcement: 2 August 2026 · €15M / 3% turnover penalty
EU AI Act compliance for agentic AI — ready for August 2026.
Article 12 logging, Article 14 human oversight, Article 15 robustness, Article 32 confidentiality. Sealed in Intel TDX, hosted in France, attested per request.
VOLTAGE EI · French controller · SIREN 943 808 824 · native RGPD Article 28 DPA · per-request ECDSA attestation.
2 August 2026. €15M or 3% of global annual turnover. Whichever is higher.
The EU AI Act entered into force on 1 August 2024 with staged enforcement. Prohibited practices became enforceable on 2 February 2025. General-purpose AI obligations applied from 2 August 2025. The full high-risk regime — Article 6, Annex III, Article 12 logging, Article 14 oversight, Article 15 robustness — becomes enforceable on 2 August 2026. After that date, deployers are directly exposed: not just the model provider, but every company that uses AI in regulated workflows.
Up to €35M or 7% of global turnover for prohibited practices (Article 5) · €15M or 3% for high-risk system non-compliance (Articles 10, 12, 13, 14, 15) · €7.5M or 1% for supplying incorrect information to authorities. Liability falls on the deployer, not just the model provider.
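The "whichever is higher" rule can be made concrete with a short calculation. The tier caps and percentages below come from the penalty text above; the turnover figure is a made-up example.

```python
# Penalty tiers: (fixed cap in EUR, share of global annual turnover).
# The applicable maximum is whichever of the two is HIGHER.
TIERS = {
    "prohibited_practice": (35_000_000, 0.07),
    "high_risk_non_compliance": (15_000_000, 0.03),
    "incorrect_information": (7_500_000, 0.01),
}

def max_penalty(tier: str, global_turnover_eur: float) -> float:
    """Return the maximum fine for a tier given global annual turnover."""
    fixed_cap, pct = TIERS[tier]
    return max(fixed_cap, pct * global_turnover_eur)

# Example: a deployer with EUR 2B global turnover. 3% of turnover
# (EUR 60M) exceeds the EUR 15M fixed cap, so the percentage governs.
print(max_penalty("high_risk_non_compliance", 2_000_000_000))  # 60000000.0
```

For a company below €500M turnover, the fixed €15M cap is the binding figure instead.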
Are you in scope?
High-risk AI under Article 6 and Annex III.
Annex III lists eight domains where AI systems are presumed high-risk unless a documented Article 6(3) assessment shows no significant risk. If your agent touches one of these, the August 2026 obligations apply. The EU AI Office published clarifying guidance in early 2026 confirming that legal research assistants, credit scoring agents and underwriting copilots fall in scope.
Biometric identification and categorisation
Critical infrastructure operation
Education and vocational training (admissions, scoring)
Employment, recruitment, worker management
Access to essential services (credit, insurance, public benefits)
Law enforcement and predictive policing
Migration, asylum and border control
Justice administration and democratic processes
What we ship for the four critical articles
Article 12, 14, 15 and 32 — covered by hardware, not paperwork.
Most vendors will hand you a 60-page policy PDF. We give you signed attestation quotes per request, UI controls in the agent shell and AES-256 memory encryption fused into the CPU. Audit-ready by construction, not by promise.
Article 12
Automatic event logging
WHAT THE LAW REQUIRES
High-risk systems must automatically record events (logs) throughout their lifetime, sufficient to identify situations that may present a risk or amount to a substantial modification, and to support post-market monitoring.
WHAT VOLTAGEGPU PROVIDES
Every inference call inside our Intel TDX enclave produces an ECDSA-signed attestation quote bound to the request ID, enclave measurement, model version and timestamp. Tamper-evident by cryptographic construction.
Per-request signed quote, verifiable on /trust
Bound to model version and enclave measurement
Configurable retention (30 / 90 / 365 days)
Exportable as JSON-LD for audit submission
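A deployer-side integrity check over an exported log entry might look like the following sketch. The field names (`request_id`, `enclave_measurement`, and so on) mirror the bindings described above but are illustrative, not VoltageGPU's actual export schema; real verification would validate the ECDSA signature against the attestation chain rather than a bare hash.

```python
import hashlib
import json

def canonical_digest(record: dict) -> str:
    """SHA-256 over a canonical JSON encoding (sorted keys, no whitespace)."""
    payload = json.dumps(record, sort_keys=True, separators=(",", ":"))
    return hashlib.sha256(payload.encode()).hexdigest()

# Illustrative attestation record -- field names are assumptions,
# not the provider's actual schema.
record = {
    "request_id": "req-0001",
    "enclave_measurement": "mrtd-abc123",
    "model_version": "Qwen3-235B-TEE",
    "timestamp": "2026-08-02T09:00:00Z",
}
baseline = canonical_digest(record)

# Any tampering with a logged field changes the digest.
tampered = dict(record, model_version="other-model")
assert canonical_digest(tampered) != baseline
```

Canonical encoding matters here: without sorted keys and fixed separators, two byte-different serialisations of the same record would hash differently and break verification.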
Article 14
Human oversight
WHAT THE LAW REQUIRES
High-risk systems must be designed so they can be effectively overseen by natural persons during the period in which the AI system is in use, including the ability to intervene or interrupt the system.
WHAT VOLTAGEGPU PROVIDES
Agent shell exposes pre-execution review, step-by-step approval, hard interrupt and rollback. Approver identity logged into the same attested event stream as the inference call.
Pre-execution review of agent plan
Per-step approval mode for high-stakes workflows
Hard interrupt with state preservation
Approver identity bound to attestation log
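The oversight controls above can be sketched as a minimal approval loop. This is a hypothetical illustration of the pattern, not VoltageGPU's agent-shell API; all names (`OversightLog`, `run_with_approval`, the example approver) are invented for the sketch.

```python
from dataclasses import dataclass, field
from typing import Callable

@dataclass
class OversightLog:
    """Records approver decisions; in the design described above this
    entry would be bound to the same attested stream as the inference."""
    events: list = field(default_factory=list)

    def record(self, step: str, approver: str, decision: str) -> None:
        self.events.append({"step": step, "approver": approver,
                            "decision": decision})

def run_with_approval(plan: list, approve: Callable[[str], bool],
                      approver: str, log: OversightLog) -> list:
    """Execute plan steps one at a time; a rejection hard-stops the run,
    preserving the state built up by already-approved steps."""
    executed = []
    for step in plan:
        if not approve(step):           # per-step human approval gate
            log.record(step, approver, "rejected")
            break                        # hard interrupt
        log.record(step, approver, "approved")
        executed.append(step)
    return executed

log = OversightLog()
done = run_with_approval(
    ["draft_email", "send_email"],
    approve=lambda step: step != "send_email",  # human blocks the send
    approver="analyst@example.eu",
    log=log,
)
print(done)  # ['draft_email']
```

The point of the pattern is that the approval gate sits before execution, not after: the agent proposes, the natural person disposes, and both decisions land in the same log.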
Article 15
Accuracy, robustness, cybersecurity
WHAT THE LAW REQUIRES
High-risk systems must achieve appropriate levels of accuracy, robustness and cybersecurity throughout their lifecycle, including resilience against attempts to alter use or behaviour through exploitation of vulnerabilities.
WHAT VOLTAGEGPU PROVIDES
Hardware sealing through Intel TDX provides AES-256 memory encryption, NVIDIA Protected PCIe between CPU and GPU, and per-request attestation. The threat model assumes a malicious hypervisor and still keeps the workload sealed.
AES-256 memory encryption (CPU-fused keys)
NVIDIA Protected PCIe for CPU↔GPU
Hypervisor and host OS outside the trust boundary
Side-channel and supply-chain attack surface reduced by design
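Per-request attestation is only useful if the relying party pins which enclave builds it trusts. A minimal allowlist check might look like the sketch below; the measurement values are placeholders, and a real verifier would first validate the quote's signature chain before comparing measurements.

```python
# Placeholder measurement values -- real TDX quotes carry MRTD/RTMR
# registers that must be checked only after signature verification.
EXPECTED_MEASUREMENTS = {"mrtd-prod-2026-01", "mrtd-prod-2026-02"}

def measurement_trusted(reported_mrtd: str) -> bool:
    """Accept a workload only if its reported measurement is pinned."""
    return reported_mrtd in EXPECTED_MEASUREMENTS

assert measurement_trusted("mrtd-prod-2026-01")
assert not measurement_trusted("mrtd-unknown")
```

Pinning measurements is what keeps the hypervisor outside the trust boundary: a host that swaps in a modified workload produces a different measurement and fails this check.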
Article 32 GDPR
Confidentiality of processing
WHAT THE LAW REQUIRES
The controller and processor shall implement appropriate technical and organisational measures to ensure a level of security appropriate to the risk, including the confidentiality of processing.
WHAT VOLTAGEGPU PROVIDES
Provider-blind processing: VoltageGPU operators are technically incapable of reading user prompts or training data. This is the strictest defensible reading of Article 32(1)(b) confidentiality available in 2026.
Encryption keys fused inside the CPU at boot
Operators cannot dump RAM or attach a debugger
Native RGPD Article 28 DPA — no negotiation
EU controller (VOLTAGE EI, France)
VoltageGPU vs OpenAI vs Mistral
Compared head-to-head on each Article.
US frontier labs offer policy-level commitments. EU model providers offer EU hosting on standard VMs. Only Intel TDX-sealed inference gives the deployer hardware-grade evidence aligned with the August 2026 obligations.
Obligation | OpenAI | Mistral | VoltageGPU
Article 12 — per-request signed log | Policy logs, no per-request quote | EU-hosted logs, no quote | ECDSA quote per request
Article 32 — provider-blind processing | Operator can read prompts | Operator can read prompts | AES-256 CPU-fused, operator-blind
EU jurisdiction (no FISA 702 / CLOUD Act) | US controller | EU controller | EU controller (VOLTAGE EI, France)
Native RGPD Art. 28 DPA | SCCs, negotiable | Available | Native, no negotiation
Questions buyers ask
EU AI Act FAQ.
When does the EU AI Act start applying to my AI agents?
Prohibited AI practices have been enforceable since 2 February 2025. General-purpose AI obligations applied from 2 August 2025. The full high-risk regime under Article 6 and Annex III becomes enforceable on 2 August 2026, with penalties up to €15M or 3% of global annual turnover for non-compliance with Articles 12, 14 and 15.
Is my AI agent classified as high-risk?
If your agent makes or substantially supports decisions in employment screening, credit scoring, insurance underwriting, critical infrastructure, education, law enforcement, migration, justice administration or biometric identification, it is presumed high-risk under Annex III. The EU AI Office published clarifying guidance in early 2026 confirming that legal research assistants, credit scoring agents and underwriting copilots fall in scope.
How does VoltageGPU help with Article 12 logging?
Every inference call inside our Intel TDX enclaves produces a signed ECDSA attestation quote bound to the request ID, the enclave measurement, the model used and a timestamp. These attestation logs are tamper-evident, cryptographically verifiable, and cover the operational logging requirements of Article 12. Approver identity is captured for Article 14 oversight when configured.
Does Intel TDX satisfy Article 15 cybersecurity?
Article 15(4) names cybersecurity as a design and lifecycle requirement. Hardware-sealed inference inside Intel TDX — with AES-256 memory encryption, NVIDIA Protected PCIe and per-request attestation — is the strongest defensible evidence available in 2026. The threat model assumes a malicious hypervisor and still keeps the workload sealed, mitigating the side-channel and supply-chain risks that 2026 EU AI Office guidance flags.
What if I already use OpenAI or Anthropic?
You can continue, provided you can produce Article 12 logs, Article 14 oversight controls and Article 15 cybersecurity evidence on a per-request basis to a national authority. If you cannot, the deployer — not OpenAI — pays the penalty. Most regulated buyers route high-risk workflows through VoltageGPU and keep general workflows on existing copilots.
Explore the compliance hub
Keep going.
This is the pillar. Each spoke goes deep on one regulatory framework or one Article.