Private CrewAI Deployment
Sealed in Intel TDX
CREWAI · MULTI-AGENT · CONFIDENTIAL

Sovereign Multi-Agent Workflows.

Drop-in custom LLM for CrewAI. Every reasoning step routes through Intel TDX enclaves we operate in the EU. Existing crew code runs unchanged.

Multi-agent crews fan a single kickoff out into many LLM calls, and each call carries proprietary context. Keep that traffic provider-blind without rewriting the crew.
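
To see how quickly that fan-out compounds, here is a back-of-envelope sketch. All token figures are illustrative assumptions, not measured values — sequential crews re-send accumulated context on every task, so input tokens grow with crew depth.

```python
# Back-of-envelope fan-out estimate for a sequential crew.
# All token figures below are illustrative assumptions.

SYSTEM_TOKENS = 400      # role + goal + backstory per agent (assumed)
CONTEXT_TOKENS = 2_000   # document excerpt injected into the first task (assumed)
OUTPUT_TOKENS = 800      # average completion per task (assumed)

def crew_token_estimate(n_tasks: int) -> dict:
    """Rough input/output token totals for a sequential crew of n_tasks,
    where each task sees all prior task outputs as context."""
    total_in = total_out = 0
    carried = CONTEXT_TOKENS
    for _ in range(n_tasks):
        total_in += SYSTEM_TOKENS + carried
        total_out += OUTPUT_TOKENS
        carried += OUTPUT_TOKENS  # next task carries this output forward
    return {"input": total_in, "output": total_out}

print(crew_token_estimate(2))  # → {'input': 5600, 'output': 1600}
```

Even a two-task crew more than doubles the input tokens of a single call — which is why every one of those calls needs to stay inside the enclave.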

Install

Standard CrewAI plus the OpenAI SDK that backs the LLM wrapper. No fork, no patch.

Shell · pip install
BASH
# Install CrewAI and the OpenAI SDK that backs the LLM wrapper
pip install crewai crewai-tools openai

Custom LLM pointing at api.voltagegpu.com/v1

Construct one LLM instance and pass it to every Agent. Sequential and hierarchical processes, async kickoffs, and async tool calls all work.

Python · CrewAI · Sequential legal crew
PYTHON
# Sovereign legal crew — contract analyst + clause drafter
from crewai import Agent, Crew, Task, LLM, Process

# One LLM, every agent in the crew uses the confidential endpoint
confidential_llm = LLM(
    model="openai/Qwen3-235B-A22B-Instruct-2507-TEE",
    base_url="https://api.voltagegpu.com/v1",
    api_key="vg-...",  # https://app.voltagegpu.com/settings/api-keys
    temperature=0.2,
    max_tokens=4096,
)

analyst = Agent(
    role="Senior Contract Analyst",
    goal="Identify regulatory and commercial risks in vendor contracts.",
    backstory="EU-trained counsel specialized in DORA / GDPR / NIS2.",
    llm=confidential_llm,
    verbose=False,
)

drafter = Agent(
    role="Clause Drafter",
    goal="Propose redlines that address each finding.",
    backstory="Has drafted MSAs for 200+ regulated SaaS rollouts.",
    llm=confidential_llm,
    verbose=False,
)

review = Task(
    description="Review {contract} and list every GDPR Article 28 gap.",
    expected_output="A bulleted list of clause-level findings with severity.",
    agent=analyst,
)

redline = Task(
    description="Draft a redline for each finding produced by the analyst.",
    expected_output="Markdown redlines grouped by clause section.",
    agent=drafter,
    context=[review],
)

crew = Crew(
    agents=[analyst, drafter],
    tasks=[review, redline],
    process=Process.sequential,
    verbose=False,
)

result = crew.kickoff(inputs={"contract": open("msa.txt").read()})
print(result)

Mix models per agent — fast worker + reasoning manager

Use a fast, inexpensive model for screening agents and a deep-reasoning model for the manager and modeling agents. Both LLM instances hit the same confidential endpoint.

Python · CrewAI · Hierarchical finance crew
PYTHON
# Finance deal-screening crew, three agents, hierarchical process
from crewai import Agent, Crew, Task, LLM, Process

reasoning_llm = LLM(
    model="openai/DeepSeek-R1-0528-TEE",
    base_url="https://api.voltagegpu.com/v1",
    api_key="vg-...",
    temperature=0.0,
)

fast_llm = LLM(
    model="openai/Qwen3-32B-TEE",
    base_url="https://api.voltagegpu.com/v1",
    api_key="vg-...",
    temperature=0.1,
)

screener = Agent(role="Deal Screener", goal="Filter opportunities", llm=fast_llm, backstory="MD-led screening discipline.")
modeler  = Agent(role="Model Builder",  goal="Build base+stress cases", llm=reasoning_llm, backstory="Sector-specialist analyst.")
ic_writer = Agent(role="IC Memo Writer", goal="Produce IC-ready memos", llm=reasoning_llm, backstory="Investment committee veteran.")

crew = Crew(
    agents=[screener, modeler, ic_writer],
    tasks=[
        Task(description="Screen {opportunity}", expected_output="GO / NO-GO with rationale", agent=screener),
        Task(description="Build base+stress model", expected_output="Model JSON + key drivers", agent=modeler),
        Task(description="Draft IC memo", expected_output="2-page memo", agent=ic_writer),
    ],
    process=Process.hierarchical,
    manager_llm=reasoning_llm,
)

# kickoff inputs must be strings: extract the teaser PDF to plain text first
memo = crew.kickoff(inputs={"opportunity": open("teaser.txt").read()})
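
The fast-worker / reasoning-manager split can be centralized in a small routing helper, so each new agent picks the right tier automatically. A sketch using the TEE model IDs above; the role keywords are assumptions you would tune to your own crew:

```python
# Route each agent role to a model tier. The model IDs are the TEE models
# from the example above; the keyword list is an illustrative assumption.

FAST_MODEL = "openai/Qwen3-32B-TEE"
REASONING_MODEL = "openai/DeepSeek-R1-0528-TEE"

REASONING_KEYWORDS = ("manager", "modeler", "builder", "writer", "analyst")

def model_for_role(role: str) -> str:
    """Deep-reasoning model for heavy roles, fast worker model otherwise."""
    lowered = role.lower()
    if any(key in lowered for key in REASONING_KEYWORDS):
        return REASONING_MODEL
    return FAST_MODEL

print(model_for_role("Deal Screener"))   # fast tier
print(model_for_role("Model Builder"))   # reasoning tier
```

When constructing each agent, pass `LLM(model=model_for_role(role), base_url="https://api.voltagegpu.com/v1", api_key="vg-...")` so the tier choice lives in one place.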

Regulated industry crews

Legal team crew

Contract analyst, clause drafter, jurisdiction reviewer, redline aggregator. Long-context Qwen3-235B-TEE for full MSAs.

Legal · GDPR

Finance crew

Deal screener, model builder, due-diligence summarizer, IC memo writer. DeepSeek-R1-TEE for reasoning-heavy steps.

M&A · IC memos

Compliance crew

Control mapper, evidence collector, audit drafter. Pair with a confidential MCP server for control-mapping tools.

DORA · NIS2 · ISO 27001

Healthcare crew

Medical record summarizer, coding assistant, prior-auth drafter. EU-hosted enclaves keep PHI inside sovereign control.

HIPAA · GDPR

Why this is confidential

Every agent prompt sealed in TDX

The OpenAI-compatible endpoint terminates TLS inside the trust domain. Agent prompts decrypt only inside the enclave.

AES-256 memory encryption

CPU-fused keys protect RAM at runtime — the hypervisor cannot inspect memory holding contracts, deal data, or PHI.

Per-request attestation

Each completion can be paired with an ECDSA-signed report identifying the TDX module and base model version.

Zero retention, zero training

Crew prompts are never logged or reused. Native GDPR Article 28 DPA, EU jurisdiction (VOLTAGE EI, France).

Pricing

Pay-per-token at the same rates as standard inference. No per-agent license, no platform fee. Mix models freely across agents inside a crew.

Qwen3-32B-TEE · Worker agents, screening
in $0.50 / 1M · out $1.50 / 1M
Qwen3-235B-A22B-Instruct-2507-TEE · Long-context analysts, drafting
in $1.20 / 1M · out $3.50 / 1M
DeepSeek-R1-0528-TEE · Reasoning manager, IC memos
in $1.80 / 1M · out $5.40 / 1M

Volume contracts available beyond 100M tokens / mo.
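
The per-million-token rates above translate into crew spend like this. A minimal estimator; the token counts in the example are assumptions, not benchmarks:

```python
# Estimate crew spend from the published per-million-token rates.
# The usage numbers in the example below are illustrative assumptions.

RATES = {  # USD per 1M tokens: (input, output)
    "Qwen3-32B-TEE": (0.50, 1.50),
    "Qwen3-235B-A22B-Instruct-2507-TEE": (1.20, 3.50),
    "DeepSeek-R1-0528-TEE": (1.80, 5.40),
}

def cost_usd(model: str, input_tokens: int, output_tokens: int) -> float:
    """Dollar cost of one model's usage at the rates above."""
    rate_in, rate_out = RATES[model]
    return (input_tokens * rate_in + output_tokens * rate_out) / 1_000_000

# A screening pass on the fast worker plus one reasoning-heavy memo:
total = cost_usd("Qwen3-32B-TEE", 50_000, 10_000) \
      + cost_usd("DeepSeek-R1-0528-TEE", 30_000, 8_000)
print(f"${total:.4f}")  # → $0.1372
```

Because pricing is pure pay-per-token, mixing tiers across agents only changes which row of the rate table each call bills against.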

EXPLORE FURTHER

Bring Your Own Agent

Parent pillar

Confidential MCP server

Tool calls in TDX

Sovereign agentic AI

Architectural overview

API reference

OpenAPI spec

SDK reference

Python / TS / Go

All integrations

Frameworks & tools

Ship a sovereign crew this afternoon

Generate an API key, swap base_url, kick off your first confidential crew.

Create account