Bittensor Protocol

Powered by Bittensor

VoltageGPU is your gateway to Bittensor’s decentralized infrastructure. GPU compute from Subnet 51 (Lium), AI inference from Subnet 64 (Chutes), fine-tuning from Subnet 56 (Gradients), and confidential compute from Subnet 4 (Targon) — unified in one platform with simple, consolidated billing.

Live Neural Network Visualization

What is Bittensor?

Decentralized GPU Marketplace

A peer-to-peer network where independent GPU operators compete to provide compute power. No single company controls supply or pricing — the market sets fair rates through open competition.

Incentive-Driven Pricing

TAO token staking and on-chain rewards create a self-correcting marketplace. Operators are incentivized to offer the best performance at the lowest cost, driving prices below centralized alternatives.

No Vendor Lock-In

Your workloads run on a global network of interchangeable providers. Switch freely without migration costs, proprietary APIs, or contractual commitments tying you to a single cloud vendor.

Four Subnets, One Platform

VoltageGPU aggregates compute from four specialized Bittensor subnets. All data below is fetched live from the Bittensor blockchain via Taostats API.


Lium — GPU Compute

Miners Provide Hardware

Independent operators register GPU machines (executors) on Subnet 51. Each machine lists its GPU model, VRAM, bandwidth, and hourly price. Miners set their own rates — creating an open, competitive marketplace.

Validators Verify Hardware

Validators connect to miner machines and run Proof-of-Compute benchmarks. They verify GPU type, VRAM, and performance are genuine. Unreliable miners are scored lower and earn fewer TAO rewards.
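As a rough illustration of this scoring idea, the sketch below penalizes miners whose measured performance falls short of what they advertised. The function name, inputs, and weighting are illustrative assumptions, not Subnet 51's actual scoring formula.

```python
def score_miner(claimed_tflops: float, measured_tflops: float,
                uptime: float) -> float:
    """Toy validator score in [0, 1] (illustrative, not Lium's real formula).

    Penalizes inflated hardware claims (measured < claimed) and downtime.
    """
    if claimed_tflops <= 0:
        return 0.0
    # Honesty ratio: measured vs. claimed performance, capped at 1.0
    # so over-delivering is not rewarded beyond a perfect score.
    honesty = min(measured_tflops / claimed_tflops, 1.0)
    return honesty * uptime
```

An honest, always-on miner scores 1.0; one delivering half its claimed throughput scores at most 0.5, and so earns proportionally fewer TAO rewards.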

You Rent via VoltageGPU

When you deploy a pod on VoltageGPU, we provision a containerized environment on a verified miner’s hardware via the Lium API. You get SSH access, Jupyter, exposed ports — billed hourly from your VoltageGPU balance.
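The hourly billing model is simple arithmetic; a minimal sketch (rates and rounding policy are assumptions for illustration, not VoltageGPU's published prices):

```python
from decimal import Decimal

def pod_cost(hourly_rate_usd: str, hours: int) -> Decimal:
    """Illustrative hourly pod billing: rate * hours, rounded to cents.

    Uses Decimal to avoid binary floating-point rounding surprises
    when charging a balance.
    """
    return (Decimal(hourly_rate_usd) * Decimal(hours)).quantize(Decimal("0.01"))
```

For example, a hypothetical pod at $0.49/hr running for 8 hours would deduct $3.92 from your balance.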

Chutes — AI Inference

Miners Serve AI Models

Miners on Subnet 64 dedicate GPU clusters to serve AI models. Each “chute” is a containerized model (vLLM, diffusion, whisper, etc.) deployed across the decentralized network and ready for inference.

GraVal Hardware Verification

Chutes uses GraVal — a custom CUDA library that performs Proof-of-VRAM by running seeded matrix multiplications sized to fill roughly 95% of GPU memory. The result forms a hardware-level signature, making GPU spoofing practically infeasible.
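The core idea can be sketched at toy scale: seed a PRNG, generate matrices, multiply them, and hash the result, so the same seed always yields the same signature. This is a simplified CPU sketch of the concept only — GraVal itself runs CUDA kernels at VRAM-filling scale, which is what makes the check hardware-binding.

```python
import hashlib
import random

def vram_fingerprint(seed: int, n: int = 32) -> str:
    """Conceptual sketch of a seeded-matrix fingerprint (not GraVal itself).

    Deterministic: the same seed always produces the same digest, so a
    verifier holding the seed can check the response. The real check is
    sized to fill ~95% of VRAM, so only genuine hardware can answer in time.
    """
    rng = random.Random(seed)
    a = [[rng.random() for _ in range(n)] for _ in range(n)]
    b = [[rng.random() for _ in range(n)] for _ in range(n)]
    h = hashlib.sha256()
    # Plain matrix multiply; hash each entry of the product as the "signature".
    for i in range(n):
        for j in range(n):
            s = sum(a[i][k] * b[k][j] for k in range(n))
            h.update(repr(round(s, 9)).encode())
    return h.hexdigest()
```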

OpenAI-Compatible API

VoltageGPU exposes an OpenAI-compatible endpoint backed by Chutes’ network. Your requests are routed to the fastest available miner. 140+ models, streaming support, per-token billing — drop-in replacement for any LLM API.
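Per-token billing works like the sketch below, where prices are quoted per million tokens; the specific rates shown are hypothetical, not actual VoltageGPU pricing.

```python
def inference_cost(prompt_tokens: int, completion_tokens: int,
                   usd_per_m_input: float, usd_per_m_output: float) -> float:
    """Illustrative per-token billing with per-million-token prices."""
    return (prompt_tokens * usd_per_m_input
            + completion_tokens * usd_per_m_output) / 1_000_000
```

For instance, at a hypothetical $1.00/M input and $2.00/M output, a million prompt tokens cost exactly $1.00.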

Gradients — Decentralized Fine-Tuning

Upload Your Dataset

Upload a dataset, select a base model (Llama, Mistral, SDXL, etc.), and Gradients handles the rest. No hyperparameter tuning, no ML expertise needed. 3,000+ users already training on the network.

Miners Compete to Train

Multiple miners compete to produce the best fine-tuned version of your model. AutoML optimization across the decentralized network — you get the best result, significantly cheaper than AWS or Google Cloud.
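The selection step reduces to picking the best submission; a minimal sketch, assuming miners are ranked by validation loss (Gradients' actual evaluation criteria are more involved):

```python
def pick_winner(submissions: dict[str, float]) -> str:
    """Return the miner whose fine-tune has the lowest validation loss.

    Illustrative only: keys are miner IDs, values are their models'
    validation losses on a held-out split.
    """
    return min(submissions, key=submissions.get)
```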

Deploy the Best Model

The winning fine-tuned model is delivered to you. Deploy it instantly on VoltageGPU’s inference API (Subnet 64) or download the weights. Full pipeline: data → train → serve, all on Bittensor.

Targon — Verified Inference & Confidential Compute

Verified Inference

Subnet 4 (Targon) ensures that every inference result is cryptographically verified. Miners must prove they actually ran the computation — no shortcuts, no faked outputs. Built by Manifold Labs, one of Bittensor’s most established teams.
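The "prove you ran it" idea can be illustrated with a generic commit-reveal check: the miner commits to its output, then reveals it, and the validator verifies the two match. This is a conceptual sketch only, not Targon's actual verification protocol.

```python
import hashlib
import hmac

def commitment(output: bytes, nonce: bytes) -> str:
    """Miner commits to its output (plus a nonce) before revealing it."""
    return hashlib.sha256(nonce + output).hexdigest()

def verify(output: bytes, nonce: bytes, claimed: str) -> bool:
    """Validator checks the revealed output against the earlier commitment.

    compare_digest avoids leaking match position via timing.
    """
    return hmac.compare_digest(commitment(output, nonce), claimed)
```

A tampered or faked output fails verification, so only genuinely computed results are rewarded.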

Intel TDX Enclaves

Targon miners run GPU workloads inside Intel TDX (Trust Domain Extensions) secure enclaves. Your data is encrypted not just in transit and at rest, but during processing. The hardware itself guarantees that nobody — not even the machine operator — can see your data.

HIPAA & SOC 2 Compatible

The combination of verified inference and hardware-level encryption makes Targon suitable for regulated industries. Healthcare (HIPAA), finance (SOC 2), and government workloads can run on decentralized compute without compromising compliance.

Platform at a Glance

Live counters (GPUs online, AI models, and active miners) are fetched on page load, across 4 subnets.

Start Using Decentralized GPU Cloud

Access the world’s most affordable GPU compute, powered by Bittensor.