VOLTAGEGPU

NVIDIA A100 80GB

The gold standard for AI training and inference. Enterprise-grade GPU compute in seconds.

From … / GPU / hour · Available now · <60s deploy time · 1x-8x multi-GPU

Technical Specifications

GPU Memory: 80 GB HBM2e
Memory Bandwidth: 2,039 GB/s
CUDA Cores: 6,912 (Ampere)
Tensor Cores: 432 (3rd Gen)
FP32: 19.5 TFLOPS (Single Precision)
Tensor Performance: 312 TFLOPS (Mixed Precision)
NVLink: 600 GB/s (GPU Interconnect)
Host Interface: PCIe 4.0

Features: Multi-Instance GPU (MIG) · Structural Sparsity · TF32 Precision · BF16 Support
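
These features are switched on from the frameworks themselves. A minimal PyTorch sketch (standard PyTorch flags, nothing platform-specific) enabling TF32 matmuls and a BF16 autocast region on Ampere:

```python
import torch

# TF32 lets FP32 matmuls and convolutions run on Ampere tensor cores.
torch.backends.cuda.matmul.allow_tf32 = True
torch.backends.cudnn.allow_tf32 = True

a = torch.randn(4096, 4096, device="cuda")
b = torch.randn(4096, 4096, device="cuda")

# BF16 autocast targets the 3rd-gen tensor cores directly.
with torch.autocast(device_type="cuda", dtype=torch.bfloat16):
    c = a @ b
print(c.dtype)  # torch.bfloat16
```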

Ideal Use Cases

Large Language Models

  • GPT-3 fine-tuning
  • LLaMA 70B inference (see the sketch below)
  • Custom transformer models
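
One illustration of the LLaMA 70B case: FP16 weights for 70B parameters are roughly 140 GB, so a single 80 GB card needs quantization (or a 2x setup). A hedged sketch using Hugging Face Transformers with 4-bit bitsandbytes loading; the model id is the gated Meta repository and is an assumption here, requiring license acceptance:

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer, BitsAndBytesConfig

model_id = "meta-llama/Llama-2-70b-hf"  # assumption: gated repo, license required

quant = BitsAndBytesConfig(load_in_4bit=True,
                           bnb_4bit_compute_dtype=torch.bfloat16)
tok = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    quantization_config=quant,  # ~35 GB in 4-bit: fits one A100 80GB
    device_map="auto",          # shards across GPUs if more than one is present
)

inputs = tok("The A100 is", return_tensors="pt").to(model.device)
out = model.generate(**inputs, max_new_tokens=32)
print(tok.decode(out[0], skip_special_tokens=True))
```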

Scientific Computing

  • Molecular dynamics
  • Climate modeling
  • Computational fluid dynamics (see the sketch below)
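
These workloads are dominated by stencil and linear-algebra kernels that are memory-bandwidth-bound, which is where HBM2e pays off. A toy finite-difference step (2-D heat equation) in PyTorch, with arbitrary grid size and coefficients, standing in for the CFD kernels above:

```python
import torch

u = torch.rand(4096, 4096, device="cuda", dtype=torch.float32)
alpha, dt = 0.1, 0.01  # illustrative diffusivity and time step

def step(u):
    # 5-point Laplacian via shifted slices; boundary values held fixed.
    lap = (u[:-2, 1:-1] + u[2:, 1:-1] + u[1:-1, :-2] + u[1:-1, 2:]
           - 4 * u[1:-1, 1:-1])
    u = u.clone()
    u[1:-1, 1:-1] += alpha * dt * lap
    return u

for _ in range(100):
    u = step(u)
torch.cuda.synchronize()
```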

AI Training at Scale

  • Computer vision (see the training sketch below)
  • Recommendation systems
  • Reinforcement learning
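
The common pattern underlying these workloads is a mixed-precision training step, which is what engages the tensor-core throughput quoted above. A toy sketch; model, data, and sizes are placeholders:

```python
import torch
from torch import nn

model = nn.Sequential(nn.Linear(1024, 4096), nn.ReLU(),
                      nn.Linear(4096, 10)).cuda()
opt = torch.optim.AdamW(model.parameters(), lr=1e-4)
scaler = torch.cuda.amp.GradScaler()  # needed for FP16; BF16 can skip scaling

x = torch.randn(256, 1024, device="cuda")
y = torch.randint(0, 10, (256,), device="cuda")

for _ in range(10):
    opt.zero_grad(set_to_none=True)
    with torch.autocast(device_type="cuda", dtype=torch.float16):
        loss = nn.functional.cross_entropy(model(x), y)
    scaler.scale(loss).backward()
    scaler.step(opt)
    scaler.update()
```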

Performance Comparison

GPU                    VRAM          Bandwidth     FP32          Tensor
A100 80GB (this GPU)   80 GB HBM2e   2,039 GB/s    19.5 TFLOPS   312 TFLOPS
A100 40GB              40 GB HBM2    1,555 GB/s    19.5 TFLOPS   312 TFLOPS
V100 32GB              32 GB HBM2    900 GB/s      15.7 TFLOPS   125 TFLOPS
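
One back-of-envelope way to read the bandwidth column: token-by-token inference must stream the weights from HBM, so memory bandwidth caps decode speed. An editorial estimate only, assuming INT8 weights read once per generated token:

```python
# Upper-bound intuition, not a benchmark: real kernels cache and overlap.
weights_gb = 70        # assumption: 70B params at ~1 byte/param (INT8)
bandwidth_gbs = 2039   # A100 80GB HBM2e
print(f"~{bandwidth_gbs / weights_gb:.0f} tokens/s ceiling per GPU")  # ~29
```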

Multi-GPU Configurations

2x: 160 GB VRAM, 39 TFLOPS FP32
4x: 320 GB VRAM, 78 TFLOPS FP32
8x: 640 GB VRAM, 156 TFLOPS FP32
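
A minimal sketch of scaling across these configurations with PyTorch DistributedDataParallel; NCCL carries the gradient all-reduce over NVLink. The script name and sizes are illustrative, launched e.g. with torchrun --nproc_per_node=8 train.py:

```python
import os
import torch
import torch.distributed as dist
from torch import nn
from torch.nn.parallel import DistributedDataParallel as DDP

def main():
    dist.init_process_group("nccl")          # NCCL rides NVLink between A100s
    rank = int(os.environ["LOCAL_RANK"])     # set by torchrun
    torch.cuda.set_device(rank)

    model = DDP(nn.Linear(1024, 1024).cuda(), device_ids=[rank])
    opt = torch.optim.SGD(model.parameters(), lr=0.1)

    x = torch.randn(64, 1024, device=rank)
    for _ in range(10):
        opt.zero_grad(set_to_none=True)
        model(x).sum().backward()            # gradients all-reduced across GPUs
        opt.step()

    dist.destroy_process_group()

if __name__ == "__main__":
    main()
```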

FAQ

What is the difference between the A100 80GB and the A100 40GB?
The 80GB variant doubles VRAM and moves from HBM2 to faster HBM2e, enabling single-GPU workloads the 40GB cannot fit, such as quantized LLaMA 70B inference or QLoRA-style fine-tuning, without model parallelism. Compute throughput (FP32 and tensor) is identical across the two.
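
The rule of thumb behind this answer: model bytes ≈ parameter count × bytes per parameter, with KV cache and optimizer state on top. A small illustrative calculation (approximate, decimal GB):

```python
def model_gb(n_params_billion, bytes_per_param):
    # billions of params * bytes/param = GB of weights (KV cache ignored)
    return n_params_billion * bytes_per_param

print(model_gb(70, 2))    # 140 GB: FP16 70B does NOT fit one 80 GB card
print(model_gb(70, 0.5))  # 35 GB: 4-bit 70B fits comfortably
print(model_gb(13, 2))    # 26 GB: FP16 13B fits with headroom for KV cache
```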

Does the A100 support NVLink for multi-GPU training?
Yes. The A100 supports NVLink 3.0 at 600 GB/s per GPU, enabling efficient multi-GPU training over a high-bandwidth interconnect.

What software is pre-installed?
PyTorch, TensorFlow, JAX, Transformers, CUDA 12.x, and cuDNN are pre-installed on all A100 templates.
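
A quick post-deploy sanity check using standard PyTorch introspection (exact versions depend on the template):

```python
import torch

print(torch.__version__)                # PyTorch build
print(torch.version.cuda)               # CUDA toolkit version (12.x)
print(torch.backends.cudnn.version())   # cuDNN version
print(torch.cuda.get_device_name(0))    # "NVIDIA A100 80GB ..."
print(torch.cuda.is_bf16_supported())   # True on Ampere
```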

How does billing work?
VoltageGPU bills per second with no minimum commitment. Run for 5 minutes or 5 months and pay only for what you use.
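
The arithmetic is simply rate × seconds ÷ 3600. The hourly rate below is a deliberately hypothetical placeholder, not the actual price:

```python
def cost(hourly_rate_usd, seconds):
    return hourly_rate_usd * seconds / 3600

rate = 1.50  # hypothetical $/GPU/hour; see live pricing for the real figure
print(f"${cost(rate, 300):.4f} for 5 minutes")  # $0.1250
print(f"${cost(rate, 90):.4f} for 90 seconds")  # $0.0375
```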


Ready to Deploy A100 80GB?

$5 free credit. No credit card required. Deploy in under 60 seconds.

99.9% Uptime · Per-second billing · Global network