
Llama 3.3 70B API

Meta's most capable 70B model with 128K context, competing with models 5x its size.

Parameters

70B

Context

128,000 tokens

Organization

Meta

Pricing

$0.70

per 1M input tokens


$0.90

per 1M output tokens

Try Llama 3.3 70B for Free

Quick Start

Start using Llama 3.3 70B in minutes. VoltageGPU provides an OpenAI-compatible API — just change the base_url.

Python (OpenAI SDK)
pip install openai
from openai import OpenAI

client = OpenAI(
    base_url="https://api.voltagegpu.com/v1",
    api_key="YOUR_VOLTAGE_API_KEY"
)

response = client.chat.completions.create(
    model="meta-llama/Llama-3.3-70B-Instruct",
    messages=[
        {"role": "system", "content": "You are a senior software engineer."},
        {"role": "user", "content": "Review this code and suggest improvements:\n\ndef fib(n):\n  if n <= 1: return n\n  return fib(n-1) + fib(n-2)"}
    ],
    max_tokens=2048,
    temperature=0.3
)

print(response.choices[0].message.content)
cURL
curl -X POST https://api.voltagegpu.com/v1/chat/completions \
  -H "Content-Type: application/json" \
  -H "Authorization: Bearer YOUR_VOLTAGE_API_KEY" \
  -d '{
    "model": "meta-llama/Llama-3.3-70B-Instruct",
    "messages": [
      {"role": "system", "content": "You are a senior software engineer."},
      {"role": "user", "content": "Review this code and suggest improvements."}
    ],
    "max_tokens": 2048,
    "temperature": 0.3
  }'

Pricing

Component       Price   Unit
Input tokens    $0.70   per 1M tokens
Output tokens   $0.90   per 1M tokens

New accounts receive $5 free credit. No credit card required to start.
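As a back-of-the-envelope check on these rates, per-request cost is simple arithmetic (prices are taken from the table above; the token counts are illustrative):

```python
INPUT_PRICE_PER_M = 0.70   # USD per 1M input tokens (from the table above)
OUTPUT_PRICE_PER_M = 0.90  # USD per 1M output tokens

def estimate_cost(input_tokens: int, output_tokens: int) -> float:
    """Estimate the USD cost of a single request at the listed rates."""
    return (input_tokens / 1_000_000) * INPUT_PRICE_PER_M \
         + (output_tokens / 1_000_000) * OUTPUT_PRICE_PER_M

# e.g. a 10K-token prompt with a 2K-token reply:
print(f"${estimate_cost(10_000, 2_000):.4f}")  # → $0.0088
```

At that rate, the $5 starting credit covers well over 500 requests of this size.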


Capabilities & Benchmarks

Llama 3.3 70B achieves strong results across benchmarks: MMLU (86.0%), HumanEval (88.4%), MATH (77.0%), and GSM8K (91.1%). It supports tool use, structured output (JSON mode), and multilingual generation in English, German, French, Italian, Portuguese, Hindi, Spanish, and Thai. With 128K context it can process entire codebases and long documents.
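For structured output, a request can ask for JSON mode. This sketch assumes the OpenAI-compatible `response_format` parameter is honored (check VoltageGPU's docs); the extraction task and sample reply are illustrative:

```python
import json

# Request body for structured output via JSON mode.
request = {
    "model": "meta-llama/Llama-3.3-70B-Instruct",
    "messages": [
        {"role": "system",
         "content": "Reply with JSON containing the keys 'name' and 'language'."},
        {"role": "user", "content": "FastAPI is a Python web framework."},
    ],
    "response_format": {"type": "json_object"},
    "temperature": 0,
}

# In JSON mode the reply text is parseable JSON, e.g.:
reply = '{"name": "FastAPI", "language": "Python"}'
data = json.loads(reply)
print(data["name"], data["language"])  # → FastAPI Python
```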


About Llama 3.3 70B

Llama 3.3 70B is Meta's most capable open-weight model in the 70B parameter class. It delivers performance competitive with much larger models including Llama 3.1 405B on many tasks. Built on Meta's latest Llama 3 architecture with grouped query attention (GQA), it supports a 128K context window and excels at instruction following, reasoning, coding, and multilingual tasks. The model was trained on over 15 trillion tokens of publicly available data and fine-tuned with RLHF for safe and helpful responses.


Use Cases

🏢

Enterprise Chatbots

Deploy production-grade conversational AI with strong safety guarantees and instruction following.

💻

Code Generation

Generate, review, and debug code across multiple languages with high accuracy.

📄

Document Processing

Summarize, extract information from, and analyze long documents with 128K context.

🌍

Multilingual Applications

Build applications serving users in 8+ languages with native-quality generation.

🔗

RAG Pipelines

Use as the generation component in Retrieval-Augmented Generation for knowledge-grounded responses.
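The RAG pattern above can be sketched end to end: a retriever selects relevant passages, and the prompt grounds the model in them. The toy corpus and keyword-overlap scoring here are illustrative only; production pipelines use embeddings and a vector store:

```python
# Toy corpus; a real pipeline retrieves from a vector store.
CORPUS = [
    "Llama 3.3 70B supports a 128K-token context window.",
    "VoltageGPU exposes an OpenAI-compatible chat completions endpoint.",
    "Grouped query attention (GQA) reduces KV-cache memory at inference.",
]

def tokenize(text: str) -> set[str]:
    return {w.strip(".,?!").lower() for w in text.split()}

def retrieve(query: str, k: int = 2) -> list[str]:
    """Rank passages by naive keyword overlap with the query."""
    q = tokenize(query)
    return sorted(CORPUS, key=lambda p: -len(q & tokenize(p)))[:k]

def build_messages(query: str) -> list[dict]:
    """Assemble a grounded chat prompt from the retrieved passages."""
    context = "\n".join(f"- {p}" for p in retrieve(query))
    return [
        {"role": "system", "content": "Answer using only the provided context."},
        {"role": "user", "content": f"Context:\n{context}\n\nQuestion: {query}"},
    ]

# Pass the result as `messages` to client.chat.completions.create(...)
messages = build_messages("What context window does the model support?")
print(messages[1]["content"])
```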


API Reference

Endpoint

POST https://api.voltagegpu.com/v1/chat/completions

Headers

Authorization: Bearer YOUR_VOLTAGE_API_KEY (Required)
Content-Type: application/json (Required)

Model ID

meta-llama/Llama-3.3-70B-Instruct

Use this value as the model parameter in your API requests.

Example Request

curl -X POST https://api.voltagegpu.com/v1/chat/completions \
  -H "Content-Type: application/json" \
  -H "Authorization: Bearer YOUR_VOLTAGE_API_KEY" \
  -d '{
    "model": "meta-llama/Llama-3.3-70B-Instruct",
    "messages": [
      {"role": "system", "content": "You are a senior software engineer."},
      {"role": "user", "content": "Review this code and suggest improvements."}
    ],
    "max_tokens": 2048,
    "temperature": 0.3
  }'



Frequently Asked Questions

How does Llama 3.3 70B compare to Llama 3.1 405B?

Llama 3.3 70B matches the performance of Llama 3.1 405B on many benchmarks while being significantly cheaper and faster to run. On MMLU it scores 86.0% vs 405B's 88.6%. For most practical use cases, the 70B model provides excellent quality at much lower cost.

Is Llama 3.3 70B free to use commercially?

Llama 3.3 70B is released under Meta's Llama 3.3 Community License, which allows commercial use for companies with fewer than 700 million monthly active users. Through VoltageGPU's API, you can use it immediately at $0.70/M input tokens with no licensing concerns.

What context window does Llama 3.3 70B support?

Llama 3.3 70B supports a 128,000 token context window, allowing it to process approximately 96,000 words or 300 pages of text in a single request.
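Those figures follow from the common ~0.75 words-per-token heuristic for English text (actual ratios vary by content and tokenizer, and the words-per-page figure is a rough estimate):

```python
CONTEXT_TOKENS = 128_000
WORDS_PER_TOKEN = 0.75   # rough heuristic for English text
WORDS_PER_PAGE = 320     # rough single-spaced page estimate

words = int(CONTEXT_TOKENS * WORDS_PER_TOKEN)
pages = words // WORDS_PER_PAGE
print(f"{words} words ≈ {pages} pages")  # → 96000 words ≈ 300 pages
```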

Does Llama 3.3 70B support function calling?

Yes, Llama 3.3 70B supports tool use and function calling through the VoltageGPU API. You can define tools using the standard OpenAI function calling format.
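A tool definition in that format looks like the following sketch. The `get_weather` tool, its schema, and the sample arguments are hypothetical, for illustration only:

```python
import json

# A tool definition in the standard OpenAI function-calling format.
tools = [{
    "type": "function",
    "function": {
        "name": "get_weather",  # hypothetical tool
        "description": "Get the current weather for a city.",
        "parameters": {
            "type": "object",
            "properties": {"city": {"type": "string"}},
            "required": ["city"],
        },
    },
}]

# Pass `tools=tools` to client.chat.completions.create(...). When the model
# elects to call the tool, the response carries a tool call whose arguments
# arrive as a JSON string, e.g.:
arguments = '{"city": "Berlin"}'
args = json.loads(arguments)
print(args["city"])  # → Berlin
```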


Start using Llama 3.3 70B today

Get $5 free credit when you sign up. No credit card required. Deploy in under 30 seconds with our OpenAI-compatible API.