Language Model · Mistral AI · Open Source · Fast · MoE · Efficient

Mixtral 8x7B API

Mistral's pioneering MoE model matching GPT-3.5 performance at a fraction of the cost.

Parameters

46.7B (MoE, 12.9B active)

Context

32,768 tokens

Organization

Mistral AI

Pricing

$0.24

per 1M input tokens


$0.24

per 1M output tokens

Try Mixtral 8x7B for Free

Quick Start

Start using Mixtral 8x7B in minutes. VoltageGPU provides an OpenAI-compatible API — just change the base_url.

Python (OpenAI SDK)
pip install openai
from openai import OpenAI

client = OpenAI(
    base_url="https://api.voltagegpu.com/v1",
    api_key="YOUR_VOLTAGE_API_KEY"
)

response = client.chat.completions.create(
    model="mistralai/Mixtral-8x7B-Instruct-v0.1",
    messages=[
        {"role": "user", "content": "Write a concise summary of the key benefits of microservices architecture."}
    ],
    max_tokens=1024,
    temperature=0.5
)

print(response.choices[0].message.content)
cURL
curl -X POST https://api.voltagegpu.com/v1/chat/completions \
  -H "Content-Type: application/json" \
  -H "Authorization: Bearer YOUR_VOLTAGE_API_KEY" \
  -d '{
    "model": "mistralai/Mixtral-8x7B-Instruct-v0.1",
    "messages": [
      {"role": "user", "content": "Write a concise summary of the key benefits of microservices architecture."}
    ],
    "max_tokens": 1024,
    "temperature": 0.5
  }'
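If you would rather not install the SDK, the same request can be made with Python's standard library alone. A minimal sketch using the endpoint and model ID documented above (replace YOUR_VOLTAGE_API_KEY with your actual key before calling):

```python
import json
import urllib.request

API_URL = "https://api.voltagegpu.com/v1/chat/completions"

def build_payload(prompt, max_tokens=1024, temperature=0.5):
    # Same request body as the cURL example above.
    return {
        "model": "mistralai/Mixtral-8x7B-Instruct-v0.1",
        "messages": [{"role": "user", "content": prompt}],
        "max_tokens": max_tokens,
        "temperature": temperature,
    }

def chat(prompt, api_key):
    # POST the JSON payload with the Bearer token header.
    req = urllib.request.Request(
        API_URL,
        data=json.dumps(build_payload(prompt)).encode("utf-8"),
        headers={
            "Content-Type": "application/json",
            "Authorization": f"Bearer {api_key}",
        },
    )
    with urllib.request.urlopen(req) as resp:
        body = json.load(resp)
    return body["choices"][0]["message"]["content"]

# Usage: chat("Summarize the benefits of microservices.", "YOUR_VOLTAGE_API_KEY")
```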

Pricing

Component       Price   Unit
Input tokens    $0.24   per 1M tokens
Output tokens   $0.24   per 1M tokens

New accounts receive $5 free credit. No credit card required to start.
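With a flat $0.24 per million tokens in each direction, per-request cost is simple arithmetic. A quick sketch:

```python
INPUT_RATE = 0.24   # USD per 1M input tokens
OUTPUT_RATE = 0.24  # USD per 1M output tokens

def estimate_cost(input_tokens, output_tokens):
    """Estimated USD cost of one request at the published rates."""
    return (input_tokens * INPUT_RATE + output_tokens * OUTPUT_RATE) / 1_000_000

# e.g. a 2,000-token prompt with a 500-token completion:
print(f"${estimate_cost(2_000, 500):.6f}")  # $0.000600
```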


Capabilities & Benchmarks

Mixtral 8x7B matches or exceeds GPT-3.5 Turbo on most benchmarks: MMLU (70.6%), HellaSwag (84.4%), and ARC (66.4%). It handles English, French, Italian, German, and Spanish natively. The MoE architecture provides 6x faster inference than a comparable dense 70B model while maintaining similar quality. It supports instruction following, text generation, summarization, and basic reasoning.


About Mixtral 8x7B

Mixtral 8x7B is Mistral AI's groundbreaking Mixture-of-Experts model that revolutionized the open-source LLM landscape. With 46.7 billion total parameters but only 12.9 billion active per forward pass, it delivers performance matching Llama 2 70B and GPT-3.5 Turbo while being significantly faster and more cost-effective. The model uses 8 expert networks and a router that selects 2 experts per token, enabling efficient specialization across different types of knowledge and tasks.
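The top-2 routing described above can be illustrated with a toy sketch. This is illustrative only: in the real model the gate logits come from a learned linear layer inside each Transformer block, and the selected experts are feed-forward networks, not scalars.

```python
import math

def top2_route(gate_logits):
    """Pick 2 of 8 experts by softmax gate score, as in Mixtral's router."""
    # Softmax over the expert gate scores (stabilized by subtracting the max).
    m = max(gate_logits)
    exps = [math.exp(x - m) for x in gate_logits]
    total = sum(exps)
    probs = [e / total for e in exps]
    # Select the two highest-probability experts.
    top2 = sorted(range(len(probs)), key=lambda i: probs[i], reverse=True)[:2]
    # Renormalize the two gate weights so they sum to 1.
    z = sum(probs[i] for i in top2)
    return [(i, probs[i] / z) for i in top2]

# One token's gate logits over 8 experts; only experts 5 and 1 would run.
print(top2_route([0.1, 2.0, -1.0, 0.5, 0.0, 3.0, 0.2, -0.5]))
```

Because only the two selected experts' feed-forward weights are used per token, each forward pass touches roughly 12.9B of the 46.7B parameters.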


Use Cases

💬

Cost-Efficient Chat

Build production chatbots with GPT-3.5-level quality at significantly lower cost.

✏️

Content Writing

Generate blog posts, marketing copy, and creative writing in multiple European languages.

⚡

Fast Inference

Serve real-time applications that need quick responses with high throughput.

📝

Text Summarization

Summarize articles, reports, and documents efficiently at scale.

🔄

Language Translation

Translate between English, French, German, Italian, and Spanish.


API Reference

Endpoint

POST https://api.voltagegpu.com/v1/chat/completions

Headers

Authorization: Bearer YOUR_VOLTAGE_API_KEY  (Required)
Content-Type: application/json  (Required)

Model ID

mistralai/Mixtral-8x7B-Instruct-v0.1

Use this value as the model parameter in your API requests.

Example Request

curl -X POST https://api.voltagegpu.com/v1/chat/completions \
  -H "Content-Type: application/json" \
  -H "Authorization: Bearer YOUR_VOLTAGE_API_KEY" \
  -d '{
    "model": "mistralai/Mixtral-8x7B-Instruct-v0.1",
    "messages": [
      {"role": "user", "content": "Write a concise summary of the key benefits of microservices architecture."}
    ],
    "max_tokens": 1024,
    "temperature": 0.5
  }'



Frequently Asked Questions

How does Mixtral 8x7B compare to GPT-3.5?

Mixtral 8x7B matches or exceeds GPT-3.5 Turbo on most benchmarks while being open source and cheaper. At $0.24 per 1M tokens for both input and output, versus GPT-3.5 Turbo's $0.50 per 1M input tokens, it offers excellent value. It particularly excels at European language tasks and code generation.

What makes MoE architecture special?

MoE (Mixture-of-Experts) routes each token to only 2 of the 8 expert networks, meaning only 12.9B of the 46.7B parameters are active per token. This gives the model the knowledge capacity of a large model with the speed of a small one.

Is Mixtral 8x7B still competitive?

While newer models have surpassed Mixtral 8x7B on some benchmarks, it remains an excellent choice for production workloads where cost and speed are priorities. Its proven reliability and the efficiency of its MoE architecture make it a strong option for many use cases.

What context length does Mixtral 8x7B support?

Mixtral 8x7B supports a 32,768 token context window, sufficient for most production use cases including multi-turn conversations, document summarization, and code generation.
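When budgeting prompts against the 32K window, remember that max_tokens reserves room for the completion. Exact counts require Mixtral's own tokenizer; a common rough heuristic for English prose is about 4 characters per token, which is enough for a quick sanity check:

```python
CONTEXT_WINDOW = 32_768

def approx_tokens(text):
    # Rough heuristic: ~4 characters per token for English prose.
    # For exact counts, use the model's own tokenizer.
    return max(1, len(text) // 4)

def fits_context(prompt, max_tokens, window=CONTEXT_WINDOW):
    """Check that the prompt plus the requested completion fits the window."""
    return approx_tokens(prompt) + max_tokens <= window

print(fits_context("word " * 1_000, max_tokens=1024))  # True: plenty of headroom
```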


Start using Mixtral 8x7B today

Get $5 free credit when you sign up. No credit card required. Deploy in under 30 seconds with our OpenAI-compatible API.