Mistral Large 3

Mistral AI · Mistral Large

Mistral's flagship European multimodal model with long context and competitive enterprise API economics.

Part of Mistral Large family
Type
multimodal
Context
256K tokens
Max Output
256K tokens
Status
current
Input
$0.50/1M tok
Output
$1.5/1M tok
API Access
Yes
License
Mistral Research License
europe multimodal reasoning function-calling long-context enterprise
Released December 2025 · Updated March 6, 2026

Overview

Freshness note: Model capabilities, limits, and pricing can change quickly. This profile is a point-in-time snapshot last verified on February 26, 2026.

Mistral Large 3 is Mistral AI’s flagship large model for enterprise-grade assistant and agent workloads. Mistral’s current docs describe it as a state-of-the-art, open-weight, general-purpose multimodal model with 41B active parameters and 675B total parameters.

In Mistral’s lineup, Large 3 is the high-capability endpoint, while smaller variants focus on latency and cost efficiency.

Capabilities

Mistral Large 3 is especially strong for:

  • Long-context language tasks over large documentation sets.
  • Tool and function-calling workflows in production assistants.
  • Multimodal interactions, including combined text-and-image workflows.
  • Structured enterprise tasks: extraction, synthesis, and procedural guidance.
  • Cost-sensitive high-quality generation compared with some frontier-priced alternatives.

It is often used as a “quality tier” in routing setups with Mistral Medium/Small handling bulk traffic.
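The tiered-routing idea can be sketched as a small dispatch function. The model IDs and thresholds below are illustrative assumptions, not official Mistral API identifiers:

```python
# Hypothetical tier router. Model ID strings and the length thresholds
# are illustrative only, not verified Mistral API aliases.
def pick_model(prompt: str, needs_tools: bool = False) -> str:
    """Route bulk traffic to cheaper tiers; escalate to the quality tier."""
    long_prompt = len(prompt) > 8_000      # rough character-count heuristic
    if needs_tools or long_prompt:
        return "mistral-large-3"           # quality tier
    if len(prompt) > 1_000:
        return "mistral-medium"            # mid tier
    return "mistral-small"                 # bulk traffic
```

In practice the heuristic would be tuned per workload (token counts, tool requirements, latency budgets) rather than raw character length.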

Technical Details

Mistral’s current model page lists:

  • 256K context length.
  • Open-weight multimodal positioning with 41B active parameters and 675B total parameters.
  • Native support for modern chat/completions-style orchestration patterns.
  • Strong compatibility with function-calling tool ecosystems.
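A function-calling request in the common chat/completions style can be sketched as below. The model alias and the `lookup_invoice` tool are hypothetical placeholders, and the payload shape follows the widely used OpenAI-compatible tools schema rather than a verified Mistral-specific one:

```python
# Sketch of a chat/completions-style payload with one declared tool.
# "mistral-large-3" and "lookup_invoice" are illustrative names only.
def build_request(user_msg: str) -> dict:
    return {
        "model": "mistral-large-3",        # hypothetical model alias
        "messages": [{"role": "user", "content": user_msg}],
        "tools": [{
            "type": "function",
            "function": {
                "name": "lookup_invoice",  # illustrative tool
                "description": "Fetch an invoice by ID.",
                "parameters": {
                    "type": "object",
                    "properties": {"invoice_id": {"type": "string"}},
                    "required": ["invoice_id"],
                },
            },
        }],
        "tool_choice": "auto",
    }

payload = build_request("Find invoice INV-1042")
```

The actual request would be POSTed to the provider's chat endpoint with an API key; consult the official API reference for the exact field names before relying on this shape.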

Mistral’s published model card emphasizes the maximum context length and does not always expose a separate per-model generation ceiling. For consistency in this profile schema, the Max Output field mirrors that published context limit. For teams with strict governance requirements, version pinning and automated regression checks remain important, because model aliases can shift behavior over time.
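The version-pinning practice mentioned above can be reduced to a fail-fast check in deployment config validation. The dated snapshot string is a hypothetical example, not a real Mistral version identifier:

```python
# Minimal pin check: fail fast if the configured model alias drifts from
# the pinned snapshot. The version string here is illustrative only.
PINNED_MODEL = "mistral-large-3-2512"  # hypothetical dated snapshot

def check_pin(configured: str) -> None:
    """Raise if the deployed config points at an unpinned model alias."""
    if configured != PINNED_MODEL:
        raise RuntimeError(
            f"Model drift: expected {PINNED_MODEL!r}, got {configured!r}"
        )
```

A CI job would pair this with golden-output regression tests so that a deliberate pin bump still gets behavioral review.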

Pricing & Access

Mistral platform pricing for Large 3 (per 1M tokens):

  • Input: $0.50
  • Output: $1.50

Access options:

  • Mistral API (La Plateforme)
  • Cloud and enterprise integration paths (including major cloud partners)

At this pricing level, Large 3 can be an attractive default high-capability model for many enterprise workloads that cannot sustain higher frontier token costs.
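At the listed rates, monthly spend is simple arithmetic. The traffic volumes below are made-up illustrative numbers:

```python
# Cost estimate at the listed rates: $0.50 input / $1.50 output per 1M tokens.
def monthly_cost(input_tokens: int, output_tokens: int) -> float:
    """Return USD cost for a month of traffic at Large 3 list pricing."""
    return input_tokens / 1e6 * 0.50 + output_tokens / 1e6 * 1.50

# Example: 200M input + 40M output tokens in a month
cost = monthly_cost(200_000_000, 40_000_000)  # 100.0 + 60.0 = 160.0 USD
```

This kind of back-of-envelope model makes it easy to compare against higher frontier token rates before committing to a routing policy.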

Best Use Cases

Choose Mistral Large 3 for:

  • Enterprise assistants with strict cost-performance constraints.
  • EU-oriented deployments requiring strong regional vendor alignment.
  • Tool-heavy workflows with structured output requirements.
  • Long-context analysis where 256K capacity is operationally useful.
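For the long-context case, a rough pre-flight budget check can catch oversized inputs before an API call. The ~4 characters/token ratio is a common English-text heuristic, not an exact tokenizer count, and the output reserve is an assumed value:

```python
# Rough context-budget check against the 256K window. Assumes ~4 chars/token
# (a common English-text heuristic); use a real tokenizer for exact counts.
CONTEXT_LIMIT = 256_000

def fits_in_context(docs: list[str], reserve_for_output: int = 8_000) -> bool:
    """Estimate whether the documents plus an output reserve fit the window."""
    est_tokens = sum(len(d) for d in docs) // 4
    return est_tokens + reserve_for_output <= CONTEXT_LIMIT
```

Inputs that fail this check would be chunked or summarized before submission.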

It is less ideal when you need the absolute latest frontier benchmark peak regardless of spend. It is also less ideal for minimal-complexity chat where smaller Mistral variants can deliver similar user value at significantly lower latency and cost.

Comparisons

  • GPT-5.4 (OpenAI): GPT-5.4 often leads in top-end frontier capability breadth; Mistral Large 3 is frequently more cost-favorable for enterprise throughput.
  • Claude Opus 4.6 (Anthropic): Opus is premium and highly capable on hard reasoning; Mistral offers a strong value proposition with lower token rates.
  • Gemini 3.1 Pro Preview (Google): Gemini has broader native multimodal span and a larger context window; Mistral Large 3 is simpler to position as a cost-efficient high-capability EU alternative.