Mistral AI on Amazon Bedrock

Break new ground with powerful foundation models from Mistral AI


Mistral AI models are transparent and customizable, appealing to enterprises that have compliance and regulatory requirements. The models are available under the Apache 2.0 license as white-box solutions, making both weights and code sources available.
Mistral AI models offer impressive inference speed and are optimized for low latency. The models also have low memory requirements and high throughput for their respective sizes (7B, 8x7B).
Mistral AI models strike a remarkable balance between cost-effectiveness and performance. The use of sparse mixture of experts (MoE) makes Mistral AI’s LLMs efficient, affordable, and scalable while controlling compute costs.
Drive insights for your business by quickly and easily fine-tuning models with your custom data to address specific tasks and achieve compelling performance.
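The sparse mixture of experts (MoE) design mentioned above is what lets a model like Mixtral 8x7B activate only about 12B of its 45B parameters per token. The following toy sketch (illustrative only, not Mistral AI's actual implementation) shows the core idea: a router scores a pool of expert networks and runs only the top-k of them for each token.

```python
import numpy as np

# Toy sparse Mixture-of-Experts routing: top-2 of 8 experts per token.
# This is a minimal illustration of the technique, not Mistral AI's code.
rng = np.random.default_rng(0)
N_EXPERTS, TOP_K, DIM = 8, 2, 16

# Each "expert" is a tiny feed-forward layer (one weight matrix here).
experts = [rng.standard_normal((DIM, DIM)) for _ in range(N_EXPERTS)]
router = rng.standard_normal((DIM, N_EXPERTS))  # gating weights

def moe_forward(x: np.ndarray) -> np.ndarray:
    """Route a token vector to its top-k experts and mix their outputs."""
    logits = x @ router                        # score every expert
    top = np.argsort(logits)[-TOP_K:]          # indices of the k best experts
    weights = np.exp(logits[top] - logits[top].max())
    weights /= weights.sum()                   # softmax over selected experts
    # Only TOP_K of N_EXPERTS expert networks actually run for this token,
    # which is why the "active" parameter count is a fraction of the total.
    return sum(w * (x @ experts[i]) for w, i in zip(weights, top))

token = rng.standard_normal(DIM)
out = moe_forward(token)
```

Because compute scales with the active experts rather than the full parameter count, MoE models keep inference cost close to a much smaller dense model.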

Meet Mistral AI

Mistral AI is on a mission to push AI forward. Its cutting-edge models reflect the company's ambition to become the leading supporter of the generative AI community and to elevate publicly available models to state-of-the-art performance.

Use cases

Mistral AI models extract the essence from lengthy articles so you can quickly grasp key ideas and core messaging.

Mistral AI models deeply understand the underlying structure and architecture of text, organize information within text, and help focus attention on key concepts and relationships.

The core AI capabilities of understanding language, reasoning, and learning allow Mistral AI models to handle question answering with more human-like performance. The accuracy, explanation abilities, and versatility of Mistral AI models make them very useful for automating and scaling knowledge sharing.

Mistral AI models have an exceptional understanding of natural language and code-related tasks, which is essential for projects that need to juggle computer code and regular language. Mistral AI models can help generate code snippets, suggest bug fixes, and optimize existing code, speeding up your development process.
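As a sketch of how a code-generation request looks on Amazon Bedrock, the snippet below builds a request body in the documented Mistral prompt format (`[INST] ... [/INST]`) and shows the InvokeModel call. The model ID is an example and regional availability varies; the `build_request` helper is our own illustrative name, and the AWS call is commented out so the sketch runs without credentials.

```python
import json

# boto3 is the AWS SDK for Python; the Bedrock call is commented out
# so this sketch stays runnable without AWS credentials.
# import boto3

MODEL_ID = "mistral.mistral-7b-instruct-v0:2"  # example ID; check your region

def build_request(prompt: str, max_tokens: int = 256,
                  temperature: float = 0.2) -> str:
    """Build the JSON body for a Mistral instruct model on Bedrock.

    Mistral instruct models expect the [INST] ... [/INST] wrapping.
    """
    return json.dumps({
        "prompt": f"<s>[INST] {prompt} [/INST]",
        "max_tokens": max_tokens,
        "temperature": temperature,
    })

body = build_request("Write a Python function that reverses a string.")

# client = boto3.client("bedrock-runtime")
# response = client.invoke_model(modelId=MODEL_ID, body=body)
# result = json.loads(response["body"].read())
# print(result["outputs"][0]["text"])
```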

Model versions

Mistral Large

Mistral AI’s most advanced large language model, Mistral Large is a cutting-edge text generation model with top-tier reasoning capabilities. Its precise instruction-following abilities enable application development and tech stack modernization at scale.

Max tokens: 32K

Languages: Natively fluent in English, French, Spanish, German, and Italian

Supported use cases: Precise instruction following, text summarization, translation, complex multilingual reasoning tasks, and math and coding tasks, including code generation
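For the multilingual tasks listed above, Bedrock's Converse API offers a uniform way to call Mistral Large. The sketch below shapes a single-turn translation request; the model ID is an example (availability varies by region), `build_messages` is our own illustrative helper, and the AWS call is commented out so the sketch runs without credentials.

```python
# Illustrative use of Bedrock's Converse API with Mistral Large for a
# multilingual task. Model ID is an example; check regional availability.
# import boto3  # commented out so the sketch runs without AWS credentials

MODEL_ID = "mistral.mistral-large-2402-v1:0"

def build_messages(user_text: str) -> list:
    """Shape a single-turn conversation for the Converse API."""
    return [{"role": "user", "content": [{"text": user_text}]}]

messages = build_messages(
    "Translate to French: 'The model is natively fluent in five languages.'"
)

# client = boto3.client("bedrock-runtime")
# response = client.converse(
#     modelId=MODEL_ID,
#     messages=messages,
#     inferenceConfig={"maxTokens": 256, "temperature": 0.2},
# )
# print(response["output"]["message"]["content"][0]["text"])
```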

Mistral 7B

A 7B dense Transformer, fast-deployed and easily customizable. Small, yet
powerful for a variety of use cases.

Max tokens: 8K

Languages: English

Supported use cases: Text summarization, structuration, question answering,
and code completion

Mixtral 8x7B

An 8x7B sparse Mixture-of-Experts model with stronger capabilities than Mistral
7B. It uses 12B active parameters out of 45B total.

Max tokens: 32K

Languages: English, French, German, Spanish, Italian

Supported use cases: Text summarization, structuration, question answering,
and code completion