Meta Llama in Amazon Bedrock

Build the future of AI with Llama

Introducing Llama 3.1

Llama 3.1 demonstrates state-of-the-art performance on a wide range of industry benchmarks and offers new capabilities, including a 128K context length, improved reasoning, support for eight languages, and Llama 3.1 405B, the largest publicly available foundation model.

Llama 3.1 405B is the largest openly available LLM, designed for developers, researchers, and businesses to build, experiment, and responsibly scale generative AI ideas. Llama 3.1 405B sets a new standard in AI and is ideal for enterprise-level applications, research and development, synthetic data generation, and model distillation. The model excels at general knowledge, long-form text generation, machine translation, enhanced contextual understanding, advanced reasoning and decision making, better handling of ambiguity and uncertainty, increased creativity and diversity, steerability, math, tool use, multilingual translation, and coding.

Llama 3.1 70B is ideal for content creation, conversational AI, language understanding, research and development, and enterprise applications. The model excels at text summarization and accuracy, text classification and nuance, sentiment analysis and nuanced reasoning, language modeling, dialogue systems, code generation, and following instructions.

Llama 3.1 8B is ideal for environments with limited computational power and resources, faster training times, and edge devices. The model excels at text summarization, text classification, sentiment analysis, and language translation.

Benefits

The 128K context length is 16 times that of its predecessor, allowing the models to capture even more nuanced relationships in data.
Llama models are trained on over 15 trillion tokens from publicly available online data sources to better comprehend language intricacies.
Llama 3.1 is multilingual and supports eight languages: English, German, French, Italian, Portuguese, Hindi, Spanish, and Thai.
Amazon Bedrock's managed API makes using Llama models easier than ever. Organizations of all sizes can access the power of Llama without worrying about the underlying infrastructure. Since Amazon Bedrock is serverless, you don't have to manage any infrastructure, and you can securely integrate and deploy the generative AI capabilities of Llama into your applications using the AWS services you are already familiar with. This means you can focus on what you do best—building your AI applications.
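As a rough sketch of how simple that managed access can be, the example below calls a Llama 3.1 model through the Bedrock Converse API using the AWS SDK for Python (boto3). The region, model ID, and inference parameters are illustrative assumptions; the identifiers available to you depend on your account, your region, and the model access you have enabled.

```python
# Minimal sketch: calling a Llama 3.1 model through Amazon Bedrock's Converse API.
# Assumes boto3 is installed, AWS credentials are configured, and Meta Llama model
# access has been enabled in the Bedrock console. Model ID and region are assumptions.
import boto3

client = boto3.client("bedrock-runtime", region_name="us-east-1")

response = client.converse(
    modelId="meta.llama3-1-70b-instruct-v1:0",  # assumed ID for Llama 3.1 70B Instruct
    messages=[
        {
            "role": "user",
            "content": [{"text": "Summarize the benefits of a 128K context window in two sentences."}],
        }
    ],
    inferenceConfig={"maxTokens": 512, "temperature": 0.5, "topP": 0.9},
)

# The reply comes back as a list of content blocks on the assistant message.
print(response["output"]["message"]["content"][0]["text"])
```

Because the service is serverless, the same call pattern works across Llama sizes; moving from 8B to 405B is a change of model ID rather than a change of deployment.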

Meet Llama

For more than a decade, Meta has focused on putting tools into the hands of developers and on fostering collaboration and advancement among developers, researchers, and organizations. Llama models are available in a range of parameter sizes, enabling developers to select the model that best fits their needs and inference budget. Llama models in Amazon Bedrock open up a world of possibilities because developers don't need to worry about scalability or managing infrastructure. Amazon Bedrock offers a simple, turnkey way for developers to get started with Llama.

Use cases

Llama models excel at language nuances, contextual understanding, and complex tasks such as translation and dialogue generation, and they can handle multi-step tasks effortlessly. Examples of use cases where Llama models excel include text summarization and accuracy, text classification, sentiment analysis and nuanced reasoning, language modeling, dialogue systems, code generation, and following instructions.
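As one hedged illustration of the dialogue and multi-step use cases above, the sketch below keeps a short conversation going with a Llama model over the Bedrock Converse API by replaying prior turns on each call. The model ID, region, and the ask helper are assumptions made for this example, not part of any Bedrock SDK.

```python
# Minimal sketch of a multi-turn dialogue with a Llama model on Amazon Bedrock.
# Conversation state is held client-side and resent on every call. The model ID,
# region, and inference settings are assumptions; adjust them to your account.
import boto3

client = boto3.client("bedrock-runtime", region_name="us-east-1")
MODEL_ID = "meta.llama3-1-8b-instruct-v1:0"  # assumed ID for Llama 3.1 8B Instruct

messages = []  # running conversation history

def ask(user_text: str) -> str:
    """Append a user turn, call the model, and record the assistant's reply."""
    messages.append({"role": "user", "content": [{"text": user_text}]})
    response = client.converse(
        modelId=MODEL_ID,
        messages=messages,
        inferenceConfig={"maxTokens": 256, "temperature": 0.2},
    )
    reply = response["output"]["message"]["content"][0]["text"]
    messages.append({"role": "assistant", "content": [{"text": reply}]})
    return reply

# A simple multi-step exchange: classify first, then act on the classification.
print(ask("Classify the sentiment of this review: 'The checkout flow kept timing out.'"))
print(ask("Now draft a two-sentence apology email that addresses that issue."))
```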

Model versions

Meta Llama 3.1 8B

Ideal for environments with limited computational power and resources, faster training times, and edge devices.

Max tokens: 128K

Languages: English, German, French, Italian, Portuguese, Hindi, Spanish, and Thai.

Fine-tuning supported: Coming soon

Supported use cases: Text summarization, text classification, sentiment analysis, and language translation.

Read the blog

Meta Llama 3.1 70B

Ideal for content creation, conversational AI, language understanding, research and development, and enterprise applications.

Max tokens: 128K

Languages: English, German, French, Italian, Portuguese, Hindi, Spanish, and Thai.

Fine-tuning supported: Coming soon

Supported use cases: Text summarization, text classification, sentiment analysis, and language translation.

Read the blog

Meta Llama 3.1 405B

Ideal for enterprise-level applications, research and development, synthetic data generation, and model distillation.

Max tokens: 128K

Languages: English, German, French, Italian, Portuguese, Hindi, Spanish, and Thai.

Fine-tuning supported: Coming soon

Supported use cases: General knowledge, long-form text generation, machine translation, enhanced contextual understanding, advanced reasoning and decision making, better handling of ambiguity and uncertainty, increased creativity and diversity, steerability, math, tool use, multilingual translation, and coding.

Read the blog

Meta Llama 3 8B

Ideal for environments with limited computational power and resources, faster training times, and edge devices.

Max tokens: 8K

Languages: English

Fine-tuning supported: No

Supported use cases: Text summarization, text classification, sentiment analysis, and language translation.

Read the blog

Meta Llama 3 70B

Ideal for content creation, conversational AI, language understanding, research and development, and enterprise applications.

Max tokens: 8K

Languages: English

Fine-tuning supported: No

Supported use cases: Text summarization and accuracy, text classification and nuance, sentiment analysis and nuanced reasoning, language modeling, dialogue systems, code generation, and following instructions.

Read the blog

Meta Llama 2 13B

A fine-tuned model with 13 billion parameters, suitable for smaller-scale tasks such as text classification, sentiment analysis, and language translation.

Max tokens: 4K

Languages: English

Fine-tuning supported: Yes

Supported use cases: Assistant-like chat

Read the blog

Meta Llama 2 70B

A fine-tuned model with 70 billion parameters, suitable for larger-scale tasks such as language modeling, text generation, and dialogue systems.

Max tokens: 4K

Languages: English

Fine-tuning supported: Yes

Supported use cases: Assistant-like chat

Read the blog

Nomura uses Llama models from Meta in Amazon Bedrock to democratize generative AI

Aniruddh Singh, Nomura's Executive Director and Enterprise Architect, outlines the financial institution’s journey to democratize generative AI firm-wide using Amazon Bedrock and Llama models from Meta. Amazon Bedrock provides critical access to leading foundation models like Llama, enabling seamless integration. Llama offers key benefits to Nomura, including faster innovation, transparency, bias guardrails, and robust performance across text summarization, code generation, log analysis, and document processing. 

TaskUs revolutionizes customer experiences using Llama models from Meta in Amazon Bedrock

TaskUs, a leading provider of outsourced digital services and next-generation customer experience to the world’s most innovative companies, helps its clients represent, protect, and grow their brands. Its innovative TaskGPT platform, powered by Amazon Bedrock and Llama models from Meta, empowers teammates to deliver exceptional service. TaskUs builds tools on TaskGPT that leverage Amazon Bedrock and Llama for cost-effective paraphrasing, content generation, comprehension, and complex task handling.