Posted On: Jul 18, 2023
Starting today, Llama 2 foundation models from Meta are available in Amazon SageMaker JumpStart, a machine learning (ML) hub that offers pretrained models, built-in algorithms, and pre-built solutions to help you quickly get started with ML. You can deploy and use Llama 2 foundation models with a few clicks in SageMaker Studio or programmatically through the SageMaker Python SDK.
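For the programmatic path, a minimal sketch of deploying and invoking a pretrained Llama 2 model with the SageMaker Python SDK might look like the following. The model ID, default instance type, EULA-acceptance mechanism, and inference parameters shown are illustrative assumptions and may differ by SDK version and Region.

```python
# Minimal sketch: deploy a Llama 2 model from SageMaker JumpStart with the
# SageMaker Python SDK and run a single text-generation request.
# The model_id and EULA handling below are assumptions for illustration.
from sagemaker.jumpstart.model import JumpStartModel

# Reference the pretrained 7B model in JumpStart (assumed model ID).
model = JumpStartModel(model_id="meta-textgeneration-llama-2-7b")

# Deploy to a real-time endpoint; JumpStart selects a default GPU instance type.
predictor = model.deploy()

# Send a prompt with typical text-generation parameters. Llama 2 requires
# accepting Meta's EULA, passed here as a custom attribute on the request.
payload = {
    "inputs": "I believe the meaning of life is",
    "parameters": {"max_new_tokens": 64, "top_p": 0.9, "temperature": 0.6},
}
response = predictor.predict(payload, custom_attributes="accept_eula=true")
print(response)

# Delete the endpoint when finished to stop incurring charges.
predictor.delete_endpoint()
```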
Llama 2 is an autoregressive language model that uses an optimized transformer architecture. It comes in a range of parameter sizes (7B, 13B, and 70B) as well as pretrained and fine-tuned variations. The tuned versions use supervised fine-tuning (SFT) and reinforcement learning from human feedback (RLHF) to generate more relevant responses. Developers can consult Meta's Responsible Use Guide, which outlines best practices for responsibly building each layer of a generative AI application and for addressing the risks associated with commercial use of LLMs.
You can now combine the performance of Llama 2 with MLOps controls through SageMaker features such as SageMaker Pipelines, SageMaker Debugger, and container logs. The model is deployed in an AWS secure environment under your VPC controls, helping ensure data security. Llama 2 is intended for commercial and research use in English. Tuned models are intended for assistant-like chat, whereas pretrained models can be adapted for a variety of natural language generation tasks.
Llama 2 foundation models are available today in SageMaker JumpStart, initially in the us-east-1 and us-west-2 Regions. To discover these models, upgrade your SageMaker Studio environment to the latest version. To get started with Llama 2 foundation models via SageMaker JumpStart, see the documentation and the launch blog post.