Posted On: Apr 18, 2024
Starting today, the next generation of the Meta Llama models, Llama 3, is now available via Amazon SageMaker JumpStart, a machine learning (ML) hub that offers pretrained models, built-in algorithms, and pre-built solutions to help you quickly get started with ML. You can deploy and use Llama 3 foundation models with a few clicks in SageMaker Studio or programmatically through the SageMaker Python SDK.
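As a minimal sketch of the programmatic path, the snippet below shows what deployment and invocation through the SageMaker Python SDK typically look like. The model identifier, the `accept_eula` flag, and the payload schema are assumptions based on common JumpStart conventions and may differ by SDK version and Region; the SDK calls are shown as comments, while the inference payload itself is plain Python:

```python
import json

# Hypothetical deployment sketch (assumed model_id and API shape; verify
# against your SDK version before running, and note that deployment
# requires accepting Meta's EULA and incurs AWS charges):
#
#   from sagemaker.jumpstart.model import JumpStartModel
#   model = JumpStartModel(model_id="meta-textgeneration-llama-3-8b-instruct")
#   predictor = model.deploy(accept_eula=True)
#   response = predictor.predict(payload)

# Constructing the inference request payload:
payload = {
    "inputs": "Explain what a decoder-only transformer is in one sentence.",
    "parameters": {
        "max_new_tokens": 128,   # cap on the number of generated tokens
        "temperature": 0.6,      # lower values make output more deterministic
        "top_p": 0.9,            # nucleus-sampling probability threshold
    },
}

print(json.dumps(payload, indent=2))
```

The same payload can be sent from SageMaker Studio's deployed-endpoint test UI or via the low-level `InvokeEndpoint` API; only the transport differs.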
Llama 3 comes in two parameter sizes — 8B and 70B — and supports a broad range of use cases, with improvements in reasoning, code generation, and instruction following. Llama 3 uses a decoder-only transformer architecture and a new tokenizer that together improve model performance. In addition, Meta’s improved post-training procedures substantially reduced false refusal rates, improved alignment, and increased diversity in model responses. You can now combine the performance of Llama 3 with MLOps controls from Amazon SageMaker features such as SageMaker Pipelines, SageMaker Debugger, and container logs. The model is deployed in a secure AWS environment under your VPC controls, helping ensure data security.
Llama 3 foundation models are available today in SageMaker JumpStart, initially in the US East (Ohio), US West (Oregon), US East (N. Virginia), Asia Pacific (Tokyo), and Europe (Ireland) AWS Regions. To get started with Llama 3 foundation models via SageMaker JumpStart, see the documentation and blog.