Meta’s Llama 3.1 8B and 70B models are now available for fine-tuning in Amazon Bedrock
Amazon Bedrock now supports fine-tuning for Meta’s Llama 3.1 70B and 8B models, enabling businesses to customize these generative AI models with their own data. The Llama 3.1 models offer significant improvements over earlier versions, including a 128K context length—16 times greater than Llama 3—allowing you to access and process larger volumes of information from lengthy text passages. You can use fine-tuning to adapt the Llama 3.1 models for domain-specific tasks, enhancing model performance for specialized use cases.
According to Meta, Llama 3.1 models excel at multilingual dialogue across eight languages and demonstrate improved reasoning. The Llama 3.1 70B model is ideal for content creation, conversational AI, language understanding, R&D, and enterprise applications. It excels at tasks such as text summarization, text classification, sentiment analysis, nuanced reasoning, language modeling, dialogue systems, code generation, and instruction following. The Llama 3.1 8B model is best suited for scenarios with limited computational power and resources; it excels at text summarization, classification, sentiment analysis, and language translation while delivering low-latency inference. By fine-tuning Llama 3.1 models in Amazon Bedrock, businesses can further enhance their capabilities for specialized applications, improving accuracy and relevance without needing to build models from scratch.
You can fine-tune Llama 3.1 models in Amazon Bedrock in the US West (Oregon) AWS Region. For pricing, visit the Amazon Bedrock pricing page. To get started, see the Amazon Bedrock User Guide.
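As a quick orientation, fine-tuning in Amazon Bedrock is submitted as a model customization job through the AWS SDK. The sketch below builds such a request with boto3's `create_model_customization_job` call; the job name, custom model name, IAM role ARN, S3 URIs, base model identifier, and hyperparameter values are illustrative placeholders, not values from this announcement — check the Amazon Bedrock User Guide for the exact identifiers and supported hyperparameters.

```python
def build_customization_request():
    """Assemble an illustrative fine-tuning request for a Llama 3.1 model.

    All identifiers below are placeholders for demonstration only.
    """
    return {
        "jobName": "llama31-8b-finetune-demo",
        "customModelName": "llama31-8b-custom",
        # Placeholder IAM role that Bedrock assumes to read/write your S3 data.
        "roleArn": "arn:aws:iam::123456789012:role/BedrockFineTuneRole",
        # Assumed base model identifier; verify the current ID in the console.
        "baseModelIdentifier": "meta.llama3-1-8b-instruct-v1:0",
        # Hyperparameters are passed as strings; values here are examples.
        "hyperParameters": {
            "epochCount": "2",
            "batchSize": "1",
            "learningRate": "0.0001",
        },
        # Training data is a JSONL file of prompt/completion pairs in S3.
        "trainingDataConfig": {"s3Uri": "s3://my-bucket/train.jsonl"},
        "outputDataConfig": {"s3Uri": "s3://my-bucket/output/"},
    }


def start_fine_tune_job(request, region="us-west-2"):
    """Submit the customization job; requires AWS credentials and boto3."""
    import boto3  # deferred so building the request needs no AWS dependency

    client = boto3.client("bedrock", region_name=region)
    return client.create_model_customization_job(**request)
```

Once submitted, the job runs asynchronously; you can poll its status with `get_model_customization_job` and, after it completes, invoke the resulting custom model through Bedrock as usual.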