New generative AI innovations
At re:Invent, we launched new products, capabilities, and features to make it easier for companies of all sizes to adopt generative AI at scale.
Introducing Amazon Nova
Amazon Nova is a new generation of state-of-the-art foundation models with frontier intelligence and industry-leading price performance.
Amazon Nova foundation models include:
- Amazon Nova Micro: a text-only model that delivers the lowest-latency responses at very low cost
- Amazon Nova Lite: a very low-cost multimodal model for lightning-fast processing of image, video, and text inputs
- Amazon Nova Pro: a highly capable multimodal model with the best combination of accuracy, speed, and cost
- Amazon Nova Canvas: a state-of-the-art image generation model
- Amazon Nova Reel: a state-of-the-art video generation model
This image was generated using Amazon Nova Canvas with the prompt "a portrait of a happy corgi dog".
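As a rough illustration of how the Nova understanding models listed above can be called, the sketch below invokes Amazon Nova Lite through the Amazon Bedrock Converse API with boto3. The Region and model ID are assumptions and may differ in your account.

```python
# Minimal sketch (not a definitive example): calling Amazon Nova Lite via the
# Amazon Bedrock Converse API. Assumes AWS credentials are configured and the
# model is enabled in the chosen Region; the model ID may vary by account/Region.
import boto3

bedrock = boto3.client("bedrock-runtime", region_name="us-east-1")

response = bedrock.converse(
    modelId="amazon.nova-lite-v1:0",  # assumed model ID
    messages=[
        {"role": "user", "content": [{"text": "Describe this product launch in one sentence."}]}
    ],
    inferenceConfig={"maxTokens": 300, "temperature": 0.3},
)

# The assistant's reply is returned as a list of content blocks.
print(response["output"]["message"]["content"][0]["text"])
```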
Transform how work gets done throughout your organization
Amazon Q is a generative AI-powered assistant that accelerates software development and helps companies make the most of their internal data, making generative AI securely accessible to everyone in the organization.
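As a minimal sketch of the programmatic side of this, the snippet below sends a question to an Amazon Q Business application with boto3. The application ID and question are placeholders, and the call assumes an application has already been created and connected to your data sources.

```python
# Minimal sketch: querying an Amazon Q Business application over indexed company
# data. The application ID below is a placeholder; an application, index, and
# data source connectors are assumed to exist already.
import boto3

qbusiness = boto3.client("qbusiness", region_name="us-east-1")

response = qbusiness.chat_sync(
    applicationId="11111111-2222-3333-4444-555555555555",  # placeholder
    userMessage="What is our travel reimbursement policy?",
)

print(response["systemMessage"])            # generated answer
for source in response.get("sourceAttributions", []):
    print("Source:", source.get("title"))   # documents the answer was grounded on
```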
New capabilities include:
- Build and deploy faster with Amazon Q Developer agents that automate testing, create and maintain documentation, and automate code reviews
- Transform your large-scale migration and modernization projects across Java, Windows .NET, VMware, and mainframe workloads
- Access your Amazon Q assistant seamlessly within third-party applications with simple-to-deploy extensions for popular web browsers and productivity tools
- Unite your structured and unstructured data sources with connectors to over 40 commonly used business tools
- Deliver more conversational, personalized, and intelligent self-service support for your end customers
Easily build and scale generative AI applications for your use case
Amazon Bedrock is a fully managed service that offers a choice of high-performing foundation models (FMs) from leading AI companies, along with capabilities for building generative AI applications securely and responsibly, all without having to manage the underlying infrastructure. It makes it easy to evaluate FMs and provides advanced customization capabilities such as Knowledge Bases, Guardrails, Agents, and Flows.
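For instance, a Knowledge Base can be queried with a single retrieve-and-generate call. The sketch below is a rough outline that assumes a Knowledge Base already exists; the knowledge base ID and model ARN are placeholders.

```python
# Minimal sketch: retrieval-augmented generation against an existing Amazon
# Bedrock Knowledge Base. The knowledge base ID and model ARN are placeholders.
import boto3

agent_runtime = boto3.client("bedrock-agent-runtime", region_name="us-east-1")

response = agent_runtime.retrieve_and_generate(
    input={"text": "Which Regions is the new instance family available in?"},
    retrieveAndGenerateConfiguration={
        "type": "KNOWLEDGE_BASE",
        "knowledgeBaseConfiguration": {
            "knowledgeBaseId": "KB1234567890",  # placeholder
            "modelArn": "arn:aws:bedrock:us-east-1::foundation-model/amazon.nova-pro-v1:0",  # assumed
        },
    },
)

print(response["output"]["text"])  # grounded answer; citations are also available in the response
```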
New capabilities include:
- Access to over 100 self-managed models on Amazon Bedrock Marketplace
- Specialized models for video generation
- Model Distillation for cost-effective, task-specific inference
- Routing and caching for cost-optimized generative AI apps
- Multi-agent collaboration for enhanced problem-solving
- Develop and customize applications with Amazon Bedrock IDE in Amazon SageMaker Unified Studio
Infrastructure to build and customize AI and ML models
AWS Trainium2-powered Amazon EC2 Trn2 instances, our highest performing instances for deep learning and generative AI, deliver up to 30% more compute and high-bandwidth memory at a lower price than the next most powerful EC2 instance. For the most demanding, state-of-the-art models that need more compute and memory than a single instance can deliver, Amazon EC2 Trn2 UltraServers offer the fastest training and inference performance on AWS.
Explore Amazon EC2 Trn2 Instances
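As a rough sketch, a Trn2 instance can be launched like any other EC2 instance. The AMI ID and key pair below are placeholders (for example, a Deep Learning AMI), and availability of the trn2.48xlarge instance type depends on your Region and quotas.

```python
# Minimal sketch: launching a Trn2 instance with boto3. The AMI ID, key pair,
# and Region are placeholders/assumptions; quotas and capacity must be in place.
import boto3

ec2 = boto3.client("ec2", region_name="us-east-1")

result = ec2.run_instances(
    ImageId="ami-0123456789abcdef0",   # placeholder: e.g. a Deep Learning AMI
    InstanceType="trn2.48xlarge",
    KeyName="my-key-pair",             # placeholder key pair
    MinCount=1,
    MaxCount=1,
)

print(result["Instances"][0]["InstanceId"])
```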
Three new Amazon SageMaker HyperPod innovations make it easier for customers to quickly get started with training some of today’s most popular publicly available models, save weeks of model training time with flexible training plans, and maximize compute resource utilization to reduce costs by up to 40%.
SageMaker AI customers can now easily and securely discover, deploy, and use fully managed generative AI and ML development applications from AWS partners, such as Comet, Deepchecks, Fiddler AI, and Lakera, directly in SageMaker Studio, giving them the flexibility to choose the tools that work best for them.
Advancing trust in AI
Amazon has a long-standing commitment to building and using AI responsibly to foster trust and deliver long-term value to our customers. We take a people-centric approach that prioritizes education, science, and our customers, integrating responsible AI across the end-to-end AI lifecycle.
New tools and resources
- ISO 42001 accredited certification for core AI services
- Build trustworthy applications for high-stakes use cases with Automated Reasoning checks in Amazon Bedrock Guardrails (see the guardrail sketch after this list)
- Evaluate FMs and RAG applications against key safety and accuracy criteria with human-like quality using the LLM-as-a-Judge capability
- New AI Service Cards to enhance transparency for Amazon models
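As a rough sketch of how a guardrail is attached at inference time, the snippet below passes placeholder guardrail identifiers to a Converse API call. Configuration of the guardrail itself, including any Automated Reasoning check policies, is done separately and is not shown here.

```python
# Minimal sketch: attaching an existing Amazon Bedrock guardrail to a model
# invocation. The model ID, guardrail ID, and version are placeholders; the
# guardrail's policies (including any Automated Reasoning checks) are assumed
# to have been configured beforehand.
import boto3

bedrock = boto3.client("bedrock-runtime", region_name="us-east-1")

response = bedrock.converse(
    modelId="amazon.nova-pro-v1:0",  # assumed model ID
    messages=[{"role": "user", "content": [{"text": "Summarize the claim decision policy."}]}],
    guardrailConfig={
        "guardrailIdentifier": "gr-1234567890",  # placeholder guardrail ID
        "guardrailVersion": "1",                 # placeholder version
    },
)

print(response["output"]["message"]["content"][0]["text"])
```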
Generative AI on AWS
Organizations of all sizes and types are harnessing large language models (LLMs) and foundation models (FMs) to build generative AI applications that deliver new customer and employee experiences. With enterprise-grade security and privacy, access to industry-leading FMs, and generative AI-powered applications, AWS makes it easy to build and scale generative AI customized for your data, your use cases, and your customers.
Tools to build and scale generative AI applications
Innovate faster with new capabilities, a choice of industry-leading FMs, and infrastructure that pushes the envelope to deliver the highest performance while lowering costs.