TwelveLabs in Amazon Bedrock (coming soon)

Unlock the full potential of enterprise video assets

Introducing TwelveLabs

TwelveLabs uses multimodal foundation models (FMs) to bring humanlike understanding to video data. The company's FMs understand what is happening in videos, including actions, objects, and background sounds, allowing developers to create applications that can search through videos, classify scenes, summarize, and extract insights with precision and reliability.

State-of-the-art video understanding models

Marengo 2.7

Get fast, context-aware search results that pinpoint exactly what you’re looking for in your videos, moving beyond basic tags to true multimodal understanding.

Pegasus 1.2

Transform your videos through the power of language. Generate everything you need from video: concise summaries, captivating highlights, effective hashtags, and customized reports. Uncover deeper insights and unlock entirely new possibilities with your content.

TwelveLabs in Amazon Bedrock overview

With Marengo and Pegasus in Amazon Bedrock, you can use TwelveLabs’ models to build and scale generative AI applications without having to manage the underlying infrastructure. You also get access to a broad set of capabilities while keeping complete control over your data, backed by the enterprise-grade security and cost-management features that are essential for deploying AI responsibly at scale.
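
Once available, you would work with both models through the standard AWS SDKs rather than provisioning any infrastructure yourself. As a minimal sketch, boto3’s Bedrock control-plane client can discover the model IDs when they launch; the provider name passed to byProvider below is an assumption, since the models are not yet live:

```python
import boto3

# The control-plane "bedrock" client enumerates the foundation models
# available in your account and Region.
bedrock = boto3.client("bedrock", region_name="us-east-1")

# "TwelveLabs" as a provider name is an assumption; check the Bedrock
# console for the exact value once the models launch.
response = bedrock.list_foundation_models(byProvider="TwelveLabs")
for model in response["modelSummaries"]:
    print(model["modelId"], "-", model["modelName"])
```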

Model versions

Marengo 2.7

Video embedding model that excels at tasks such as search and classification, enabling enhanced video understanding.

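Because Marengo embeds video and text into a shared multimodal space, search reduces to a nearest-neighbor lookup over stored embeddings. Here is a minimal sketch of that retrieval step, assuming you have already used Marengo to embed your video clips and your text query (the embedding calls themselves are omitted, since the Bedrock request format for the model has not been published):

```python
import numpy as np

def cosine_similarity(a: np.ndarray, b: np.ndarray) -> float:
    # Cosine similarity is the usual ranking metric for embedding search.
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

def search(
    query_embedding: np.ndarray,
    clip_embeddings: dict[str, np.ndarray],
    top_k: int = 5,
) -> list[tuple[str, float]]:
    # Score every clip against the query and return the best matches.
    scored = [
        (clip_id, cosine_similarity(query_embedding, embedding))
        for clip_id, embedding in clip_embeddings.items()
    ]
    return sorted(scored, key=lambda item: item[1], reverse=True)[:top_k]
```

At production scale you would hand this ranking step off to a vector store such as Amazon OpenSearch Service rather than scanning embeddings in memory.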


Pegasus 1.2

Video language model that can generate text based on your video data.

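Generating text from a video would follow the familiar Bedrock invoke_model pattern. In the sketch below, the model ID, request fields, and response shape are all placeholders for illustration, since TwelveLabs’ Bedrock schema has not been published yet:

```python
import json

import boto3

bedrock_runtime = boto3.client("bedrock-runtime", region_name="us-east-1")

# Placeholder model ID and request schema; the real values will come
# from the Bedrock documentation once Pegasus is live.
response = bedrock_runtime.invoke_model(
    modelId="twelvelabs.pegasus-1-2",
    body=json.dumps({
        "inputPrompt": "Summarize this video in three sentences.",
        "mediaSource": {"s3Uri": "s3://my-bucket/keynote.mp4"},  # assumed field
    }),
)
output = json.loads(response["body"].read())  # response shape is assumed
print(output)
```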