Overview
This model understands space, time, and fundamental physics, and can serve as a planning model to reason about what steps an embodied agent might take next.
Cosmos Reason excels at navigating the long tail of diverse physical-world scenarios with spatial-temporal understanding. It is post-trained on physical common sense and embodied reasoning data using supervised fine-tuning and reinforcement learning, and uses chain-of-thought reasoning to understand world dynamics without human annotations.
Given a video and a text prompt, the model first converts the video into tokens using a vision encoder and a projector that maps visual features into the language model's embedding space. These video tokens are combined with the text prompt and fed into the LLM backbone, which reasons step-by-step to produce detailed, logical responses.
Cosmos Reason can be used for robotics and physical AI applications, including:
- Data curation and annotation: Enable developers to automate high-quality curation and annotation of massive, diverse training datasets.
- Robot planning and reasoning: Act as the brain for deliberate, methodical decision-making in a robot vision language action (VLA) model. Robots such as humanoids and autonomous vehicles can interpret environments and, given complex commands, break them down into tasks and execute them using common sense, even in unfamiliar environments.
- Video analytics AI agents: Extract valuable insights and perform root-cause analysis on massive volumes of video data. These agents can analyze and understand recorded or live video streams across city and industrial operations.
The model is ready for commercial use.
Highlights
- Architecture Type: Multi-modal LLM consisting of a Vision Transformer (ViT) vision encoder and a dense Transformer LLM.
- Network Architecture: Qwen2.5-VL-7B-Instruct.
- Cosmos-Reason1-7B is post-trained from Qwen2.5-VL-7B-Instruct and follows the same model architecture.
- Number of model parameters (Cosmos-Reason1-7B):
  - Vision Transformer (ViT): 675.76M (675,759,104)
  - Language Model (LLM): 7.07B (7,070,619,136)
  - Other components (output projection layer): 545.00M (544,997,376)
- Computational load: cumulative compute of 3.2603016e+21 FLOPs.
- Estimated energy and emissions for model training: total of 16,658,432 kWh and 5,380.674 tCO2e.
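As a quick sanity check, the per-component counts above can be reproduced by loading the checkpoint with Hugging Face `transformers` and grouping parameters by top-level module. This is a minimal sketch, assuming the public `nvidia/Cosmos-Reason1-7B` checkpoint loads with the Qwen2.5-VL model class, per the architecture notes above.

```python
# Minimal sketch: group Cosmos-Reason1-7B parameters by top-level module.
# Assumes the public Hugging Face checkpoint "nvidia/Cosmos-Reason1-7B",
# which per the notes above follows the Qwen2.5-VL-7B-Instruct architecture.
from collections import Counter

from transformers import Qwen2_5_VLForConditionalGeneration

model = Qwen2_5_VLForConditionalGeneration.from_pretrained(
    "nvidia/Cosmos-Reason1-7B", torch_dtype="auto"
)

totals = Counter()
for name, param in model.named_parameters():
    totals[name.split(".")[0]] += param.numel()  # e.g., vision tower vs. LLM

for component, n in sorted(totals.items()):
    print(f"{component}: {n:,}")
# The three component counts above should sum to roughly 8.29B parameters.
print(f"total: {sum(totals.values()):,}")
```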
Details
Pricing
| Dimension | Description | Cost/host/hour |
|---|---|---|
| ml.g5.12xlarge Inference (Batch), Recommended | Model inference on the ml.g5.12xlarge instance type, batch mode | $1.00 |
| ml.g6e.xlarge Inference (Real-Time), Recommended | Model inference on the ml.g6e.xlarge instance type, real-time mode | $1.00 |
| ml.g5.24xlarge Inference (Batch) | Model inference on the ml.g5.24xlarge instance type, batch mode | $1.00 |
| ml.g5.48xlarge Inference (Batch) | Model inference on the ml.g5.48xlarge instance type, batch mode | $1.00 |
| ml.g5.12xlarge Inference (Real-Time) | Model inference on the ml.g5.12xlarge instance type, real-time mode | $1.00 |
| ml.g5.24xlarge Inference (Real-Time) | Model inference on the ml.g5.24xlarge instance type, real-time mode | $1.00 |
| ml.g5.48xlarge Inference (Real-Time) | Model inference on the ml.g5.48xlarge instance type, real-time mode | $1.00 |
| ml.g6e.2xlarge Inference (Real-Time) | Model inference on the ml.g6e.2xlarge instance type, real-time mode | $1.00 |
| ml.g6e.4xlarge Inference (Real-Time) | Model inference on the ml.g6e.4xlarge instance type, real-time mode | $1.00 |
| ml.g6e.8xlarge Inference (Real-Time) | Model inference on the ml.g6e.8xlarge instance type, real-time mode | $1.00 |
Vendor refund policy
No refund
Delivery details
Amazon SageMaker model
An Amazon SageMaker model package is a pre-trained machine learning model ready to use without additional training. Use the model package to create a model on Amazon SageMaker for real-time inference or batch processing. Amazon SageMaker is a fully managed platform for building, training, and deploying machine learning models at scale.
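As an illustration, a subscribed model package can be deployed to a real-time endpoint with the SageMaker Python SDK. This is a minimal sketch, assuming a placeholder model package ARN (use the ARN from your Marketplace subscription) and one of the recommended instance types from the pricing table above.

```python
# Minimal sketch: deploy the subscribed model package to a real-time
# endpoint with the SageMaker Python SDK. The model package ARN below is
# a placeholder; use the ARN shown in your AWS Marketplace subscription.
import sagemaker
from sagemaker import ModelPackage

session = sagemaker.Session()
role = sagemaker.get_execution_role()  # assumes a SageMaker execution role

model = ModelPackage(
    role=role,
    model_package_arn="arn:aws:sagemaker:<region>:<account>:model-package/<cosmos-reason-package>",
    sagemaker_session=session,
)

# ml.g6e.xlarge is the recommended real-time instance type in the
# pricing table above; batch instance types work via batch transform.
predictor = model.deploy(
    initial_instance_count=1,
    instance_type="ml.g6e.xlarge",
    endpoint_name="cosmos-reason1-7b",
)
```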
Additional details
Inputs
- Summary
Cosmos-Reason1-7B accepts JSON requests via the /invocations API, with image or video content supplied either as a Base64-encoded data URI or as a public URL. As described in the Overview, the model converts the video into tokens with its vision encoder and projector, combines them with the text prompt, and reasons step-by-step to produce detailed, logical responses.
- Input MIME type
- application/json
Input data descriptions
The following table describes supported input data fields for real-time inference and batch transform.
| Field name | Description | Constraints | Required |
|---|---|---|---|
| model | The model name, e.g., "nvidia/cosmos-reason1-7b". | Type: String | Yes |
| messages | Conversation history, typically containing a single "user" message. | Type: Array of Objects | Yes |
| messages[].content | Must contain an object with "type" set to "image_url" or "video_url" and a matching "image_url" or "video_url" object. The image/video data can be provided as a Base64-encoded data URI (e.g., data:image/png;base64,...) for local files (especially air-gapped deployments), or as a direct public URL to the image/video file. | Type: Array of Objects | Yes |
| max_tokens | The maximum number of tokens to generate in the response. | Type: Integer | No |
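For example, the sketch below invokes a deployed endpoint with a Base64-encoded local video. The endpoint name, video path, and question are placeholders, and the text content part plus the response shape assume the OpenAI-style chat schema that this request format mirrors; adjust to your deployment.

```python
# Minimal sketch: invoke a deployed Cosmos Reason endpoint via the
# SageMaker runtime. Endpoint name and video path are placeholders.
import base64
import json

import boto3

runtime = boto3.client("sagemaker-runtime")

# Encode a local video as a Base64 data URI (suits air-gapped deployments).
with open("warehouse_clip.mp4", "rb") as f:
    video_uri = "data:video/mp4;base64," + base64.b64encode(f.read()).decode()

payload = {
    "model": "nvidia/cosmos-reason1-7b",
    "messages": [
        {
            "role": "user",
            "content": [
                {"type": "video_url", "video_url": {"url": video_uri}},
                {"type": "text", "text": "What should the robot do next?"},
            ],
        }
    ],
    "max_tokens": 512,
}

response = runtime.invoke_endpoint(
    EndpointName="cosmos-reason1-7b",
    ContentType="application/json",
    Body=json.dumps(payload),
)
# Assumes an OpenAI-style response body with a "choices" array.
print(json.loads(response["Body"].read())["choices"][0]["message"]["content"])
```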
Support
Vendor support
Free support via the NVIDIA NIM Developer Forum.
AWS infrastructure support
AWS Support is a one-on-one, fast-response support channel that is staffed 24x7x365 with experienced and technical support engineers. The service helps customers of all sizes and technical abilities to successfully utilize the products and features provided by Amazon Web Services.