    NVIDIA Cosmos Reason-1-7B

    Sold by: NVIDIA 
    Deployed on AWS
    NVIDIA Cosmos Reason is an open, customizable, 7B-parameter reasoning vision language model (VLM) for physical AI and robotics. It enables robots and vision AI agents to reason like humans, using prior knowledge, physics understanding, and common sense to understand and act in the real world.

    Overview

    This model understands space, time, and fundamental physics, and can serve as a planning model to reason what steps an embodied agent might take next.

    Cosmos Reason excels at navigating the long tail of diverse physical-world scenarios with spatial-temporal understanding. It is post-trained on physical common sense and embodied reasoning data using supervised fine-tuning and reinforcement learning, and applies chain-of-thought reasoning to understand world dynamics without human annotations.

    Given a video and a text prompt, the model first converts the video into tokens using a vision encoder and a translator module called a projector. These video tokens are combined with the text prompt and fed into the core dense Transformer language model, which reasons step by step and produces detailed, logical responses.
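    To make that pipeline concrete, here is a minimal sketch assuming the checkpoint is available on Hugging Face as a Qwen2.5-VL-compatible model (the model ID and file path below are assumptions for illustration; on AWS the model is instead invoked through the SageMaker JSON API described under Delivery details):

        # Minimal sketch of the encode -> project -> reason flow, assuming a
        # Qwen2.5-VL-compatible Hugging Face checkpoint (model ID is an assumption).
        import torch
        from PIL import Image
        from transformers import AutoProcessor, Qwen2_5_VLForConditionalGeneration

        model_id = "nvidia/Cosmos-Reason1-7B"  # assumption: verify the published ID
        processor = AutoProcessor.from_pretrained(model_id)
        model = Qwen2_5_VLForConditionalGeneration.from_pretrained(
            model_id, torch_dtype=torch.bfloat16, device_map="auto"
        )

        # One user turn: an image placeholder plus a text prompt.
        messages = [{
            "role": "user",
            "content": [
                {"type": "image"},
                {"type": "text", "text": "What is happening in this scene?"},
            ],
        }]
        image = Image.open("scene.png")  # hypothetical local frame

        # The processor tokenizes the prompt and prepares pixel values; during
        # generation the ViT encodes the image and the projector maps the
        # resulting vision tokens into the LLM embedding space alongside text.
        text = processor.apply_chat_template(messages, tokenize=False,
                                             add_generation_prompt=True)
        inputs = processor(text=[text], images=[image],
                           return_tensors="pt").to(model.device)

        out = model.generate(**inputs, max_new_tokens=200)
        print(processor.batch_decode(out[:, inputs["input_ids"].shape[1]:],
                                     skip_special_tokens=True)[0])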

    Cosmos Reason can be used for robotics and physical AI applications, including:

    • Data curation and annotation: Automate high-quality curation and annotation of massive, diverse training datasets.
    • Robot planning and reasoning: Act as the brain for deliberate, methodical decision-making in a robot vision language action (VLA) model. Robots such as humanoids and autonomous vehicles can interpret their environments and, given complex commands, break them down into tasks and execute them using common sense, even in unfamiliar settings.
    • Video analytics AI agents: Extract valuable insights and perform root-cause analysis on massive volumes of video data. These agents can analyze and understand recorded or live video streams across city and industrial operations.

    The model is ready for commercial use.

    Highlights

    • Architecture type: a multimodal LLM consisting of a Vision Transformer (ViT) vision encoder and a dense Transformer language model.
    • Network architecture: Qwen2.5-VL-7B-Instruct. Cosmos-Reason1-7B is post-trained from Qwen2.5-VL-7B-Instruct and follows the same model architecture.
    • Number of model parameters: Vision Transformer (ViT): 675.76M (675,759,104); language model (LLM): 7.07B (7,070,619,136); other components (output projection layer): 545.00M (544,997,376).
    • Computational load: cumulative compute of 3.2603016e+21 FLOPS. Estimated energy and emissions for model training: 16,658,432 kWh total; 5,380.674 tCO2e total emissions.
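    As a quick sanity check (this sum is not stated in the listing), the quoted component counts add up to roughly 8.29B parameters:

        # Summing the exact component counts quoted above.
        vit  = 675_759_104    # Vision Transformer (ViT)
        llm  = 7_070_619_136  # language model
        proj = 544_997_376    # output projection layer
        total = vit + llm + proj
        print(f"{total:,} parameters (~{total / 1e9:.2f}B)")
        # -> 8,291,375,616 parameters (~8.29B)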

    Details

    Sold by: NVIDIA
    Delivery method: Amazon SageMaker model
    Latest version
    Deployed on AWS


    Features and programs

    Financing for AWS Marketplace purchases

    AWS Marketplace now accepts line of credit payments through the PNC Vendor Finance program. This program is available to select AWS customers in the US, excluding NV, NC, ND, TN, & VT.

    Pricing

    NVIDIA Cosmos Reason-1-7B

    Pricing is based on actual usage, with charges varying according to how much you consume. Subscriptions have no end date and may be canceled at any time.
    Additional AWS infrastructure costs may apply. Use the AWS Pricing Calculator to estimate your infrastructure costs.

    Usage costs (20)

    Dimension                                  Description                                                           Cost/host/hour
    ml.g5.12xlarge Inference (Batch) *         Model inference on the ml.g5.12xlarge instance type, batch mode      $1.00
    ml.g6e.xlarge Inference (Real-Time) *      Model inference on the ml.g6e.xlarge instance type, real-time mode   $1.00
    ml.g5.24xlarge Inference (Batch)           Model inference on the ml.g5.24xlarge instance type, batch mode      $1.00
    ml.g5.48xlarge Inference (Batch)           Model inference on the ml.g5.48xlarge instance type, batch mode      $1.00
    ml.g5.12xlarge Inference (Real-Time)       Model inference on the ml.g5.12xlarge instance type, real-time mode  $1.00
    ml.g5.24xlarge Inference (Real-Time)       Model inference on the ml.g5.24xlarge instance type, real-time mode  $1.00
    ml.g5.48xlarge Inference (Real-Time)       Model inference on the ml.g5.48xlarge instance type, real-time mode  $1.00
    ml.g6e.2xlarge Inference (Real-Time)       Model inference on the ml.g6e.2xlarge instance type, real-time mode  $1.00
    ml.g6e.4xlarge Inference (Real-Time)       Model inference on the ml.g6e.4xlarge instance type, real-time mode  $1.00
    ml.g6e.8xlarge Inference (Real-Time)       Model inference on the ml.g6e.8xlarge instance type, real-time mode  $1.00

    * Recommended

    Vendor refund policy

    No refund


    Legal

    Vendor terms and conditions

    Upon subscribing to this product, you must acknowledge and agree to the terms and conditions outlined in the vendor's End User License Agreement (EULA).

    Content disclaimer

    Vendors are responsible for their product descriptions and other product content. AWS does not warrant that vendors' product descriptions or other product content are accurate, complete, reliable, current, or error-free.


    Delivery details

    Amazon SageMaker model

    An Amazon SageMaker model package is a pre-trained machine learning model ready to use without additional training. Use the model package to create a model on Amazon SageMaker for real-time inference or batch processing. Amazon SageMaker is a fully managed platform for building, training, and deploying machine learning models at scale.

    Deploy the model on Amazon SageMaker AI using the following options (a deployment sketch follows these options):
    • Real-time inference: Deploy the model as an API endpoint for your applications. When you send data to the endpoint, SageMaker processes it and returns results in the API response. The endpoint runs continuously until you delete it, and you're billed for software and SageMaker infrastructure costs while it runs. AWS Marketplace models don't support Amazon SageMaker Asynchronous Inference. For more information, see Deploy models for real-time inference.
    • Batch transform: Deploy the model to process batches of data stored in Amazon Simple Storage Service (Amazon S3). SageMaker runs the job, processes your data, and returns results to Amazon S3; when the job completes, SageMaker stops the model. You're billed for software and SageMaker infrastructure costs only during the batch job, and duration depends on your model, instance type, and dataset size. For more information, see Batch transform for inference with Amazon SageMaker AI.
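    As a sketch of the real-time option, the subscribed model package can be deployed with the SageMaker Python SDK roughly as follows (the model package ARN and endpoint name are placeholders, not values from this listing; ml.g6e.xlarge is the recommended real-time instance type from the pricing table):

        # Hedged sketch: deploy the Marketplace model package to a real-time
        # endpoint. The ARN below is a placeholder from your own subscription.
        import sagemaker
        from sagemaker import ModelPackage

        session = sagemaker.Session()
        role = sagemaker.get_execution_role()  # or an explicit IAM role ARN

        model = ModelPackage(
            role=role,
            model_package_arn="arn:aws:sagemaker:<region>:<acct>:model-package/<name>",  # placeholder
            sagemaker_session=session,
        )

        predictor = model.deploy(
            initial_instance_count=1,
            instance_type="ml.g6e.xlarge",      # recommended real-time type
            endpoint_name="cosmos-reason1-7b",  # hypothetical endpoint name
        )
        # Delete the endpoint when done to stop billing:
        # predictor.delete_endpoint()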

    Additional details

    Inputs

    Summary

    Cosmos-Reason1-7B accepts JSON requests via the /invocations API, with image or video content provided as a Base64-encoded data URI or a public URL. As described in the Overview, visual input is converted into tokens by the vision encoder and projector, combined with the text prompt, and passed to the language model for step-by-step reasoning.

    Input MIME type
    application/json

    {
      "model": "nvidia/cosmos-reason1-7b",
      "messages": [
        {
          "role": "system",
          "content": "Answer the question in the following format: <think>\nyour reasoning\n</think>\n\n<answer>\nyour answer\n</answer>."
        },
        {
          "role": "user",
          "content": [
            { "type": "text", "text": "What is in this image?" },
            { "type": "image_url", "image_url": { "url": image_data_url } }
          ]
        }
      ],
      "temperature": 0.6,
      "max_tokens": 200
    }

    No separate sample is provided for batch jobs; the same request format applies.
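    For reference, a minimal Python client for the request above might look like this (the endpoint name is hypothetical; the payload mirrors the sample, with the data URI built from a local file):

        # Hedged client sketch: send the sample request to a deployed endpoint.
        import base64
        import json

        import boto3

        # Build the Base64-encoded data URI the listing describes.
        with open("scene.png", "rb") as f:  # hypothetical local image
            image_data_url = "data:image/png;base64," + base64.b64encode(f.read()).decode()

        payload = {
            "model": "nvidia/cosmos-reason1-7b",
            "messages": [
                {"role": "system",
                 "content": ("Answer the question in the following format: "
                             "<think>\nyour reasoning\n</think>\n\n"
                             "<answer>\nyour answer\n</answer>.")},
                {"role": "user",
                 "content": [
                     {"type": "text", "text": "What is in this image?"},
                     {"type": "image_url", "image_url": {"url": image_data_url}},
                 ]},
            ],
            "temperature": 0.6,
            "max_tokens": 200,
        }

        runtime = boto3.client("sagemaker-runtime")
        resp = runtime.invoke_endpoint(
            EndpointName="cosmos-reason1-7b",  # hypothetical endpoint name
            ContentType="application/json",
            Body=json.dumps(payload),
        )
        print(json.loads(resp["Body"].read()))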

    Input data descriptions

    The following fields are supported for real-time inference and batch transform.

    model (String, required)
    The specific model name, e.g., "nvidia/cosmos-reason1-7b".

    messages (Array of Objects, required)
    Conversation history, typically containing a single "user" message.

    messages[].content (Array of Objects, required)
    Must contain an object with "type" set to "image_url" or "video_url" and a matching "image_url" or "video_url" object. The image/video data can be provided as a Base64-encoded data URI (e.g., data:image/png;base64,...) for local files (especially air-gapped deployments), or as a direct public URL to the online image/video file; see the sketch after this list.

    max_tokens (Integer, optional)
    The maximum number of tokens to generate in the response.
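    As a sketch of the data-URI form described for messages[].content, a local video can be encoded like this (the helper name is illustrative):

        import base64

        def to_data_uri(path: str, mime: str) -> str:
            """Encode a local file as a data URI for image_url/video_url fields."""
            with open(path, "rb") as f:
                return f"data:{mime};base64," + base64.b64encode(f.read()).decode()

        # Video variant of the content array from the sample request.
        content = [
            {"type": "text", "text": "Describe what happens in this clip."},
            {"type": "video_url", "video_url": {"url": to_data_uri("clip.mp4", "video/mp4")}},
        ]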

    Support

    Vendor support

    Free support is available via the NVIDIA NIM Developer Forum.

    AWS infrastructure support

    AWS Support is a one-on-one, fast-response support channel that is staffed 24x7x365 with experienced and technical support engineers. The service helps customers of all sizes and technical abilities to successfully utilize the products and features provided by Amazon Web Services.


    Customer reviews

    Ratings and reviews

    0 ratings

    5 star: 0%
    4 star: 0%
    3 star: 0%
    2 star: 0%
    1 star: 0%

    0 AWS reviews
    No customer reviews yet

    Be the first to review this product. We've partnered with PeerSpot to gather customer feedback. You can share your experience by writing or recording a review, or scheduling a call with a PeerSpot analyst.