
Sold by: Allen Institute for AI
Open data | Deployed on AWS
Overview
ROPES (Reasoning Over Paragraph Effects in Situations) contains 14K QA pairs over 1.7K paragraphs, split into train (10K QAs), development (1.6K QAs), and a hidden test partition (1.7K QAs).
Features and programs
Open Data Sponsorship Program
This dataset is part of the Open Data Sponsorship Program, an AWS program that covers the cost of storage for publicly available high-value cloud-optimized datasets.
Pricing
This is a publicly available dataset. No subscription is required.
Legal
Content disclaimer
Vendors are responsible for their product descriptions and other product content. AWS does not warrant that vendors' product descriptions or other product content are accurate, complete, reliable, current, or error-free.
Delivery details
AWS Data Exchange (ADX)
AWS Data Exchange is a service that helps AWS customers easily share and manage data entitlements from other organizations at scale.
Open data resources
Available with or without an AWS account.
- How to use: To access these resources, reference the Amazon Resource Name (ARN) using the AWS Command Line Interface (CLI); a programmatic access sketch follows this list.
- Description: Project data files in a public bucket
- Resource type: S3 bucket
- Amazon Resource Name (ARN): arn:aws:s3:::ai2-public-datasets
- AWS region: us-west-2
- AWS CLI access (no AWS account required): aws s3 ls --no-sign-request s3://ai2-public-datasets/
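For programmatic access, the following is a minimal Python sketch using boto3 with unsigned (anonymous) requests against the public bucket. The ropes/ key prefix and the commented-out archive name are illustrative assumptions; list the bucket first to confirm the actual layout.

    import boto3
    from botocore import UNSIGNED
    from botocore.config import Config

    # Anonymous (unsigned) client for the public bucket in us-west-2
    s3 = boto3.client(
        "s3",
        region_name="us-west-2",
        config=Config(signature_version=UNSIGNED),
    )

    # List objects under an assumed "ropes/" prefix; adjust after inspecting the bucket
    resp = s3.list_objects_v2(Bucket="ai2-public-datasets", Prefix="ropes/")
    for obj in resp.get("Contents", []):
        print(obj["Key"], obj["Size"])

    # Download a single object (hypothetical key; substitute a real one from the listing above)
    # s3.download_file("ai2-public-datasets", "ropes/ropes.tar.gz", "ropes.tar.gz")

Equivalently, the AWS CLI can copy files without credentials, for example: aws s3 cp --no-sign-request --recursive s3://ai2-public-datasets/ropes/ . (again assuming a ropes/ prefix).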
How to cite
Reasoning Over Paragraph Effects in Situations (ROPES) was accessed on DATE from https://registry.opendata.aws/allenai-ropes .
Similar products
- Medical LLM reasoning model optimized for clinical reasoning, designed to elaborate its thought process by considering multiple hypotheses, evaluating evidence systematically, and explaining conclusions transparently.
- Vision language model that excels at understanding the physical world using structured reasoning over videos or images.
- NVIDIA Cosmos Reason: an open, customizable, 7B-parameter reasoning vision language model (VLM) for physical AI and robotics. It enables robots and vision AI agents to reason like humans, using prior knowledge, physics understanding, and common sense to understand and act in the real world.
- RikAI is a suite of purpose-built foundation models for document processing and multi-modal reasoning. The APIs are ready to integrate into your existing infrastructure and are built for accuracy and effectiveness.
- The open version of ESM3: a frontier multimodal generative model that reasons over the sequences, structures, and functions of proteins.