Overview
Rerankers are neural networks that predict relevance scores between a query and a set of documents and rank the documents by those scores. They are used to refine search results in semantic search/retrieval systems and in retrieval-augmented generation (RAG). rerank-2.5-lite is a reranker optimized for both latency and quality, delivering a 7.16% improvement in retrieval accuracy over Cohere Rerank v3.5 across 93 datasets. It also outperformed Cohere Rerank v3.5 by 10.36% on the Massive Instructed Retrieval Benchmark (MAIR). The model supports a combined context length of 32K tokens per query-document pair, including up to 8K tokens for the query, enabling more accurate retrieval over longer documents. Additionally, rerank-2.5-lite supports instruction following, allowing users to guide relevance scoring through natural language prompts. Learn more about rerank-2.5-lite here: https://blog.voyageai.com/2025/08/11/rerank-2-5
Highlights
- Optimized for both latency and quality, delivering a 7.16% improvement in retrieval accuracy over Cohere Rerank v3.5 across 93 datasets.
- Supports a combined context length of 32K tokens per query-document pair, including up to 8K tokens for the query, enabling more accurate retrieval over longer documents.
- Supports instruction following, allowing users to guide relevance scoring through natural language prompts.
Details
Pricing
Free trial
| Dimension | Description | Cost/host/hour |
|---|---|---|
| ml.g5.2xlarge Inference (Batch), Recommended | Model inference on the ml.g5.2xlarge instance type, batch mode | $0.00 |
| ml.p4d.24xlarge Inference (Real-Time), Recommended | Model inference on the ml.p4d.24xlarge instance type, real-time mode | $35.92 |
| ml.g5.xlarge Inference (Real-Time) | Model inference on the ml.g5.xlarge instance type, real-time mode | $3.03 |
| ml.g5.2xlarge Inference (Real-Time) | Model inference on the ml.g5.2xlarge instance type, real-time mode | $2.82 |
| ml.g5.4xlarge Inference (Real-Time) | Model inference on the ml.g5.4xlarge instance type, real-time mode | $4.06 |
| ml.g5.8xlarge Inference (Real-Time) | Model inference on the ml.g5.8xlarge instance type, real-time mode | $6.12 |
| ml.g6.xlarge Inference (Real-Time) | Model inference on the ml.g6.xlarge instance type, real-time mode | $2.25 |
| ml.g6.2xlarge Inference (Real-Time) | Model inference on the ml.g6.2xlarge instance type, real-time mode | $2.44 |
| ml.g6.4xlarge Inference (Real-Time) | Model inference on the ml.g6.4xlarge instance type, real-time mode | $3.31 |
| ml.g6.8xlarge Inference (Real-Time) | Model inference on the ml.g6.8xlarge instance type, real-time mode | $5.04 |
Vendor refund policy
Refunds will be processed under the conditions specified in the EULA. Please contact aws-marketplace@mongodb.com for further assistance.
Legal
Vendor terms and conditions
Content disclaimer
Delivery details
Amazon SageMaker model
An Amazon SageMaker model package is a pre-trained machine learning model ready to use without additional training. Use the model package to create a model on Amazon SageMaker for real-time inference or batch processing. Amazon SageMaker is a fully managed platform for building, training, and deploying machine learning models at scale.
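As an illustrative sketch of that workflow, the parameters below could be passed to SageMaker's `create_model` API to create a model from the package; the ARNs, role, and model name are hypothetical placeholders, not values from this listing:

```python
# Parameters for creating a SageMaker model from a marketplace model package.
# The model name, role ARN, and package ARN below are hypothetical placeholders.
create_model_params = {
    "ModelName": "rerank-2-5-lite",
    "ExecutionRoleArn": "arn:aws:iam::123456789012:role/SageMakerExecutionRole",
    "PrimaryContainer": {
        # The model package ARN comes from your AWS Marketplace subscription.
        "ModelPackageName": "arn:aws:sagemaker:us-east-1:123456789012:model-package/example"
    },
    # Marketplace model packages typically run with network isolation enabled.
    "EnableNetworkIsolation": True,
}

# A real deployment would then call:
#   boto3.client("sagemaker").create_model(**create_model_params)
# followed by create_endpoint_config and create_endpoint for real-time inference.
print(sorted(create_model_params))
```

From there, the usual SageMaker flow applies: an endpoint configuration referencing one of the instance types in the pricing table above, then an endpoint for real-time inference (or a batch transform job for batch mode).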
Version release notes
MongoDB is excited to announce the initial release of rerank-2.5-lite.
Additional details
Inputs
- Summary
- query: str - The query as a string. Maximum of 8K tokens.
- documents: List[str] - The documents to be reranked as a list of strings. Maximum of 1K documents.
- top_k: int, optional (default=None) - The number of most relevant documents to return. If not specified, the reranking results of all documents will be returned.
- truncation: bool, optional (default=True) - Whether to truncate inputs that exceed the context length. If True, over-length inputs are truncated; if False, an error is raised for any text that exceeds the context length.
- Limitations for input type
- Maximum context length: 32,000 tokens. Truncation is required if inputs exceed this limit.
- Input MIME type
- application/json
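As a minimal sketch, a request body matching the fields above can be assembled and serialized as JSON (the query and document strings here are illustrative placeholders):

```python
import json

# Build a rerank request body matching the input schema above.
# The query and document strings are illustrative placeholders.
payload = {
    "query": "What is retrieval-augmented generation?",  # up to 8K tokens
    "documents": [                                       # up to 1,000 documents
        "RAG combines a retriever with a generator model.",
        "Rerankers score query-document pairs for relevance.",
        "SageMaker hosts models for real-time and batch inference.",
    ],
    "top_k": 2,          # optional: return only the 2 most relevant documents
    "truncation": True,  # optional: truncate over-length inputs instead of erroring
}

# Serialize for the application/json MIME type expected by the endpoint.
body = json.dumps(payload)
print(body)
```

This JSON string would be sent as the request body when invoking the deployed SageMaker endpoint with content type `application/json`.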
Input data descriptions
The following table describes supported input data fields for real-time inference and batch transform.
| Field name | Description | Constraints | Required |
|---|---|---|---|
| query | The query as a string. | The query can contain a maximum of 8,000 tokens. | Yes |
| documents | The documents to be reranked as a list of strings. | The number of documents cannot exceed 1,000. The sum of the number of tokens in the query and the number of tokens in any single document cannot exceed 32,000. The total number of tokens is defined as the number of query tokens × the number of documents + the sum of the number of tokens in all documents. | Yes |
| top_k | The number of most relevant documents to return. If not specified, the reranking results of all documents will be returned. | Type: Integer | No |
| truncation | Whether to truncate the input to satisfy the context length limit on the query and the documents. | Type: Boolean. Default value: True | No |
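The token accounting in the constraints column can be sketched as follows; `count_tokens` is a stand-in for the model's real tokenizer, approximated here by whitespace splitting:

```python
# Sketch of the token-budget checks described in the constraints column.
# count_tokens is a placeholder; a real deployment would use the model's tokenizer.
def count_tokens(text: str) -> int:
    return len(text.split())  # crude whitespace approximation


def check_limits(query: str, documents: list[str],
                 max_pair_tokens: int = 32_000,
                 max_query_tokens: int = 8_000,
                 max_documents: int = 1_000) -> int:
    q = count_tokens(query)
    assert q <= max_query_tokens, "query exceeds 8,000 tokens"
    assert len(documents) <= max_documents, "more than 1,000 documents"
    doc_tokens = [count_tokens(d) for d in documents]
    # The query plus any single document must fit in the 32K context window.
    for t in doc_tokens:
        assert q + t <= max_pair_tokens, "query + document exceeds 32,000 tokens"
    # Total tokens = query tokens * number of documents + sum of document tokens.
    return q * len(documents) + sum(doc_tokens)


total = check_limits("neural reranking", ["doc one here", "a second longer document"])
print(total)  # 2 query tokens * 2 documents + (3 + 4) document tokens = 11
```

With a real tokenizer substituted in, the same arithmetic determines whether truncation will be triggered (or, with `truncation=False`, whether the request will error).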
Support
Vendor support
Please email us at aws-marketplace@mongodb.com for inquiries and customer support.
AWS infrastructure support
AWS Support is a one-on-one, fast-response support channel that is staffed 24x7x365 with experienced and technical support engineers. The service helps customers of all sizes and technical abilities to successfully utilize the products and features provided by Amazon Web Services.