
Overview
The Cohere Rerank v3.5 endpoint enables businesses to significantly improve their search and retrieval-augmented generation systems. The model takes a query and a list of potentially relevant documents, and returns the documents sorted by semantic relevance to the provided query. As an intelligent cross-encoding AI model, it understands the meaning behind enterprise data and user questions. Rerank v3.5 can be added to existing systems with just a few lines of code, delivers leading performance across more than 100 languages, and is uniquely capable of understanding complex information that requires reasoning. Please note that as of July 2025 the minimum requirements to deploy this model are NVIDIA driver version 535 and CUDA version 12.2.
Highlights
- Rerank v3.5 is uniquely capable of understanding complex documents and queries. This leads to more accurate search results when user questions have multiple aspects and require reasoning. Rerank v3.5 also offers strong performance on semi-structured data such as code, tables, and JSON documents. These attributes make the model ideal for global organizations in industries such as Finance, Healthcare, Energy, Government, and Manufacturing.
- Rerank v3.5 can be added to existing search and retrieval-augmented generation (RAG) systems with just a few lines of code (see the sketch after this list). This ease of implementation makes it simple to boost semantic understanding and improve search results. Rerank v3.5 is also highly efficient in terms of throughput and is capable of satisfying the demanding requirements of large organizations.
- Rerank v3.5 offers leading multilingual performance in over 100 languages, including but not limited to: Arabic, Chinese, English, French, German, Hindi, Japanese, Korean, Portuguese, Russian, and Spanish. This is useful for global organizations who operate across various languages and require a performant AI model to improve their search systems.
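To make the "few lines of code" claim above concrete, here is a rough sketch of where a rerank step slots into an existing RAG pipeline: retrieve a broad candidate set, rerank it against the user query, and keep only the top results for the generator. The `retrieve_candidates`, `call_rerank_endpoint`, and `generate_answer` callables are hypothetical placeholders for your own retrieval, endpoint, and generation code; they are not part of this listing.

```python
from typing import Callable, Dict, List

def rerank_then_generate(
    query: str,
    retrieve_candidates: Callable[[str], List[str]],            # your existing retriever (BM25, vector DB, ...)
    call_rerank_endpoint: Callable[[str, List[str]], List[Dict]],  # returns [{"index": int, "relevance_score": float}, ...]
    generate_answer: Callable[[str, List[str]], str],            # your existing LLM call
    top_n: int = 5,
) -> str:
    # 1. Retrieve a broad candidate set with the search system you already have.
    candidates = retrieve_candidates(query)

    # 2. Rerank the candidates against the user query. Results come back sorted
    #    by relevance_score, each carrying the index of a document in `candidates`.
    results = call_rerank_endpoint(query, candidates)

    # 3. Keep only the most relevant documents and hand them to the generator.
    top_docs = [candidates[r["index"]] for r in results[:top_n]]
    return generate_answer(query, top_docs)
```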
Details
Pricing
Free trial
| Dimension | Description | Cost/host/hour |
|---|---|---|
| ml.g5.2xlarge Inference (Batch), Recommended | Model inference on the ml.g5.2xlarge instance type, batch mode | $3.50 |
| ml.g5.2xlarge Inference (Real-Time), Recommended | Model inference on the ml.g5.2xlarge instance type, real-time mode | $3.50 |
| ml.g6.2xlarge Inference (Real-Time) | Model inference on the ml.g6.2xlarge instance type, real-time mode | $3.50 |
| ml.g5.xlarge Inference (Real-Time) | Model inference on the ml.g5.xlarge instance type, real-time mode | $3.50 |
| ml.g6.xlarge Inference (Real-Time) | Model inference on the ml.g6.xlarge instance type, real-time mode | $3.50 |
Vendor refund policy
No refunds.
Legal
Vendor terms and conditions
Content disclaimer
Delivery details
Amazon SageMaker model
An Amazon SageMaker model package is a pre-trained machine learning model ready to use without additional training. Use the model package to create a model on Amazon SageMaker for real-time inference or batch processing. Amazon SageMaker is a fully managed platform for building, training, and deploying machine learning models at scale.
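As a minimal sketch of creating and deploying the model package with the SageMaker Python SDK, assuming you have already subscribed to this listing: the model package ARN, IAM role, and endpoint name below are placeholders to replace with your own values.

```python
import sagemaker
from sagemaker import ModelPackage

session = sagemaker.Session()

# Placeholder values: substitute the model package ARN from your subscription,
# an IAM role with SageMaker permissions, and an endpoint name of your choice.
model_package_arn = "arn:aws:sagemaker:<region>:<account>:model-package/<cohere-rerank-v3-5-package>"
role = "arn:aws:iam::<account>:role/<sagemaker-execution-role>"

model = ModelPackage(
    role=role,
    model_package_arn=model_package_arn,
    sagemaker_session=session,
)

# Deploy a real-time endpoint on one of the supported instance types from the pricing table.
model.deploy(
    initial_instance_count=1,
    instance_type="ml.g5.2xlarge",
    endpoint_name="cohere-rerank-v3-5",
)
```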
Version release notes
- Request priority: Added a priority field to chat, embed, and rerank requests. High-priority requests are handled first and dropped last when the system is under load, ensuring lower latency and higher availability for high-priority requests when there is a mix of workloads with different latency requirements (e.g., real-time user requests and background batch jobs).
- Removed padded tokens from sparse embedding responses to reduce unnecessary computation and enhance accuracy for token-sparse inputs.
- Enhanced similarity calculation: Adopted cosine similarity (cosineSim) for more precise relevance scoring in embedding comparisons.
- Validated stability: Completed end-to-end testing in production and staging environments to ensure reliability.
- Temporary parameter limit: Restricted max_n to optimize performance during initial rollout (to be adjusted in a future update).
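For reference on the cosine similarity (cosineSim) scoring mentioned above, the snippet below applies the standard formula to two toy embedding vectors; it is a generic illustration, not the service's internal implementation.

```python
import numpy as np

def cosine_sim(a: np.ndarray, b: np.ndarray) -> float:
    # cos(a, b) = (a · b) / (||a|| * ||b||), ranging from -1 (opposite) to 1 (same direction).
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

# Example with two toy embedding vectors.
print(cosine_sim(np.array([0.1, 0.3, 0.5]), np.array([0.2, 0.1, 0.4])))
```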
Additional details
Inputs
- Summary
The model accepts JSON requests that specify the input texts to be reranked. The maximum number of documents that can be passed into a single rerank call is 1,000. Note: The documentation below is for Version 2 of the Rerank API.
Request:
{ "model": "...", "query": "...?", "documents": ["..."], "max_tokens_per_doc": 1, "top_n": 100 }
Response:
{ "results": [ { "index": 0, "relevance_score": 0.0048297215 } ] }
- Input MIME type
- application/json
Input data descriptions
The following table describes supported input data fields for real-time inference and batch transform.
| Field name | Description | Constraints | Required |
|---|---|---|---|
| query | The search query. Queries longer than 2,000 tokens are automatically truncated. | Type: FreeText | Yes |
| documents | A list of texts that will be compared to the `query`. For optimal performance we recommend against sending more than 1,000 documents in a single request. **Note**: long documents will automatically be truncated to the value of `max_tokens_per_doc`. **Note**: structured data should be formatted as YAML strings for best performance. | Type: FreeText | No |
| top_n | Limits the number of returned rerank results to the specified value. If not passed, all rerank results will be returned. | Type: Integer; Minimum: 1 | No |
| max_tokens_per_doc | Long documents will be automatically truncated to the specified number of tokens. Compatibility: `max_tokens_per_doc` was introduced in Rerank API Version 2 (`"api_version": 2`). | Default value: 4096; Type: Integer; Minimum: 1; Maximum: 40000 | No |
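Putting the fields above together, here is a minimal request sketch against a deployed real-time endpoint using boto3. The endpoint name and document texts are placeholders, the `"api_version": 2` field follows the compatibility note in the table, and fields not listed in the table (such as the `model` field in the request example above) are omitted; treat this as a sketch rather than a definitive client.

```python
import json
import boto3

runtime = boto3.client("sagemaker-runtime")

payload = {
    "api_version": 2,  # Rerank API Version 2, per the max_tokens_per_doc compatibility note above
    "query": "What does the vendor refund policy say?",
    "documents": [
        "No refunds are offered for this product.",
        "Support is available at support+aws@cohere.com.",
        # Structured data is best sent as a YAML-formatted string:
        "policy: refunds\nvalue: none\nregion: global",
    ],
    "top_n": 2,
    "max_tokens_per_doc": 4096,
}

response = runtime.invoke_endpoint(
    EndpointName="cohere-rerank-v3-5",  # placeholder endpoint name
    ContentType="application/json",
    Body=json.dumps(payload),
)

body = json.loads(response["Body"].read())
# Each result carries the index of a document in the request plus a relevance_score,
# sorted from most to least relevant.
for result in body["results"]:
    print(result["relevance_score"], payload["documents"][result["index"]])
```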
Resources
Vendor resources
Support
Vendor support
Contact us at support+aws@cohere.com or at https://cohere.com/contact-sales
AWS infrastructure support
AWS Support is a one-on-one, fast-response support channel that is staffed 24x7x365 with experienced and technical support engineers. The service helps customers of all sizes and technical abilities to successfully utilize the products and features provided by Amazon Web Services.