Overview
ZeroEntropy delivers the fastest and most accurate agentic retrieval engine on the market. The ZeroEntropy rerankers (zerank-1 and zerank-1-small) boost the precision of any semantic or lexical search system. Our models rescore and reorder results to surface the most relevant answers, reducing noise, improving accuracy, and cutting hallucinations in RAG pipelines and AI agents across verticals. Trained with a novel Elo-based pipeline, zerank-1 is an open-weight model that consistently outperforms rerankers from Cohere and Salesforce, and even LLM-based rerankers from Google and OpenAI, delivering higher accuracy at lower latency and cost.
Highlights
- Unmatched reranking accuracy across verticals: legal, manufacturing, financial, medical, STEM, conversational, and code search.
- Proven performance edge - outperforms Cohere rerank-3.5, Salesforce LlamaRank, and LLM rerankers in benchmarks.
- Enterprise-ready - fast API integration, open weights available on Hugging Face, transparent pricing at half the market cost.
Details
Pricing
Free trial
| Dimension | Description | Cost per host per hour (USD) |
|---|---|---|
| ml.g6e.xlarge Inference (Real-Time), Recommended | Model inference on the ml.g6e.xlarge instance type, real-time mode | $0.93 |
| ml.g6.xlarge Inference (Batch), Recommended | Model inference on the ml.g6.xlarge instance type, batch mode | $0.403 |
| ml.g5.xlarge Inference (Batch) | Model inference on the ml.g5.xlarge instance type, batch mode | $0.503 |
| ml.g5.2xlarge Inference (Batch) | Model inference on the ml.g5.2xlarge instance type, batch mode | $0.606 |
| ml.g5.4xlarge Inference (Batch) | Model inference on the ml.g5.4xlarge instance type, batch mode | $0.812 |
| ml.g5.8xlarge Inference (Batch) | Model inference on the ml.g5.8xlarge instance type, batch mode | $1.22 |
| ml.g5.12xlarge Inference (Batch) | Model inference on the ml.g5.12xlarge instance type, batch mode | $2.84 |
| ml.g5.16xlarge Inference (Batch) | Model inference on the ml.g5.16xlarge instance type, batch mode | $2.05 |
| ml.g5.24xlarge Inference (Batch) | Model inference on the ml.g5.24xlarge instance type, batch mode | $4.07 |
| ml.g5.48xlarge Inference (Batch) | Model inference on the ml.g5.48xlarge instance type, batch mode | $8.14 |
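As a rough sketch of how these rates add up, the helper below multiplies the per-host-hour prices from the table above by the number of hosts and hours. The function name and structure are illustrative, not part of the listing, and any separate AWS infrastructure charges are outside the scope of this sketch.

```python
# Hourly software cost per host, copied from the pricing table above (USD).
HOURLY_RATE = {
    "ml.g6e.xlarge": 0.93,
    "ml.g6.xlarge": 0.403,
    "ml.g5.xlarge": 0.503,
    "ml.g5.2xlarge": 0.606,
    "ml.g5.4xlarge": 0.812,
    "ml.g5.8xlarge": 1.22,
    "ml.g5.12xlarge": 2.84,
    "ml.g5.16xlarge": 2.05,
    "ml.g5.24xlarge": 4.07,
    "ml.g5.48xlarge": 8.14,
}

def estimate_cost(instance_type: str, hours: float, hosts: int = 1) -> float:
    """Estimated software cost (USD) for running `hosts` hosts for `hours` hours."""
    return HOURLY_RATE[instance_type] * hours * hosts

# Example: a 3-hour batch job on the recommended ml.g6.xlarge instance
# costs roughly 3 x $0.403, i.e. about $1.21 in software charges.
```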
Vendor refund policy
Please contact support@zeroentropy.dev or reach us on our community Slack (https://go.zeroentropy.dev/slack).
Legal
Vendor terms and conditions
Content disclaimer
Delivery details
Amazon SageMaker model
An Amazon SageMaker model package is a pre-trained machine learning model ready to use without additional training. Use the model package to create a model on Amazon SageMaker for real-time inference or batch processing. Amazon SageMaker is a fully managed platform for building, training, and deploying machine learning models at scale.
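A minimal sketch of deploying the model package to a real-time endpoint with the SageMaker Python SDK is shown below. The ARNs and endpoint name are placeholders you would replace with your own values; ml.g6e.xlarge is the instance type the pricing table marks as recommended for real-time inference.

```python
def deploy_reranker(model_package_arn: str, role_arn: str,
                    endpoint_name: str,
                    instance_type: str = "ml.g6e.xlarge"):
    """Deploy a subscribed SageMaker model package to a real-time endpoint.

    All ARNs and the endpoint name are placeholders; ml.g6e.xlarge is the
    instance type this listing recommends for real-time inference.
    """
    # Imported inside the function so the sketch can be read (and the
    # function defined) without the sagemaker SDK installed.
    from sagemaker import ModelPackage

    model = ModelPackage(role=role_arn, model_package_arn=model_package_arn)
    predictor = model.deploy(
        initial_instance_count=1,
        instance_type=instance_type,
        endpoint_name=endpoint_name,
    )
    return predictor
```

For batch workloads, the same `ModelPackage` object can instead create a batch transform job on one of the batch-mode instance types listed in the pricing table.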
Version release notes
Updated the inference image for lower latency and higher throughput, and to fully utilize multi-GPU instance types.
Additional details
Inputs
- Summary
The input to the reranker is a query and a batch of documents to rerank.
```json
{
  "query": "<string>",
  "documents": ["<string>"]
}
```
- Limitations for input type
- There is no hard limit on input size. For fast response times, we recommend client-side limits of no more than 1024 documents and no more than 5 MB of UTF-8 bytes in a single request; very large payloads can increase latency for other API requests routed to the same node.
- Input MIME type
- application/json
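The request body and recommended limits above can be wrapped in a small client-side helper, sketched below. The function name and error messages are illustrative; only the JSON shape and the 1024-document / 5 MB recommendations come from this listing, and the response schema is not specified here.

```python
import json

MAX_DOCUMENTS = 1024                  # recommended client-side cap (see Limitations)
MAX_PAYLOAD_BYTES = 5 * 1024 * 1024   # recommended ~5 MB of UTF-8 per request

def build_rerank_payload(query: str, documents: list[str]) -> bytes:
    """Serialize a rerank request and enforce the recommended client-side limits."""
    if len(documents) > MAX_DOCUMENTS:
        raise ValueError(
            f"request has {len(documents)} documents; "
            f"recommended maximum is {MAX_DOCUMENTS}"
        )
    body = json.dumps({"query": query, "documents": documents}).encode("utf-8")
    if len(body) > MAX_PAYLOAD_BYTES:
        raise ValueError("payload exceeds the recommended 5 MB limit")
    return body
```

The resulting bytes can be sent to a deployed endpoint with the boto3 `sagemaker-runtime` client's `invoke_endpoint` call, passing `ContentType="application/json"` and the payload as `Body`.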
Support
Vendor support
Contact support@zeroentropy.dev
AWS infrastructure support
AWS Support is a one-on-one, fast-response support channel that is staffed 24x7x365 with experienced and technical support engineers. The service helps customers of all sizes and technical abilities to successfully utilize the products and features provided by Amazon Web Services.