
Overview
LMCache lets LLMs prefill each text only once. By storing the KV caches of all reusable texts, LMCache can reuse the KV cache of any previously seen text (not necessarily a prefix) in any serving engine instance. It thus reduces prefill delay, i.e., time to first token (TTFT), and saves precious GPU cycles.
Highlights
- By combining LMCache with vLLM, LMCache achieves 3-10x delay savings and GPU cycle reduction in many LLM use cases, including multi-round QA and RAG (see the integration sketch below).
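As an illustration only, here is a minimal sketch of pairing LMCache with vLLM's offline API. The connector name, the `KVTransferConfig.from_cli` call, and the model are assumptions drawn from common LMCache/vLLM integration examples and may differ across versions.

```python
# Minimal sketch (assumed API): route vLLM's KV cache through LMCache
# so that reused text is served from cache instead of being prefilled again.
from vllm import LLM, SamplingParams
from vllm.config import KVTransferConfig

# Assumed connector/config names; check your LMCache/vLLM versions.
ktc = KVTransferConfig.from_cli(
    '{"kv_connector": "LMCacheConnector", "kv_role": "kv_both"}'
)

llm = LLM(
    model="mistralai/Mistral-7B-Instruct-v0.2",  # assumed example model
    kv_transfer_config=ktc,
    gpu_memory_utilization=0.8,
)

# The second query shares the long context with the first, so its KV cache
# is reused and its prefill (TTFT) is much cheaper.
shared_context = "..."  # a long shared document
params = SamplingParams(temperature=0.0, max_tokens=64)
print(llm.generate([shared_context + "\nQ1: ..."], params))
print(llm.generate([shared_context + "\nQ2: ..."], params))
```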
Details
Pricing
| Dimension | Description | Cost/host/hour |
|---|---|---|
| ml.p3.2xlarge Inference (Batch), Recommended | Model inference on the ml.p3.2xlarge instance type, batch mode | $0.00 |
| ml.g6.12xlarge Inference (Real-Time), Recommended | Model inference on the ml.g6.12xlarge instance type, real-time mode | $0.00 |
| ml.g6.16xlarge Inference (Real-Time) | Model inference on the ml.g6.16xlarge instance type, real-time mode | $0.00 |
| ml.g6.24xlarge Inference (Real-Time) | Model inference on the ml.g6.24xlarge instance type, real-time mode | $0.00 |
| ml.g6.2xlarge Inference (Real-Time) | Model inference on the ml.g6.2xlarge instance type, real-time mode | $0.00 |
| ml.p4d.24xlarge Inference (Real-Time) | Model inference on the ml.p4d.24xlarge instance type, real-time mode | $0.00 |
| ml.g6.4xlarge Inference (Real-Time) | Model inference on the ml.g6.4xlarge instance type, real-time mode | $0.00 |
| ml.g6.48xlarge Inference (Real-Time) | Model inference on the ml.g6.48xlarge instance type, real-time mode | $0.00 |
| ml.g6.8xlarge Inference (Real-Time) | Model inference on the ml.g6.8xlarge instance type, real-time mode | $0.00 |
| ml.g6.xlarge Inference (Real-Time) | Model inference on the ml.g6.xlarge instance type, real-time mode | $0.00 |
Vendor refund policy
This product is offered for free. If you have any questions, please contact us for clarification.
Delivery details
Amazon SageMaker model
An Amazon SageMaker model package is a pre-trained machine learning model ready to use without additional training. Use the model package to create a model on Amazon SageMaker for real-time inference or batch processing. Amazon SageMaker is a fully managed platform for building, training, and deploying machine learning models at scale.
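For illustration, the snippet below sketches deploying a subscribed model package with the SageMaker Python SDK; the IAM role, model package ARN, instance type choice, and endpoint name are placeholder assumptions.

```python
# Sketch: create a model from an AWS Marketplace model package and deploy it
# to a real-time endpoint. ARN, role, and endpoint name are placeholders.
import sagemaker
from sagemaker import ModelPackage

session = sagemaker.Session()
role = "arn:aws:iam::123456789012:role/MySageMakerRole"  # placeholder
model_package_arn = (  # placeholder: copy the ARN from your subscription
    "arn:aws:sagemaker:us-east-1:123456789012:model-package/lmcache-example"
)

model = ModelPackage(
    role=role,
    model_package_arn=model_package_arn,
    sagemaker_session=session,
)

# Any of the listed real-time instance types works; ml.g6.12xlarge is the
# recommended one in the pricing table above.
predictor = model.deploy(
    initial_instance_count=1,
    instance_type="ml.g6.12xlarge",
    endpoint_name="lmcache-demo",  # placeholder
)
```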
Version release notes
- Use LMCache Docker image version 0.1.4
- Add max response token length selection
- Differentiate context from input
Additional details
Inputs
- Summary
- Use "input" to write your question.
- Use "type" to select turning on lmcache or not.
- Use "context" to write your context.
- Use "length" to select the maximum response token.
Input data descriptions
The following table describes the supported input data fields for real-time inference and batch transform; an example request is shown after the table.
| Field name | Description | Constraints | Required |
|---|---|---|---|
| Input text | The input should be a JSON object containing "input", "context", "type", and "length". | Type: FreeText | Yes |
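As a hedged example, the request below shows how the four fields might be assembled and sent to a deployed real-time endpoint with boto3; the endpoint name and the accepted value for "type" are assumptions, so check the vendor resources for the exact values.

```python
# Sketch: invoke the real-time endpoint with the documented JSON fields.
# Endpoint name and the "type" value are assumptions, not vendor-confirmed.
import json
import boto3

runtime = boto3.client("sagemaker-runtime")

payload = {
    "input": "What is the main finding of the report?",  # your question
    "context": "...long shared document text...",        # your context
    "type": "lmcache",  # assumed value for turning LMCache on; check the docs
    "length": 256,      # maximum number of response tokens
}

response = runtime.invoke_endpoint(
    EndpointName="lmcache-demo",  # placeholder endpoint name
    ContentType="application/json",
    Body=json.dumps(payload),
)
print(response["Body"].read().decode("utf-8"))
```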
Support
AWS infrastructure support
AWS Support is a one-on-one, fast-response support channel staffed 24x7x365 with experienced technical support engineers. The service helps customers of all sizes and technical abilities to successfully utilize the products and features provided by Amazon Web Services.
