Amazon SageMaker
Amazon SageMaker is a fully managed platform that enables developers and data scientists to quickly and easily build, train, and deploy machine learning models at any scale. Amazon SageMaker removes the barriers and complexity that typically slow down developers who want to use machine learning. The service includes modules that can be used together or independently to build, train, and deploy your machine learning models.

Liquid LFM 7B (L40S) Free trial
By: Liquid AI
Latest Version: 1.0.0
Liquid LFM 7B is designed to handle complex tasks, offering an optimal balance between size and output quality.
Product Overview
LFM-7B is specifically optimized for response quality, accuracy, and usefulness. To assess its chat capabilities, we leverage a diverse jury of frontier LLMs to compare responses generated by LFM-7B against other models in the 7B-8B parameter category. This approach reduces individual biases and produces more reliable comparisons.
Key Data
Version: 1.0.0
Type: Model Package
Highlights
Innovative Model Architecture: Liquid AI's Foundation Models utilize a unique architecture that combines liquid neural networks and non-transformer designs, allowing these models to be efficient in memory usage and capable of handling sequential data, such as text, video, and real-time signals. This setup optimizes performance while minimizing computational demands.
Enhanced Adaptability and Real-Time Learning: Unlike conventional models, LFMs can adapt their internal processes based on new inputs in real time, making them highly responsive.
Efficiency in Long-Context Processing: Liquid AI's models can efficiently process extended input sequences without the steep memory and processing requirements typical of transformer-based models, supporting applications like document summarization and complex chatbot interactions with minimal hardware demands. With LFMs, it’s possible to fit up to 1 million tokens’ worth of data into 16 gigabytes of memory.
Pricing Information
Use this tool to estimate the software and infrastructure costs based on your configuration choices. Your actual usage and costs might differ from this estimate; they will be reflected on your monthly AWS billing reports.
Contact us to request contract pricing for this product.
Estimating your costs
Choose your region and launch option to see the pricing details. Then, modify the estimated price by choosing different instance types.
Software Pricing
Model Realtime Inference: $3.00/hr, running on ml.g6e.2xlarge
Model Batch Transform: $10.00/hr, running on ml.g4dn.12xlarge
Infrastructure Pricing
With Amazon SageMaker, you pay only for what you use. Training and inference are billed by the second, with no minimum fees and no upfront commitments. Pricing within Amazon SageMaker is broken down by on-demand ML instances, ML storage, and fees for data processing in notebooks and inference instances.
Learn more about SageMaker pricing
SageMaker Realtime Inference: $2.8026/host/hr, running on ml.g6e.2xlarge
SageMaker Batch Transform: $4.89/host/hr, running on ml.g4dn.12xlarge
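As a worked example of how the software and infrastructure rates combine, the Python sketch below estimates the total hourly and monthly cost of the vendor-recommended real-time configuration. It is illustrative only; rates vary by region and are subject to change, and the continuous-uptime assumption is ours.

```python
# Illustrative estimate: combines the listed software and infrastructure rates
# for the vendor-recommended real-time instance (ml.g6e.2xlarge).
# Actual prices vary by region; check the AWS pricing pages for current rates.

software_per_hr = 3.00          # Model Realtime Inference software charge, USD/hr
infrastructure_per_hr = 2.8026  # SageMaker Realtime Inference on ml.g6e.2xlarge, USD/host/hr

hours_per_month = 730           # assumption: one endpoint running continuously

hourly_total = software_per_hr + infrastructure_per_hr
monthly_total = hourly_total * hours_per_month

print(f"Estimated hourly cost:  ${hourly_total:.4f}")    # ~$5.80/hr
print(f"Estimated monthly cost: ${monthly_total:,.2f}")  # ~$4,236 for an always-on endpoint
```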
About Free trial
Try this product for 7 days. There will be no software charges, but AWS infrastructure charges still apply. Free Trials will automatically convert to a paid subscription upon expiration.
Model Realtime Inference
For model deployment as a real-time endpoint in Amazon SageMaker, the software is priced hourly, and the rate can vary by instance type. Additional infrastructure costs, taxes, or fees may apply.

Instance Type | Realtime Inference/hr
---|---
ml.g6e.xlarge | $3.00
ml.g6e.2xlarge (Vendor Recommended) | $3.00
ml.g6e.4xlarge | $3.00
ml.g6e.8xlarge | $3.00
ml.g6e.16xlarge | $3.00
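For deployment, a minimal sketch using the SageMaker Python SDK is shown below. It assumes you have an IAM execution role and the model package ARN from your AWS Marketplace subscription; the ARN and endpoint name are placeholders, not values from this listing.

```python
# Minimal deployment sketch (assumptions: SageMaker Python SDK installed, a valid
# IAM execution role, and the model package ARN from your Marketplace subscription).
import sagemaker
from sagemaker import ModelPackage

session = sagemaker.Session()
role = sagemaker.get_execution_role()  # works inside SageMaker notebooks

# Placeholder ARN -- replace with the ARN shown after subscribing in AWS Marketplace.
model_package_arn = "arn:aws:sagemaker:<region>:<account>:model-package/<liquid-lfm-7b-version>"

model = ModelPackage(
    role=role,
    model_package_arn=model_package_arn,
    sagemaker_session=session,
)

# Deploy to the vendor-recommended instance type as a real-time endpoint.
predictor = model.deploy(
    initial_instance_count=1,
    instance_type="ml.g6e.2xlarge",
    endpoint_name="liquid-lfm-7b",  # hypothetical endpoint name
)
```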
Usage Information
Model input and output details
Input
Summary
The model leverages OpenAI's chat format as detailed in the OpenAI API documentation, with the following key specifics:
- Supports text-only interactions.
- Requires the `model` parameter to be explicitly set to `/opt/ml/model` for proper functionality.
Input MIME type: application/json
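To make the input spec concrete, here is a minimal sketch of a request body, assuming the standard OpenAI chat completions schema; the prompt text and generation parameters are illustrative placeholders.

```python
import json

# Illustrative request body following the OpenAI chat completions schema.
# The "model" field must be "/opt/ml/model" per the input summary above;
# the message content and max_tokens are placeholders.
payload = {
    "model": "/opt/ml/model",
    "messages": [
        {"role": "system", "content": "You are a helpful assistant."},
        {"role": "user", "content": "Summarize the benefits of liquid neural networks."},
    ],
    "max_tokens": 256,
}

body = json.dumps(payload)  # serialized as application/json
```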
Output
Summary
The model produces output consistent with OpenAI's chat template, as outlined in the OpenAI API documentation.
Output MIME type: application/json
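A minimal invocation sketch using boto3's SageMaker runtime client is shown below, assuming an endpoint named liquid-lfm-7b (hypothetical) and the request body constructed in the input example above; the response is parsed according to the OpenAI chat completions schema.

```python
import json
import boto3

runtime = boto3.client("sagemaker-runtime")

# "liquid-lfm-7b" is a hypothetical endpoint name -- use the name of the
# endpoint you deployed from the model package.
response = runtime.invoke_endpoint(
    EndpointName="liquid-lfm-7b",
    ContentType="application/json",
    Accept="application/json",
    Body=body,  # the JSON request body from the input example above
)

# The output follows the OpenAI chat completions schema, so the generated
# text lives under choices[0].message.content.
result = json.loads(response["Body"].read())
print(result["choices"][0]["message"]["content"])
```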
Sample notebook
Additional Resources
End User License Agreement
By subscribing to this product you agree to the terms and conditions outlined in the product's End User License Agreement (EULA).
Support Information
AWS Infrastructure
AWS Support is a one-on-one, fast-response support channel that is staffed 24x7x365 with experienced and technical support engineers. The service helps customers of all sizes and technical abilities to successfully utilize the products and features provided by Amazon Web Services.
Learn More
Refund Policy
We don’t offer refunds, but we’re happy to assist! Contact us anytime at [support+aws@liquid.ai](mailto:support+aws@liquid.ai).
Customer Reviews
There are currently no reviews for this product.