Overview
Neural Magic's DeepSparse AMI packages the DeepSparse inference runtime, letting you launch an EC2 instance that runs state-of-the-art machine learning models with GPU-class performance on x86 instance types. You can run your machine learning workloads without relying on specialized hardware accelerators: simply select from a broad range of instance types based on the performance and cost requirements of your use case and deploy.
The deployed DeepSparse instance also comes with built-in benchmarking capabilities to help you assess the performance and cost benefits of your deployed model in a variety of scenarios.
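As an illustration of the kind of measurement this enables, here is a minimal Python latency sketch around a DeepSparse pipeline. It is not the bundled benchmarking tooling itself (the deepsparse.benchmark CLI shipped with DeepSparse is the more thorough option); the task name and the assumption that DeepSparse falls back to a default sparse model when no model path is given follow common DeepSparse examples and should be verified against the documentation.
```python
# Minimal latency sketch for a DeepSparse pipeline (illustrative only;
# use the bundled benchmarking tools for rigorous measurements).
import time

from deepsparse import Pipeline

# With no model_path supplied, DeepSparse is expected to pull a default
# sparse model for the task from SparseZoo (assumption based on the docs).
pipeline = Pipeline.create(task="sentiment-analysis")

pipeline(sequences=["warm-up request"])  # warm up the engine once

iterations = 100
start = time.perf_counter()
for _ in range(iterations):
    pipeline(sequences=["DeepSparse runs this on plain x86 CPUs"])
elapsed = time.perf_counter() - start

print(f"average latency: {1000 * elapsed / iterations:.2f} ms per request")
```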
This AMI contains the DeepSparse Enterprise Cloud Distribution, which allows for commercial deployments.
Highlights
- Run ML models on a broad range of x86 instances without the need for hardware acceleration
- High performance and low-cost ML inferencing of popular SOTA models
- Simple APIs for easy integration into existing applications
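To give a concrete, hedged picture of what those APIs look like, the sketch below uses the DeepSparse Pipeline interface for a question-answering task. The task name and the question/context fields follow public DeepSparse examples, and the default model download when no model path is specified is an assumption; adjust to your own model and task.
```python
# Sketch of integrating DeepSparse into an application via its Pipeline API.
from deepsparse import Pipeline

# Question-answering pipeline; without an explicit model_path, DeepSparse
# is expected to fetch a default sparse model for the task (assumption).
qa_pipeline = Pipeline.create(task="question-answering")

result = qa_pipeline(
    question="Which port does the DeepSparse Server listen on?",
    context="The AMI opens port 22 for SSH and port 5543 for the DeepSparse Server.",
)
print(result)  # structured output; exact fields depend on the task schema
```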
Details
Typical total price
$0.277/hour
Features and programs
Financing for AWS Marketplace purchases
Pricing
Free trial
Instance type | Product cost/hour | EC2 cost/hour | Total/hour |
---|---|---|---|
m5.large | $0.023 | $0.096 | $0.119 |
m5.xlarge | $0.045 | $0.192 | $0.237 |
m5.2xlarge | $0.09 | $0.384 | $0.474 |
m5.4xlarge | $0.18 | $0.768 | $0.948 |
m5.8xlarge | $0.36 | $1.536 | $1.896 |
m5.12xlarge | $0.54 | $2.304 | $2.844 |
m5.16xlarge | $0.72 | $3.072 | $3.792 |
m5.24xlarge | $1.08 | $4.608 | $5.688 |
m5.metal | $1.08 | $4.608 | $5.688 |
m6i.large | $0.023 | $0.096 | $0.119 |
Additional AWS infrastructure costs
Type | Cost |
---|---|
EBS General Purpose SSD (gp3) volumes | $0.08 per GB-month of provisioned storage |
Vendor refund policy
No refunds
Legal
Vendor terms and conditions
Content disclaimer
Delivery details
64-bit (x86) Amazon Machine Image (AMI)
Amazon Machine Image (AMI)
An AMI is a virtual image that provides the information required to launch an instance. Amazon EC2 (Elastic Compute Cloud) instances are virtual servers on which you can run your applications and workloads, offering varying combinations of CPU, memory, storage, and networking resources. You can launch as many instances from as many different AMIs as you need.
Version release notes
Decoder-only text generation LLMs are now optimized in DeepSparse, delivering state-of-the-art performance with sparsity: https://github.com/neuralmagic/deepsparse/releases/tag/v1.6.0
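As a hedged sketch of the new capability, the snippet below creates a text-generation pipeline through the generic Pipeline interface. The task name and the prompt field are based on a reading of the 1.6 release, and the model path is a placeholder; verify both against the linked release notes and SparseZoo before relying on them.
```python
# Sketch of DeepSparse text generation as introduced around v1.6
# (task name and input fields may differ by version; see the release notes).
from deepsparse import Pipeline

# Placeholder model location: substitute a SparseZoo LLM stub or a local
# deployment directory containing a sparse decoder-only model.
MODEL = "./deployment"  # placeholder only

text_pipeline = Pipeline.create(task="text-generation", model_path=MODEL)

output = text_pipeline(prompt="Explain sparsity in one sentence.")
print(output)  # generated text; the exact output schema depends on the version
```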
Additional details
Usage instructions
- Subscribe to the software and continue to "Launch", making sure to choose your EC2 Instance Type.
- In the "Security Group Settings" section, click "Create New Based On Seller Settings". You will see that Port 22 is open for SSH access and Port 5543 is open so the DeepSparse Server can accept traffic. If you would like to use different ports with an existing security group, keep track of any changed ports for the remaining usage instructions (a sample client call is sketched after these instructions).
- In the "Key Pair Settings" section, use an existing key or create a new key pair. You will need this key to SSH into the instance.
- Click "Launch" to start your EC2 instance enabled with the DeepSparse Inference Runtime. You can find connection instructions on the instance page, using your SSH key and the IPv4 address of your new instance.
Full technical instructions available at: https://github.com/neuralmagic/deepsparse/tree/rs/aws-ami-example/examples/aws-ami
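Once the DeepSparse Server is running on the instance and port 5543 is reachable, a client call might look like the Python sketch below. The instance IP is a placeholder, and the /predict route and "sequences" field follow common DeepSparse Server examples and may differ by server version and configuration; the server's interactive /docs page lists the exact endpoints for your deployment.
```python
# Minimal client for a DeepSparse Server running on the launched instance.
# INSTANCE_IP, the /predict route, and the "sequences" field are assumptions;
# consult the server's /docs page for the exact schema of your deployment.
import requests

INSTANCE_IP = "203.0.113.10"  # placeholder: your instance's public IPv4 address
url = f"http://{INSTANCE_IP}:5543/predict"

response = requests.post(
    url,
    json={"sequences": ["DeepSparse makes CPU inference fast"]},
    timeout=30,
)
response.raise_for_status()
print(response.json())
```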
Resources
Support
Vendor support
The Neural Magic team provides multiple channels for customers to get help with our product and find answers. Users may browse our documentation ( https://docs.neuralmagic.com ) to learn more about DeepSparse or post questions, bugs, and feature requests ( https://github.com/neuralmagic/deepsparse/issues ) via our issue queue. Additionally, anyone can interact with other DeepSparse users and staff via the Neural Magic Community ( https://discuss.neuralmagic.com ).
For options on contract or Enterprise Support, please reach out to Neural Magic Sales ( https://neuralmagic.com/deepsparse/#form ) with any questions.
AWS infrastructure support
AWS Support is a one-on-one, fast-response support channel that is staffed 24x7x365 with experienced and technical support engineers. The service helps customers of all sizes and technical abilities to successfully utilize the products and features provided by Amazon Web Services.