
Overview
Use NeoPulse® to build models for most types of machine learning problems, including, but not limited to, sentiment analysis, object detection, object recognition, classification, and regression. Novices and experts can easily create AI models, using custom data, with as little as 14 lines of code in the NeoPulse® Modeling Language (NML). Combined with the power of SageMaker, it is possible to rapidly train, deploy, and scale models both in the cloud and on devices.
Highlights
- SageMaker provides the infrastructure for training and hosting deep learning models for real-time or batch inference. This means zero installation hassle, letting you focus on creating deep learning models.
- SageMaker output contains a trained “Portable Inference Model” (PIM) that can be run on the soon-to-be-released NeoPulse® Query Runtime (NPQR) 3.0 on any machine, in multiple cloud environments, and on IoT devices.
- NeoPulse® has been used by both startups and large enterprises to significantly reduce the cost of running AI projects – in many cases by over 90%.
Details
Pricing
| Dimension | Description | Cost/host/hour |
|---|---|---|
| ml.p2.xlarge Inference (Batch) Recommended | Model inference on the ml.p2.xlarge instance type, batch mode | $9.00 |
| ml.p2.xlarge Inference (Real-Time) Recommended | Model inference on the ml.p2.xlarge instance type, real-time mode | $6.00 |
| ml.p3.2xlarge Training Recommended | Algorithm training on the ml.p3.2xlarge instance type | $12.00 |
| ml.p2.8xlarge Inference (Batch) | Model inference on the ml.p2.8xlarge instance type, batch mode | $12.00 |
| ml.p3.2xlarge Inference (Batch) | Model inference on the ml.p3.2xlarge instance type, batch mode | $12.00 |
| ml.p2.16xlarge Inference (Batch) | Model inference on the ml.p2.16xlarge instance type, batch mode | $9.00 |
| ml.p3.8xlarge Inference (Batch) | Model inference on the ml.p3.8xlarge instance type, batch mode | $12.00 |
| ml.p3.16xlarge Inference (Batch) | Model inference on the ml.p3.16xlarge instance type, batch mode | $12.00 |
| ml.p2.8xlarge Inference (Real-Time) | Model inference on the ml.p2.8xlarge instance type, real-time mode | $8.00 |
| ml.p2.16xlarge Inference (Real-Time) | Model inference on the ml.p2.16xlarge instance type, real-time mode | $6.00 |
Vendor refund policy
You can cancel at any time. We do not offer refunds.
Legal
Vendor terms and conditions
Content disclaimer
Delivery details
Amazon SageMaker algorithm
An Amazon SageMaker algorithm is a machine learning model that requires your training data to make predictions. Use the included training algorithm to generate your unique model artifact. Then deploy the model on Amazon SageMaker for real-time inference or batch processing. Amazon SageMaker is a fully managed platform for building, training, and deploying machine learning models at scale.
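As a rough illustration of that workflow, the sketch below starts a training job for a Marketplace algorithm using the SageMaker Python SDK. It is not taken from the vendor's documentation: the algorithm ARN, IAM role, and S3 path are placeholders you must replace with your own values, and the sample notebook linked later on this page shows the exact steps the vendor recommends.

```python
# Minimal sketch (not from the vendor docs): training a Marketplace algorithm
# with the SageMaker Python SDK. The ARN, role, and S3 path are placeholders.
import sagemaker
from sagemaker.algorithm import AlgorithmEstimator

session = sagemaker.Session()
role = "arn:aws:iam::<account-id>:role/<sagemaker-execution-role>"  # placeholder

estimator = AlgorithmEstimator(
    algorithm_arn="arn:aws:sagemaker:<region>:<account-id>:algorithm/<neopulse-listing>",  # placeholder
    role=role,
    instance_count=1,
    instance_type="ml.p3.2xlarge",  # recommended training instance (see Pricing)
    sagemaker_session=session,
)

# The training channel points at the zipped input (data plus .nml file) in S3.
estimator.fit({"training": "s3://<your-bucket>/neopulse/input/train.zip"})
```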
Version release notes
Major bug fixes and feature enhancements.
Additional details
Inputs
- Summary
Usage Instructions
The NeoPulse® framework is quite different from traditional algorithms on the SageMaker platform. Our application has been adapted to work with SageMaker without losing any of the features of the NeoPulse® framework, and it is designed to be a one-stop solution for your application's AI needs. Whereas traditional SageMaker algorithms require only the data as input for training, we require an additional file alongside the data: a proprietary language file with the extension ".nml", called the NeoPulse Modeling Language (NML) file. Think of an NML file as a script that can be developed easily with some knowledge of NML. It contains the relative paths to the data to be trained on or used for inference, the type of data being passed, and other specifications that are explained in our documentation.
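Since the listing's input MIME type is application/zip (see below), a reasonable way to prepare that combined input is to zip the data and the .nml file together and upload the archive to S3. The following sketch assumes a hypothetical layout (an `input/` directory containing the data, a CSV of paths, and a `model.nml` file); the sample notebook defines the actual expected structure.

```python
# Minimal sketch, assuming a hypothetical input layout: package the dataset
# directory and the .nml file into a single zip archive (the listing's input
# MIME type is application/zip) and upload it to S3 for training.
import shutil
import sagemaker

# Assume ./input/ contains your data files, a CSV of relative paths, and model.nml.
archive = shutil.make_archive("train_input", "zip", root_dir="input")

session = sagemaker.Session()
s3_uri = session.upload_data(
    archive,
    bucket="<your-bucket>",          # placeholder
    key_prefix="neopulse/input",
)
print("Training input uploaded to:", s3_uri)
```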
Sample Jupyter notebook
To get started quickly, we have created a sample Jupyter notebook. The notebook prepares the data, trains a model, creates a live inference endpoint, generates predictions, and removes the endpoint.
A link to the sample notebook can be found here
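The deploy, predict, and clean-up steps from that notebook flow might look roughly like the sketch below, continuing from the `estimator` trained earlier. The payload and content type shown are assumptions for illustration; the sample notebook defines the exact request format.

```python
# Minimal sketch of the notebook's deploy/predict/clean-up steps, assuming the
# `estimator` trained above. The inference payload shown here is a placeholder;
# the sample notebook defines the exact request format.
predictor = estimator.deploy(
    initial_instance_count=1,
    instance_type="ml.p2.xlarge",   # recommended real-time inference instance
)

with open("sample_input.zip", "rb") as f:            # hypothetical input file
    result = predictor.predict(
        f.read(),
        initial_args={"ContentType": "application/zip"},
    )
print(result)

# Remove the endpoint when finished to stop incurring hourly charges.
predictor.delete_endpoint()
```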
Input data formats
The data needs to be prepared and stored in a specific format, which is explained in the notebook above.
Sample input data
The data needs to be downloaded, and a CSV containing the paths to the downloaded data is required. To help with creating this, we provide a script that performs these steps. Please find the link to the script here
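For illustration only, the sketch below walks a directory of downloaded files and writes a CSV of relative paths with a label column. The column names and exact format are assumptions; the vendor's script and notebook define the real layout.

```python
# Minimal sketch, assuming a hypothetical layout: list downloaded files in a
# CSV of relative paths. Column names and format are illustrative assumptions.
import csv
import os

data_dir = "input/images"                     # hypothetical download location
with open("input/data.csv", "w", newline="") as out:
    writer = csv.writer(out)
    writer.writerow(["path", "label"])        # assumed header
    for root, _dirs, files in os.walk(data_dir):
        for name in files:
            rel_path = os.path.relpath(os.path.join(root, name), "input")
            label = os.path.basename(root)    # e.g., folder name as class label
            writer.writerow([rel_path, label])
```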
- Input MIME type
- application/zip
Resources
Support
Vendor support
Contact NeoPulse® support at: support@neopulse.ai
AWS infrastructure support
AWS Support is a one-on-one, fast-response support channel that is staffed 24x7x365 with experienced and technical support engineers. The service helps customers of all sizes and technical abilities to successfully utilize the products and features provided by Amazon Web Services.