
Overview
Text to Speech Synthesizer is a next-generation solution for lifelike speech synthesis. It converts text into human-like speech and supports SSML tags for precise control over pauses, numbers, and date/time formatting. The model is CPU-optimized and cost-effective, offers multiple voices and languages, and can be customized for both short and long text inputs. Upgrade your communication strategy with Text to Speech Synthesizer and elevate user experiences today!
N.B.: Only batch transform jobs are supported.
Highlights
- This model converts text input to speech in specific voices and languages. Additionally, it supports several SSML tags to customize the spoken text. It is tuned for CPU use and works with both short and long text inputs; see the example SSML input after this list. **Some of the voices supported for SSML in English are:** Harvard, Blizzard, Ljspeech, and Marry. **Some of the supported languages are:** English ('en-us'), German ('de-de'), French ('fr-fr'), and Italian ('it-it'). **Some of the supported SSML tags are:** speak, break, say-as, and voice.
- The solution can be used in industries such as media and entertainment, software, mobile applications, hospitality, healthcare, and legal to provide text-to-speech services. It can also be used to build solutions that require text to speech, such as voice bots and virtual assistants.
- Need more machine learning, deep learning, NLP, or quantum computing solutions? Reach out to us at Harman DTS.
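
As an illustration of how these pieces fit together, here is a minimal sketch of an SSML document that combines the tags named above (speak, voice, break, say-as) with one of the English voices and the 'en-us' language code. The attribute names follow common SSML conventions and are assumptions, not confirmed by this listing.

```python
# Minimal sketch: compose an SSML document using the tags named in the highlights
# (speak, voice, break, say-as). Attribute names follow common SSML conventions
# and are assumptions, not confirmed by this listing.
ssml_text = """<speak>
  <voice name="Harvard" language="en-us">
    Welcome to the Text to Speech Synthesizer demo.
    <break time="500ms"/>
    Your appointment is on
    <say-as interpret-as="date" format="mdy">10/31/2025</say-as>
    at <say-as interpret-as="time">2:30pm</say-as>.
  </voice>
</speak>
"""

# Save the document as text/xml, the input MIME type listed under "Inputs" below.
with open("tts_input.xml", "w", encoding="utf-8") as f:
    f.write(ssml_text)
```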
Pricing
| Dimension | Description | Cost/host/hour |
|---|---|---|
| ml.m5.large Inference (Batch), recommended | Model inference on the ml.m5.large instance type, batch mode | $100.00 |
| ml.m4.4xlarge Inference (Batch) | Model inference on the ml.m4.4xlarge instance type, batch mode | $100.00 |
| ml.m5.4xlarge Inference (Batch) | Model inference on the ml.m5.4xlarge instance type, batch mode | $100.00 |
| ml.m4.16xlarge Inference (Batch) | Model inference on the ml.m4.16xlarge instance type, batch mode | $100.00 |
| ml.m5.2xlarge Inference (Batch) | Model inference on the ml.m5.2xlarge instance type, batch mode | $100.00 |
| ml.p3.16xlarge Inference (Batch) | Model inference on the ml.p3.16xlarge instance type, batch mode | $100.00 |
| ml.m4.2xlarge Inference (Batch) | Model inference on the ml.m4.2xlarge instance type, batch mode | $100.00 |
| ml.c5.2xlarge Inference (Batch) | Model inference on the ml.c5.2xlarge instance type, batch mode | $100.00 |
| ml.p3.2xlarge Inference (Batch) | Model inference on the ml.p3.2xlarge instance type, batch mode | $100.00 |
| ml.c4.2xlarge Inference (Batch) | Model inference on the ml.c4.2xlarge instance type, batch mode | $100.00 |
Vendor refund policy
We do not provide any usage-related refunds at this time.
Delivery details
Amazon SageMaker model
An Amazon SageMaker model package is a pre-trained machine learning model ready to use without additional training. Use the model package to create a model on Amazon SageMaker for real-time inference or batch processing. Amazon SageMaker is a fully managed platform for building, training, and deploying machine learning models at scale.
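
Because only batch transform jobs are supported, the typical workflow is to create a model from the subscribed model package and launch a batch transform job. The sketch below uses the SageMaker Python SDK; the model package ARN, IAM role, and S3 URIs are placeholders, and the instance type is the recommended ml.m5.large from the pricing table.

```python
# Sketch only: run a batch transform job against the subscribed model package.
# The ARN, IAM role, and S3 URIs are placeholders and must be replaced.
import sagemaker
from sagemaker import ModelPackage

session = sagemaker.Session()
role = "arn:aws:iam::123456789012:role/MySageMakerRole"  # placeholder
model_package_arn = (
    "arn:aws:sagemaker:us-east-1:111122223333:model-package/your-tts-package"  # placeholder
)

model = ModelPackage(
    role=role,
    model_package_arn=model_package_arn,
    sagemaker_session=session,
)

# ml.m5.large is the recommended batch instance type in the pricing table.
transformer = model.transformer(
    instance_count=1,
    instance_type="ml.m5.large",
    output_path="s3://my-bucket/tts-output/",  # placeholder
)

# Input documents are SSML with MIME type text/xml (see "Inputs" below).
transformer.transform(
    data="s3://my-bucket/tts-input/tts_input.xml",  # placeholder
    content_type="text/xml",
)
transformer.wait()
```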
Version release notes
Feature enhancements and bug fixes
Additional details
Inputs
- Summary
The text is provided in SSML format. Possible tags: speak (encloses the content), phoneme (custom pronunciation), lexicon (references a lexicon), sub (text substitution), break (pauses), say-as (text interpretation), voice (voice profiles), and p (paragraphs).
- Input MIME type
- text/xml
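
Since inference runs only in batch mode, the text/xml input needs to be staged in S3 before the transform job starts. A minimal sketch with boto3, using hypothetical bucket and key names:

```python
# Sketch: upload the text/xml SSML input to S3 for the batch transform job.
# Bucket and key names are hypothetical placeholders.
import boto3

s3 = boto3.client("s3")
s3.upload_file(
    Filename="tts_input.xml",
    Bucket="my-bucket",
    Key="tts-input/tts_input.xml",
    ExtraArgs={"ContentType": "text/xml"},  # matches the listed input MIME type
)
```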
Resources
Support
Vendor support
Business hours email support: marketplaceSupp@harman.com
AWS infrastructure support
AWS Support is a one-on-one, fast-response support channel that is staffed 24x7x365 with experienced and technical support engineers. The service helps customers of all sizes and technical abilities to successfully utilize the products and features provided by Amazon Web Services.