
Overview
The LLM Shield is a cutting-edge security solution designed to protect your language model from malicious prompts and attacks. Acting as a robust LLM firewall, it utilizes sophisticated algorithms to predict whether a prompt is a jailbreak attempt, safe, or an injection attack. This proactive approach ensures your LLM-based systems operate securely and reliably, minimizing the risk of unauthorized access or manipulation.
It utilizes state-of-the-art technology to accurately predict and categorize prompts. It offers instant protection against injection attacks and jailbreak attempts, safeguarding your LLM in real time. It is easy to integrate with existing systems, providing seamless protection without complicating your workflow.
Highlights
- Protects your LLMs from malicious prompts, ensuring safe and reliable operation.
- Automates threat detection, saving time and resources while maintaining high security standards.
Details
Unlock automation with AI agent solutions

Features and programs
Financing for AWS Marketplace purchases
Pricing
Dimension | Description | Cost/host/hour |
|---|---|---|
ml.m5.large Inference (Batch) Recommended | Model inference on the ml.m5.large instance type, batch mode | $100.00 |
ml.t2.medium Inference (Real-Time) Recommended | Model inference on the ml.t2.medium instance type, real-time mode | $5.00 |
ml.m4.4xlarge Inference (Batch) | Model inference on the ml.m4.4xlarge instance type, batch mode | $100.00 |
ml.m5.4xlarge Inference (Batch) | Model inference on the ml.m5.4xlarge instance type, batch mode | $100.00 |
ml.m4.16xlarge Inference (Batch) | Model inference on the ml.m4.16xlarge instance type, batch mode | $100.00 |
ml.m5.2xlarge Inference (Batch) | Model inference on the ml.m5.2xlarge instance type, batch mode | $100.00 |
ml.p3.16xlarge Inference (Batch) | Model inference on the ml.p3.16xlarge instance type, batch mode | $100.00 |
ml.m4.2xlarge Inference (Batch) | Model inference on the ml.m4.2xlarge instance type, batch mode | $100.00 |
ml.c5.2xlarge Inference (Batch) | Model inference on the ml.c5.2xlarge instance type, batch mode | $100.00 |
ml.p3.2xlarge Inference (Batch) | Model inference on the ml.p3.2xlarge instance type, batch mode | $100.00 |
Vendor refund policy
We do not provide any usage related refunds at this time.
How can we make this page better?
Legal
Vendor terms and conditions
Content disclaimer
Delivery details
Amazon SageMaker model
An Amazon SageMaker model package is a pre-trained machine learning model ready to use without additional training. Use the model package to create a model on Amazon SageMaker for real-time inference or batch processing. Amazon SageMaker is a fully managed platform for building, training, and deploying machine learning models at scale.
Version release notes
Feature updates and bug fixes
Additional details
Inputs
- Summary
A json document conatining the text to be analyzed in the format { "Prompt": "Some Text" }
- Limitations for input type
- Test should be utf-8 encoded
- Input MIME type
- application/json
Input data descriptions
The following table describes supported input data fields for real-time inference and batch transform.
Field name | Description | Constraints | Required |
|---|---|---|---|
Prompt | Prompt being passed on to the LLM | Type: FreeText
Limitations: Less than 1024 tokens | Yes |
Resources
Support
Vendor support
Business hours email support marketplaceSupp@harman.com
AWS infrastructure support
AWS Support is a one-on-one, fast-response support channel that is staffed 24x7x365 with experienced and technical support engineers. The service helps customers of all sizes and technical abilities to successfully utilize the products and features provided by Amazon Web Services.
Similar products
