Overview
This pre-configured AMI includes a Virtuoso 08.03.3333 instance and the OPAL Middleware Layer, collectively providing a powerful platform for creating, deploying, and using large language model (LLM)-based AI agents with the following capabilities:
- Direct execution of declarative query languages such as SQL, SPARQL, and GraphQL, enabling powerful RAG and GraphRAG processing pipelines that leverage a variety of loosely coupled data, information, and knowledge sources.
- Loosely coupled integration with any web service described using an OpenAPI-compliant YAML or JSON description document.
- Support for a variety of large language models from providers such as OpenAI, Anthropic, Google, DeepSeek, Alibaba, Meta, Microsoft, and others.
- Loosely coupled mechanisms for identity, identification, authentication, authorization, and storage.
- Support for the Model Context Protocol (MCP) as both a server and a client, where functioning as a client allows the incorporation of tools from other MCP servers.
- Creation and deployment of AI assistants using natural language descriptions expressed in Markdown, such as the Virtuoso Support Assistant, Virtuoso Database Admin Assistant, RSS and OPML Assistant, ODBC and JDBC Driver Assistant, and others.
- Execution of batch operations against LLMs that support batching through their respective implementations of the Batch Processing API, as provided by OpenAI and others.
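Since the platform exposes Virtuoso's standard SPARQL endpoint (at `/sparql` on the same port as Conductor), the declarative-query capability above can be exercised over plain HTTP. The sketch below is a minimal illustration using only Python's standard library; the `localhost` host is a placeholder for your instance's EC2 DNS name, and actually executing the request assumes a running, initialized instance.

```python
from urllib.parse import urlencode
from urllib.request import Request, urlopen

def build_sparql_request(endpoint: str, query: str) -> Request:
    """Build an HTTP GET request for a SPARQL endpoint, asking for JSON results."""
    params = urlencode({"query": query, "format": "application/sparql-results+json"})
    return Request(f"{endpoint}?{params}")

# Replace localhost with your instance's EC2 DNS name.
endpoint = "http://localhost:8890/sparql"
query = "SELECT DISTINCT ?g WHERE { GRAPH ?g { ?s ?p ?o } } LIMIT 10"
req = build_sparql_request(endpoint, query)

# Uncomment on a running instance to list named graphs:
# with urlopen(req) as resp:
#     print(resp.read().decode())
```

The same pattern applies to any RAG or GraphRAG pipeline step that needs query results as structured JSON rather than HTML.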
Highlights
- Simplified development and deployment of secure, cutting-edge AI agents across both public and private networks. These AI agents are loosely coupled to data spaces (databases, knowledge graphs, and filesystems).
- Model Context Protocol (MCP) support as a client or server, plus Agent-2-Agent (A2A) Protocol support for agentic workflows.

- Loosely coupled identity, identification, authentication, authorization, and storage.
Details
Features and programs
Financing for AWS Marketplace purchases
Pricing
Free trial
| Dimension | Cost/hour |
|---|---|
| m4.large (Recommended) | $0.435 |
| m3.2xlarge | $0.87 |
| r5.4xlarge | $1.74 |
| r5.12xlarge | $5.22 |
| r4.large | $0.435 |
| m4.16xlarge | $5.916 |
| r4.xlarge | $0.435 |
| r4.8xlarge | $3.48 |
| r5.xlarge | $0.435 |
| m4.2xlarge | $0.87 |
Vendor refund policy
In line with Amazon Web Services AMI usage terms.
Legal
Vendor terms and conditions
Content disclaimer
Delivery details
64-bit (x86) Amazon Machine Image (AMI)
Amazon Machine Image (AMI)
An AMI is a virtual image that provides the information required to launch an instance. Amazon EC2 (Elastic Compute Cloud) instances are virtual servers on which you can run your applications and workloads, offering varying combinations of CPU, memory, storage, and networking resources. You can launch as many instances from as many different AMIs as you need.
Version release notes
This pre-configured AMI includes a Virtuoso 08.03.3333 instance and the OPAL Middleware Layer, collectively providing a powerful platform for creating, deploying, and using large language model (LLM)-based AI agents with the following capabilities:
1. Direct execution of declarative query languages such as SQL, SPARQL, and GraphQL, enabling powerful RAG and GraphRAG processing pipelines that leverage a variety of loosely coupled data, information, and knowledge sources.
2. Loosely coupled integration with any web service described using an OpenAPI-compliant YAML or JSON description document.
3. Support for a variety of large language models from providers such as OpenAI, Anthropic, Google, DeepSeek, Alibaba, Meta, Microsoft, and others.
4. Loosely coupled mechanisms for identity, identification, authentication, authorization, and storage.
5. Support for the Model Context Protocol (MCP) as both a server and a client, where functioning as a client allows the incorporation of tools from other MCP servers.
6. Creation and deployment of AI assistants using natural language descriptions expressed in Markdown, such as the Virtuoso Support Assistant, Virtuoso Database Admin Assistant, RSS and OPML Assistant, ODBC and JDBC Driver Assistant, and others.
7. Execution of batch operations against LLMs that support batching through their respective implementations of the Batch Processing API, as provided by OpenAI and others.
8. Support for the Agent-2-Agent (A2A) Protocol for the construction and deployment of agentic workflows.
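For the batch-operations capability (item 7), providers such as OpenAI accept a JSONL file in which each line is one request. The sketch below builds such a payload with the standard library only; the model name is a placeholder, and uploading the file and creating the batch job are left to the provider's own API.

```python
import json

def make_batch_lines(prompts, model="gpt-4o-mini"):
    """Build JSONL lines in the shape OpenAI's Batch API expects:
    one JSON object per line, each targeting /v1/chat/completions."""
    lines = []
    for i, prompt in enumerate(prompts):
        lines.append(json.dumps({
            "custom_id": f"req-{i}",
            "method": "POST",
            "url": "/v1/chat/completions",
            "body": {"model": model,
                     "messages": [{"role": "user", "content": prompt}]},
        }))
    return "\n".join(lines)

jsonl = make_batch_lines(["Summarize RAG in one sentence.", "What is SPARQL?"])
# Write `jsonl` to a file, upload it, then create the batch job via the provider's API.
```

Each `custom_id` lets the caller match asynchronous results back to the originating request once the batch completes.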
Additional details
Usage instructions
Once this AMI is successfully initialized (which can take 5 minutes or more), you initially set up the OpenLink AI Layer (OPAL) via either of the following:
- http://{ec2-dns-name}:8890/conductor
- via SSH as user "ubuntu"
In either case, follow the guidelines in the setup and installation document at: https://community.openlinksw.com/t/openlink-personal-assistant-vad-installation-configuration-and-usage-guide/4112
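Because initialization can take several minutes, it can help to poll the Conductor endpoint until it responds before beginning setup. The sketch below is a simple readiness check using only Python's standard library; the host argument stands in for your instance's EC2 DNS name, and the timeout values are arbitrary choices, not values from the product documentation.

```python
import time
from urllib.request import urlopen
from urllib.error import URLError

def wait_for_conductor(host, port=8890, timeout=600, interval=15):
    """Poll the Conductor endpoint until it answers with HTTP 200,
    or raise TimeoutError after `timeout` seconds."""
    url = f"http://{host}:{port}/conductor"
    deadline = time.time() + timeout
    while time.time() < deadline:
        try:
            with urlopen(url, timeout=5) as resp:
                if resp.status == 200:
                    return url
        except (URLError, OSError):
            pass  # not up yet; retry after a pause
        time.sleep(interval)
    raise TimeoutError(f"Conductor not reachable at {url}")

# Example (on your machine, with your instance's DNS name):
# wait_for_conductor("ec2-203-0-113-10.compute-1.amazonaws.com")
```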
Resources
Vendor resources
Support
Vendor support
Online, email, and phone support options are available: support@openlinksw.com
AWS infrastructure support
AWS Support is a one-on-one, fast-response support channel that is staffed 24x7x365 with experienced and technical support engineers. The service helps customers of all sizes and technical abilities to successfully utilize the products and features provided by Amazon Web Services.