Listing Thumbnail

    Verbis Graph - GraphRAG Knowledge Retrieval Engine

    Deployed on AWS
    Cloud-native graph-based retrieval engine that integrates vector similarity with knowledge graph traversal to improve retrieval precision over traditional RAG. Produces grounded, explainable responses with source citations, reducing hallucinations and improving reliability in AI applications.

    Overview


    Verbis Graph Engine is a cloud-native graph-enhanced retrieval augmented generation (RAG) platform designed to improve the accuracy and reliability of AI applications. By combining vector similarity search with knowledge graph traversal, Verbis Graph Engine captures relationships between entities across documents, grounds responses in source data, and provides explainable answers with citations.

    KEY FEATURES
    ^^^^^^^^^^^^

    -- GraphRAG Hybrid Retrieval: Combines vector search with knowledge graph traversal to deliver context-aware, relationship-aware results that traditional vector-only retrieval may miss.
    -- Explainable, Grounded Responses: Answers are supported by source citations, improving transparency and trust in AI outputs.
    -- Framework & Ecosystem Integrability: Native compatibility with LangChain, LlamaIndex, AutoGen, CrewAI, and Amazon Bedrock Agents simplifies integration into existing AI workflows.
    -- Production-Ready Performance: High-performance query engine designed for low-latency retrieval and scalable production workloads.

    WHY GRAPHRAG?
    ^^^^^^^^^^^^^

    Traditional RAG systems rely solely on vector embeddings, which identify semantically similar content but can miss critical relationships between concepts. For example, when asking "Which marketing campaigns were affected by the supply chain disruption in Q3?", vector search may return relevant documents but cannot connect related entities across them. GraphRAG traverses relationships between entities, modeling how information is connected, to deliver complete, context-aware answers.

    USE CASES
    ^^^^^^^^^

    -- AI-Powered Knowledge Bases: Build intelligent Q&A systems over enterprise documentation, policies, and procedures.
    -- Customer Support Automation: Deploy support assistants that provide grounded answers with citations.
    -- Research & Analysis: Enable multi-document reasoning across complex datasets.
    -- Compliance & Legal: Support auditable, explainable AI responses for regulated industries.
    -- High-Accuracy Local Knowledge Retrieval: Ideal for scenarios requiring precise retrieval from proprietary or locally hosted knowledge bases.

    HOW IT WORKS
    ^^^^^^^^^^^^

    Verbis Graph Engine enables rapid deployment and integration:

    -- Self-service onboarding
    -- Integration in minutes using Python or JavaScript SDKs
    -- Upload documents to build knowledge graphs and embeddings
    -- Query via REST API or connect to your preferred AI framework

    DEPLOYMENT & ACCESS
    ^^^^^^^^^^^^^^^^^^^

    The paid version includes predefined limits on request volume, data size, and throughput to support evaluation, prototyping, and proof-of-concept workloads. Enterprise deployments and custom scaling options are available.

    INTEGRATIONS
    ^^^^^^^^^^^^

    Amazon Bedrock, LangChain, LlamaIndex, AutoGen, CrewAI, OpenAI, Anthropic Claude, Amazon Neptune

    Built by Prodigy AI Solutions. Enterprise support and custom deployments are available.

    USAGE INSTRUCTIONS
    ^^^^^^^^^^^^^^^^^^

    Deploy the container:

    1. Subscribe to the product in AWS Marketplace.
    2. Launch the container using Amazon ECS or Amazon EKS.
    3. Ensure port 8080 is open in the security group.

    Access the application:

    4. After deployment, obtain the public IP or load balancer URL.
    5. Open your browser and navigate to http://<server-ip>:8080

    First use:

    6. Upload a document for indexing.
    7. Wait for the indexing process to complete.
    8. Ask questions about the document.
    9. Explore the generated knowledge graph.

    (Points 6 to 9 are also covered in this video: https://www.youtube.com/watch?v=JhXqYwpJHlE )

    API access (optional): Open the Swagger UI at http://<server-ip>:8080/docs

    Highlights

    • Reduce compliance and operational risk by grounding AI outputs directly in your source documents, with clear citations and traceable reasoning paths. Unlike vector-only retrieval systems, Verbis Graph Engine builds and queries a knowledge graph to enable multi-hop reasoning across entities and relationships, delivering context-aware answers that can be verified and audited.
    • Retrieve insights that span multiple reports, entities, and relationships - critical for investigations, compliance reviews, risk analysis, and regulated environments. The hybrid graph + vector architecture surfaces cross-document connections that similarity search alone may not detect.
    • Support governance and regulatory requirements with explainable outputs linked directly to source documentation. Enable teams to validate AI-generated responses quickly and maintain structured, audit-ready documentation workflows.

    Details

    Delivery method

    Supported services

    Delivery option
    New delivery option 2

    Latest version

    Operating system
    Linux

    Deployed on AWS
    New

    Introducing multi-product solutions

    You can now purchase comprehensive solutions tailored to use cases and industries.


    Features and programs

    Financing for AWS Marketplace purchases

    AWS Marketplace now accepts line of credit payments through the PNC Vendor Finance program. This program is available to select AWS customers in the US, excluding NV, NC, ND, TN, & VT.

    Pricing

    Verbis Graph - GraphRAG Knowledge Retrieval Engine

    Pricing is based on a fixed subscription cost. You pay the same amount each billing period for unlimited usage of the product. Pricing is prorated, so you're only charged for the number of days you've been subscribed. Subscriptions have no end date and may be canceled any time.
    Additional AWS infrastructure costs may apply. Use the AWS Pricing Calculator to estimate your infrastructure costs.
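As a rough illustration of the day-based proration described above, at the listed $399.00/month price (illustrative arithmetic only; actual AWS Marketplace billing mechanics may differ):

```python
from datetime import date
import calendar

MONTHLY_PRICE = 399.00  # USD, from this listing

def prorated_charge(start: date, end: date) -> float:
    """Charge for a partial month, prorated by days subscribed (inclusive).

    Assumes simple day-based proration within a single calendar month."""
    days_in_month = calendar.monthrange(start.year, start.month)[1]
    days_subscribed = (end - start).days + 1
    return round(MONTHLY_PRICE * days_subscribed / days_in_month, 2)

# Subscribed for the last 10 days of a 30-day month:
print(prorated_charge(date(2025, 11, 21), date(2025, 11, 30)))  # 133.0
```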

    Fixed subscription cost

    Monthly subscription
    $399.00/month

    Vendor refund policy

    Refunds are available only for the unused portion of the included data allowance. Consumed data (uploaded, indexed, processed, or queried) is non-refundable. If a subscription is canceled before the full usage-set is used, a pro-rated refund may be issued based on unused data volume. Refunds are processed via AWS Marketplace in accordance with AWS policies.

    How can we make this page better?

    We'd like to hear your feedback and ideas on how to improve this page.

    Legal

    Vendor terms and conditions

    Upon subscribing to this product, you must acknowledge and agree to the terms and conditions outlined in the vendor's End User License Agreement (EULA) .

    Content disclaimer

    Vendors are responsible for their product descriptions and other product content. AWS does not warrant that vendors' product descriptions or other product content are accurate, complete, reliable, current, or error-free.

    Usage information


    Delivery details

    New delivery option 2

    Supported services:
    • Amazon EKS Anywhere
    • Amazon ECS Anywhere
    • Amazon EKS
    • Amazon ECS
    Container image

    Containers are lightweight, portable execution environments that wrap server application software in a filesystem that includes everything it needs to run. Container applications run on supported container runtimes and orchestration services, such as Amazon Elastic Container Service (Amazon ECS) or Amazon Elastic Kubernetes Service (Amazon EKS). Both eliminate the need for you to install and operate your own container orchestration software by managing and scheduling containers on a scalable cluster of virtual machines.

    Version release notes

    VERBIS GRAPH ENGINE - GraphRAG Knowledge Retrieval Engine, AWS Marketplace Container Edition - initial paid listing. Licensing via AWS Marketplace.

    VERSION: 1.0.0 (Initial Release)
    RELEASE DATE: December 2025
    PRODUCT SKU: VG-GRAPHRAG-PRO-V1
    LICENSE: Apache License 2.0
    PRICING: FREE (AWS infrastructure costs apply)

    ================================================================================

    1. EXECUTIVE SUMMARY
    ================================================================================

    We are pleased to announce the initial release of Verbis Graph Engine PRO on AWS Marketplace. This free container-based product brings GraphRAG (Graph-enhanced Retrieval Augmented Generation) technology to developers and enterprises evaluating next-generation AI knowledge retrieval solutions.

    Unlike traditional vector-only RAG systems, Verbis Graph combines semantic vector search with knowledge graph traversal, enabling retrieval of information that spans multiple documents, entities, and relationships. This hybrid approach is ideal for complex queries requiring reasoning beyond simple similarity matching.

    ================================================================================
    2. WHAT'S NEW IN VERSION 1.0.0

    This initial release establishes the foundation for GraphRAG-powered knowledge retrieval.

    2.1 Core GraphRAG Engine

    o Proprietary Knowledge Graph Retrieval: Combines dense vector embeddings with structured knowledge graph traversal for better accuracy compared to vector-only RAG systems
    o Multi-Document Reasoning: Retrieves and synthesizes information across document boundaries, capturing entity relationships that span your entire knowledge base
    o Workspace-Scoped Isolation: Each user or project gets an isolated workspace with independent document staging, GraphRAG indexing, and chat sessions
    o Async Indexing with Locks: Per-workspace locks prevent indexing conflicts, ensuring data integrity during concurrent operations

    2.2 FastAPI Backend

    o RESTful API: Integrated FastAPI backend with OpenAPI (Swagger) documentation available at /docs, accessible via a dedicated tab in the Streamlit web interface
    o Streaming Chat: Server-Sent Events (SSE) for real-time streaming responses from the GraphRAG retriever
    o CORS Enabled: Cross-Origin Resource Sharing configured for browser-based clients and single-page applications
    o Health Check Endpoint: GET /_stcore/health for container orchestration and load balancer integration
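On the client side, the SSE stream can be consumed with a few lines of parsing. A minimal sketch (the exact event payloads emitted by the engine are an assumption; only the standard `data:` field handling of the SSE format is shown):

```python
def parse_sse(lines):
    """Yield the data payload of each Server-Sent Event.

    Per the SSE format, consecutive `data:` lines belong to one event
    and a blank line terminates it."""
    buf = []
    for line in lines:
        if line.startswith("data:"):
            buf.append(line[len("data:"):].lstrip())
        elif line == "" and buf:
            yield "\n".join(buf)
            buf = []
    if buf:  # stream ended without a trailing blank line
        yield "\n".join(buf)

raw = ["data: GraphRAG found", "data: 3 related entities", "", "data: [DONE]", ""]
print(list(parse_sse(raw)))  # ['GraphRAG found\n3 related entities', '[DONE]']
```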

    2.3 Document Processing

    o Multi-Format Support: Upload and process PDF, TXT, DOCX, and CSV files for knowledge extraction
    o Document Staging: Stage documents before indexing to review and validate content
    o Data Retention: Application data is retained within the container runtime and associated workspace for the lifetime of the container and user account. Data is not guaranteed to persist across container termination or redeployment. External durable storage is not included in this version.

    2.4 LLM Integration (paid subscriptions only)

    o LiteLLM Gateway: Unified interface supporting 100+ LLM providers including OpenAI, Anthropic Claude, and AWS Bedrock
    o AWS Bedrock Native: First-class support for Amazon Bedrock models with configurable region and model selection
    o Model Flexibility: Switch between Claude, GPT-4, Llama, Mistral, and other models via environment variables

    2.5 Developer Experience

    o Bearer Token Authentication: Secure API access with SERVICE_AUTH_TOKEN for service-level auth plus user bearer tokens for workspace access
    o Streamlit GUI: Helper interface for visual document management and chat testing

    2.6 AWS Marketplace Ready

    o Metering Hooks: Built-in hooks for AWS Marketplace metering (ingestion, indexing, chat, translation)
    o Container Compliance: Meets AWS Marketplace container requirements for security scanning and deployment

    ================================================================================
    3. TECHNICAL SPECIFICATIONS

    3.1 Container Details

    ECR Image URI: 709825985650.dkr.ecr.us-east-1.amazonaws.com/prodigy-ai-solutions/verbis-graph-engine-free:metering_v20
    Port: 8080/tcp (HTTP)
    Health Check: GET /_stcore/health
    Logging: stdout/stderr (CloudWatch compatible)
    Base Image: Python 3.11 (Debian-based)

    3.2 Resource Requirements

                Minimum    Recommended
    vCPU:       1 vCPU     2 vCPU
    Memory:     2 GiB      4 GiB
    Storage:    1 GB       Depends on workload

    3.3 Supported Services

    o Amazon Elastic Container Service (Amazon ECS) - Managed container orchestration on AWS
    o Amazon ECS Anywhere - Run on your on-premises or hybrid infrastructure while managed by ECS
    o AWS Fargate - Serverless container execution (ECS launch type)
    o Docker-compatible runtimes - Any Docker-compatible environment for evaluation

    3.4 Environment Variables

    SERVICE_AUTH_TOKEN (Required): Service-level auth token (X-Service-Token header)

    ================================================================================
    4. DEPLOYMENT GUIDE

    4.1 Prerequisites

    o AWS Account: Active AWS account with appropriate permissions
    o IAM Permissions: ecr:GetAuthorizationToken, ecr:BatchCheckLayerAvailability, ecr:GetDownloadUrlForLayer, ecr:BatchGetImage
    o Networking: VPC with subnets and a security group allowing inbound traffic on port 8080/tcp

    4.2 Quick Start (Docker)

    For local evaluation, run the container directly with Docker:

    docker run -p 8080:8080 \
      709825985650.dkr.ecr.us-east-1.amazonaws.com/prodigy-ai-solutions/verbis-graph-engine-free:metering_v20

    4.3 Amazon ECS Deployment

    1. Subscribe to the product: Click 'Continue to Subscribe' on the AWS Marketplace listing page
    2. Accept terms: Review and accept the Apache License 2.0 terms
    3. Create ECS cluster: Use existing cluster or create new via ECS console
    4. Create task definition: Define container with ECR image URI, port mapping (8080)
    5. Configure networking: Assign VPC, subnets, and security group with port 8080 open
    6. Launch service: Create ECS service with desired task count
    7. Verify deployment: Access /_stcore/health endpoint to confirm container is running

    4.4 Amazon ECS Anywhere (Hybrid/On-Premises)

    For on-premises or hybrid deployments using ECS Anywhere:

    1. Register external instances: Install ECS agent and SSM agent on your on-premises servers
    2. Create ECS Anywhere cluster: Configure cluster with EXTERNAL capacity providers
    3. Deploy task: Use the same task definition with your registered external instances

    Note: ECS Anywhere may incur additional costs (e.g., $0.01025 per instance-hour).

    ================================================================================
    5. API REFERENCE

    Key API endpoints available in this release:

    Method  Endpoint             Description

    GET     /_stcore/health      Health check endpoint
    POST    /api/auth/register   Register a new user account
    POST    /api/auth/login      Authenticate and obtain a bearer token
    POST    /api/docs/upload     Upload documents to a workspace
    POST    /api/docs/index      Trigger GraphRAG indexing
    POST    /api/chat            Query GraphRAG (streaming SSE)
    GET     /docs                OpenAPI/Swagger documentation
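The endpoints above can be driven from any HTTP client. A request-building sketch using only the Python standard library (payload field names such as `question` are assumptions; the Swagger UI at /docs is the authoritative schema):

```python
import json
import urllib.request

class VerbisClient:
    """Builds authenticated requests against the endpoints listed above."""

    def __init__(self, base_url: str, service_token: str):
        self.base_url = base_url.rstrip("/")
        self.headers = {
            "X-Service-Token": service_token,  # service-level auth
            "Content-Type": "application/json",
        }

    def request(self, method: str, path: str, payload=None):
        data = json.dumps(payload).encode() if payload is not None else None
        return urllib.request.Request(f"{self.base_url}{path}", data=data,
                                      headers=self.headers, method=method)

client = VerbisClient("http://localhost:8080", "svc-token")
req = client.request("POST", "/api/chat", {"question": "What changed in Q3?"})
print(req.full_url, req.get_method())  # http://localhost:8080/api/chat POST
```

Sending the request (e.g., with `urllib.request.urlopen(req)`) requires a running container.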

    ================================================================================
    6. KNOWN LIMITATIONS & SCOPE

    6.1 Usage Limits

    o Single-Tenant: One container instance per deployment; not designed for multi-tenant production workloads

    6.2 Technical Limitations

    o Region Availability: Available in all AWS regions supported by AWS Marketplace container products
    o Data Retention: Application-level data retention during the lifetime of the container and user account. Persistence across container termination or redeployment is not guaranteed.
    o No High Availability: Single container deployment; no built-in clustering or failover

    6.3 Not Included

    o Enterprise SSO/SAML authentication
    o Role-Based Access Control (RBAC)
    o VPC PrivateLink deployment
    o SOC 2 compliance features
    o Dedicated support SLA

    Enterprise features are available in Verbis Graph Professional and Enterprise editions.

    ================================================================================
    7. SECURITY CONSIDERATIONS

    7.1 Authentication

    o Two-Layer Auth: SERVICE_AUTH_TOKEN for service access plus user bearer tokens for workspace operations
    o Token Security: Always use strong, randomly generated tokens

    7.2 Network Security

    o TLS Termination: Use an ALB or API Gateway with HTTPS for production deployments
    o Security Groups: Restrict port access to trusted IPs or the internal VPC only

    7.3 Data Security

    o LLM queries are sent to configured providers

    ================================================================================
    8. TROUBLESHOOTING

    Issue                  Solution

    Image pull failures    Verify ECR pull permissions and that the AWS Marketplace subscription is active
    Container OOM killed   Increase memory allocation to 4 GiB; large documents require more memory for indexing

    ================================================================================
    9. UPCOMING FEATURES (ROADMAP)

    Q1 2026:
    o Native MCP Server for AI agent integration (Claude, GPT)
    o LangChain and LlamaIndex framework integrations
    o Professional tier launch on AWS Marketplace

    Q2 2026:
    o Enterprise SSO/SAML authentication
    o Role-Based Access Control (RBAC)
    o SOC 2 Type II certification (in progress)
    o Amazon Bedrock Agents native integration

    Q3-Q4 2026:
    o VPC PrivateLink deployment option
    o Multi-tenant SaaS deployment

    ================================================================================
    10. SUPPORT & CONTACT

    10.1 Support

    Support is provided as follows:

    o Response Time: Within 1 business day during EU business hours
    o Channels: Email and documentation only
    o Scope: Deployment assistance and basic troubleshooting

    10.2 Contact Information

    Email: support@verbis-chat.com
    Documentation: https://docs.verbisgraph.com
    Website: https://verbisgraph.com

    ================================================================================
    LEGAL NOTICE

    This product is provided 'as is' under the Apache License 2.0. ProdigyAI Solutions makes no warranties regarding fitness for a particular purpose. AWS infrastructure costs are the customer's responsibility.

    © 2025 ProdigyAI Solutions. All rights reserved. Verbis Graph is a trademark of ProdigyAI Solutions.

    Additional details

    Usage instructions

    Quick start (Docker)

    Pull the image (replace with your Marketplace image/tag if required): docker pull <MARKETPLACE_ECR_REPO>:<TAG>

    Run the container: docker run --rm -p 8080:8080 <MARKETPLACE_ECR_REPO>:<TAG>

    Open the UI: http://localhost:8080 

    Health check: curl -f http://localhost:8080/_stcore/health  (expects HTTP 200)

    AWS ECS Fargate (high level)

    Create an ECS task definition using this image.

    Container port: 8080/TCP

    Recommended task size for demo: 1 vCPU / 2 GB RAM

    Assign public IP (for direct access) or place behind an ALB.

    If using an ALB, set target group health check path to: /_stcore/health.
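For reference, the target-group settings that matter here, expressed with the parameter names used by `aws elbv2 create-target-group` (the interval and threshold values are illustrative defaults, not requirements from this guide):

```python
# Target group settings for fronting the container with an ALB.
target_group = {
    "Protocol": "HTTP",
    "Port": 8080,                          # container port from above
    "TargetType": "ip",                    # typical for Fargate tasks (assumption)
    "HealthCheckPath": "/_stcore/health",  # path given in this guide
    "HealthCheckIntervalSeconds": 30,      # illustrative
    "HealthyThresholdCount": 2,            # illustrative
}
print(target_group["HealthCheckPath"])  # /_stcore/health
```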

    Configuration

    No mandatory environment variables required.

    Default port is 8080. Logs go to stdout/stderr (view via docker logs or CloudWatch Logs on ECS).

    The container runs as a non-root user.

    Troubleshooting

    If the UI is not reachable, confirm the port mapping/security group allows inbound TCP 8080.

    If health checks fail on first start, allow a warm-up period (start-period ~60s recommended on ECS).
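On ECS, the warm-up recommendation maps to the `startPeriod` field of the task definition's container healthCheck block. A sketch (values other than `startPeriod` and the health-check path are illustrative):

```python
# Container healthCheck block for the ECS task definition.
health_check = {
    "command": ["CMD-SHELL",
                "curl -f http://localhost:8080/_stcore/health || exit 1"],
    "interval": 30,     # seconds between checks (illustrative)
    "timeout": 5,       # illustrative
    "retries": 3,       # illustrative
    "startPeriod": 60,  # grace period before failures count (~60 s warm-up)
}
print(health_check["startPeriod"])  # 60
```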

    Teardown / cost control (ECS)

    Scale ECS service desired tasks to 0 and delete the service to stop charges.

    Remove associated load balancer/resources if created.


    Resources

    Vendor resources

    Support

    Vendor support

    Support for the Verbis Graph Engine is provided via email and self-service resources. Email Support: support@verbisgraph.com  Support Hours: 09:00 - 21:00 (EU time), Monday - Saturday Self-Service Support: 24/7 AI-powered chatbot available at https://verbisgraph.com  Support is intended for general questions, onboarding guidance, and issue reporting.

    AWS infrastructure support

    AWS Support is a one-on-one, fast-response support channel that is staffed 24x7x365 with experienced and technical support engineers. The service helps customers of all sizes and technical abilities to successfully utilize the products and features provided by Amazon Web Services.

    Similar products

    Customer reviews

    Ratings and reviews

    0 ratings
    0 reviews
    No customer reviews yet
    Be the first to review this product. We've partnered with PeerSpot to gather customer feedback. You can share your experience by writing or recording a review, or by scheduling a call with a PeerSpot analyst.