Overview
The free tier of Verbis Graph Engine is an enhanced GraphRAG (Graph-enhanced Retrieval Augmented Generation) platform that delivers 35% more accurate AI responses compared to traditional vector-only RAG systems. By combining knowledge graphs with vector search, Verbis Graph captures complex relationships between entities, reduces hallucinations, and provides fully explainable answers with 100% citation coverage.
KEY FEATURES
- GraphRAG Technology: Hybrid retrieval combining vector similarity search with knowledge graph traversal for superior accuracy and context-aware responses
- Framework Integrations: Native support for LangChain, LlamaIndex, AutoGen, CrewAI, and Amazon Bedrock Agents
- High-performance, low-latency query engine optimized for production workloads
WHY GRAPHRAG?
Traditional RAG systems rely solely on vector embeddings, which often miss critical relationships between concepts. When you ask "Which marketing campaigns were affected by the supply chain disruption in Q3?", vector search finds similar documents but cannot connect the dots. GraphRAG traverses entity relationships to deliver complete, accurate answers - reducing hallucinations by modeling how information actually connects.
USE CASES
- AI-Powered Knowledge Bases: Build intelligent Q&A systems over enterprise documentation, policies, and procedures
- Customer Support Automation: Deploy accurate chatbots that provide cited answers from your knowledge base
- Research & Analysis: Enable researchers to query complex datasets with multi-hop reasoning
- Compliance & Legal: Provide auditable, explainable AI responses for regulated industries
- High-Accuracy Local Knowledge Retrieval: Ideal for scenarios requiring precise, context-aware retrieval from proprietary or locally hosted knowledge bases.
GETTING STARTED
Start with our free tier; no sales call required. Verbis Graph Engine offers self-service signup with 5-minute integration via our Python and JavaScript SDKs. Upload your documents, and our platform automatically builds knowledge graphs and vector embeddings. Query via REST API or connect your favorite AI framework.
FREE TIER
Includes predefined limits on request volume, data size, and throughput to support evaluation, prototyping, and proof-of-concept workloads.
INTEGRATIONS
Amazon Bedrock, LangChain, LlamaIndex, AutoGen, CrewAI, OpenAI, Anthropic Claude, Amazon Neptune. Built by Prodigy AI Solutions. Enterprise support and custom deployments available.
Highlights
- Reduce AI Hallucinations with GraphRAG: Traditional vector-only RAG can miss critical relationships between concepts. GraphRAG builds knowledge graphs from your documents to enable multi-hop reasoning and more context-aware answers, delivering grounded, explainable AI responses with clear source attribution.
- Hybrid Vector and Graph Retrieval Engine: Combine semantic vector search with knowledge graph traversal to retrieve information that spans multiple documents, entities, and relationships - ideal for complex queries that require reasoning beyond simple similarity matching.
- Designed for Modern AI Frameworks: Native integration with LangChain, LlamaIndex, AutoGen, CrewAI, and Amazon Bedrock Agents enables rapid adoption and seamless embedding into existing AI pipelines and applications.
Details
Pricing
Vendor refund policy
This product is offered as a free edition at no cost. As no fees are charged for usage, refunds are not applicable. If you have questions about access, usage, or account-related issues, please contact our support team at support@verbisgraph.com.
Delivery details
Verbis Graph Engine – Free Demo (Container, Streamlit UI)
- Amazon ECS
- Amazon ECS Anywhere
Container image
Containers are lightweight, portable execution environments that wrap server application software in a filesystem that includes everything it needs to run. Container applications run on supported container runtimes and orchestration services, such as Amazon Elastic Container Service (Amazon ECS) or Amazon Elastic Kubernetes Service (Amazon EKS). Both eliminate the need for you to install and operate your own container orchestration software by managing and scheduling containers on a scalable cluster of virtual machines.
Version release notes
VERBIS GRAPH ENGINE - GraphRAG Knowledge Retrieval Engine AWS Marketplace Container Edition - Free Demo
VERSION: 1.0.0 (Initial Release)
RELEASE DATE: December 2025
PRODUCT SKU: VG-GRAPHRAG-FREE-DEMO-V1
LICENSE: Apache License 2.0
PRICING: FREE (AWS infrastructure costs apply)
================================================================================ 1. EXECUTIVE SUMMARY
We are pleased to announce the initial release of Verbis Graph Engine Demo on AWS Marketplace. This free container-based product brings GraphRAG (Graph-enhanced Retrieval Augmented Generation) technology to developers and enterprises evaluating next-generation AI knowledge retrieval solutions.
Unlike traditional vector-only RAG systems, Verbis Graph combines semantic vector search with knowledge graph traversal, enabling retrieval of information that spans multiple documents, entities, and relationships. This hybrid approach is ideal for complex queries requiring reasoning beyond simple similarity matching.
================================================================================ 2. WHAT'S NEW IN VERSION 1.0.0
This initial release establishes the foundation for GraphRAG-powered knowledge retrieval.
2.1 Core GraphRAG Engine
o Proprietary Knowledge Graph Retrieval: Combines dense vector embeddings with structured knowledge graph traversal for better accuracy compared to vector-only RAG systems
o Multi-Document Reasoning: Retrieves and synthesizes information across document boundaries, capturing entity relationships that span your entire knowledge base
o Workspace-Scoped Isolation: Each user or project gets an isolated workspace with independent document staging, GraphRAG indexing, and chat sessions
o Async Indexing with Locks: Per-workspace locks prevent indexing conflicts, ensuring data integrity during concurrent operations
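To picture the per-workspace locking behavior, here is a small asyncio illustration. It is a conceptual model only, not the product's implementation; the WORKSPACE_LOCKS mapping and run_indexing coroutine are hypothetical names.

import asyncio
from collections import defaultdict

# Hypothetical illustration: one lock per workspace id keeps concurrent indexing
# requests for the same workspace from overlapping, while different workspaces
# index in parallel.
WORKSPACE_LOCKS: dict[str, asyncio.Lock] = defaultdict(asyncio.Lock)

async def run_indexing(workspace_id: str, documents: list[str]) -> None:
    async with WORKSPACE_LOCKS[workspace_id]:
        # Only one indexing job per workspace runs at a time.
        for doc in documents:
            await asyncio.sleep(0)  # placeholder for embedding + graph extraction
            print(f"[{workspace_id}] indexed {doc}")

async def main() -> None:
    await asyncio.gather(
        run_indexing("ws-alpha", ["a.pdf", "b.docx"]),
        run_indexing("ws-beta", ["notes.txt"]),
        run_indexing("ws-alpha", ["c.csv"]),  # waits for the first ws-alpha job
    )

asyncio.run(main())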
2.2 FastAPI Backend
o RESTful API: Integrated FastAPI backend with OpenAPI (Swagger) documentation available at /docs, accessible via a dedicated tab in the Streamlit web interface
o Streaming Chat: Server-Sent Events (SSE) for real-time streaming responses from the GraphRAG retriever (a client sketch follows this list)
o CORS Enabled: Cross-Origin Resource Sharing configured for browser-based clients and single-page applications
o Health Check Endpoint: GET /_stcore/health for container orchestration and load balancer integration
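As a sketch of how a client might consume the streaming chat endpoint, the snippet below reads Server-Sent Events from POST /api/chat with the requests library. The payload fields (workspace_id, message) and the exact event format are assumptions; the authoritative schema is the OpenAPI page at /docs.

import requests

BASE_URL = "http://localhost:8080"
# X-Service-Token is documented for service-level auth; the user bearer token
# comes from /api/auth/login (see the API reference in section 5).
HEADERS = {
    "X-Service-Token": "<SERVICE_AUTH_TOKEN>",
    "Authorization": "Bearer <USER_TOKEN>",
    "Accept": "text/event-stream",
}

# Hypothetical request body; field names are illustrative only.
payload = {"workspace_id": "ws-demo", "message": "Which campaigns were affected in Q3?"}

with requests.post(f"{BASE_URL}/api/chat", json=payload, headers=HEADERS, stream=True) as resp:
    resp.raise_for_status()
    for line in resp.iter_lines(decode_unicode=True):
        # SSE frames arrive as "data: ..." lines separated by blank lines.
        if line and line.startswith("data:"):
            print(line[len("data:"):].strip())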
2.3 Document Processing
o Multi-Format Support: Upload and process PDF, TXT, DOCX, and CSV files for knowledge extraction
o Document Staging: Stage documents before indexing to review and validate content (see the sketch after this list)
o Data Retention: Application data is retained within the container runtime and associated workspace for the lifetime of the container and user account. Data is not guaranteed to persist across container termination or redeployment. External durable storage is not included in the free demo.
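A minimal sketch of the stage-then-index flow over the REST API, using the /api/docs/upload and /api/docs/index endpoints listed in the API reference (section 5). The multipart field name and response shapes are assumptions; consult /docs for the exact contract.

import requests

BASE_URL = "http://localhost:8080"
HEADERS = {
    "X-Service-Token": "<SERVICE_AUTH_TOKEN>",   # service-level auth (documented)
    "Authorization": "Bearer <USER_TOKEN>",      # user/workspace auth (documented)
}

# 1. Stage a document in the workspace (multipart upload; field name assumed).
with open("handbook.pdf", "rb") as f:
    upload = requests.post(
        f"{BASE_URL}/api/docs/upload",
        files={"file": ("handbook.pdf", f, "application/pdf")},
        headers=HEADERS,
    )
upload.raise_for_status()
print("staged:", upload.json())

# 2. Trigger GraphRAG indexing for the staged documents.
index = requests.post(f"{BASE_URL}/api/docs/index", headers=HEADERS)
index.raise_for_status()
print("indexing started:", index.json())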
2.4 LLM Integration (paid subscriptions only)
o LiteLLM Gateway: Unified interface supporting 100+ LLM providers including OpenAI, Anthropic Claude, and AWS Bedrock
o AWS Bedrock Native: First-class support for Amazon Bedrock models with configurable region and model selection
o Model Flexibility: Switch between Claude, GPT-4, Llama, Mistral, and other models via environment variables
2.5 Developer Experience
o Bearer Token Authentication: Secure API access with SERVICE_AUTH_TOKEN for service-level auth plus user bearer tokens for workspace access
o Streamlit GUI: Helper interface for visual document management and chat testing
2.6 AWS Marketplace Ready
o Metering Hooks: Built-in hooks for AWS Marketplace metering (ingestion, indexing, chat, translation) - disabled in the free demo
o Container Compliance: Meets AWS Marketplace container requirements for security scanning and deployment
================================================================================ 3. TECHNICAL SPECIFICATIONS
3.1 Container Details
ECR Image URI: 709825985650.dkr.ecr.us-east-1.amazonaws.com/prodigy-ai-solutions/verbis-graph-engine-free:verbis-demo
Port: 8080/tcp (HTTP)
Health Check: GET /_stcore/health
Logging: stdout/stderr (CloudWatch compatible)
Base Image: Python 3.11 (Debian-based)
3.2 Resource Requirements
Resource   Minimum   Recommended
vCPU       1 vCPU    2 vCPU
Memory     2 GiB     4 GiB
Storage    1 GB      Depends on workload
3.3 Supported Services
o Amazon Elastic Container Service (Amazon ECS) - Managed container orchestration on AWS
o Amazon ECS Anywhere - Run on your on-premises or hybrid infrastructure while managed by ECS
o AWS Fargate - Serverless container execution (ECS launch type)
o Docker-compatible runtimes - Any Docker-compatible environment for evaluation
3.4 Environment Variables
SERVICE_AUTH_TOKEN (Required): Service-level auth token (X-Service-Token header)
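The value supplied at deploy time (for example, docker run -e SERVICE_AUTH_TOKEN=<random-secret> ...) is the same secret that API clients send back in the X-Service-Token header. Below is a minimal Python sketch of that pairing; it assumes the requests library and a locally running container, and the health probe is shown without auth headers since it is intended for orchestration checks.

import os
import requests

BASE_URL = "http://localhost:8080"  # assumes a locally running container

# The same secret that was passed to the container as SERVICE_AUTH_TOKEN.
service_token = os.environ["SERVICE_AUTH_TOKEN"]

def service_headers(user_token: str | None = None) -> dict:
    # Service-level token always; add a user bearer token for workspace-scoped calls.
    headers = {"X-Service-Token": service_token}
    if user_token:
        headers["Authorization"] = f"Bearer {user_token}"
    return headers

# Orchestration-style liveness probe.
print(requests.get(f"{BASE_URL}/_stcore/health").status_code)  # expect 200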
================================================================================ 4. DEPLOYMENT GUIDE
4.1 Prerequisites
o AWS Account: Active AWS account with appropriate permissions
o IAM Permissions: ecr:GetAuthorizationToken, ecr:BatchCheckLayerAvailability, ecr:GetDownloadUrlForLayer, ecr:BatchGetImage (see the policy sketch after this list)
o Networking: VPC with subnets and a security group allowing inbound traffic on port 8080/tcp
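For reference, the ECR pull permissions above can be packaged as a customer-managed IAM policy. The sketch below uses boto3; the policy name is a placeholder and the wildcard resource should be narrowed to the Marketplace repository ARN where possible.

import json
import boto3

# Minimal customer-managed policy covering the ECR pull permissions listed above.
policy_document = {
    "Version": "2012-10-17",
    "Statement": [
        {
            "Effect": "Allow",
            "Action": [
                "ecr:GetAuthorizationToken",
                "ecr:BatchCheckLayerAvailability",
                "ecr:GetDownloadUrlForLayer",
                "ecr:BatchGetImage",
            ],
            "Resource": "*",  # narrow to the repository ARN where possible
        }
    ],
}

iam = boto3.client("iam")
iam.create_policy(
    PolicyName="VerbisGraphEcrPull",  # placeholder name
    PolicyDocument=json.dumps(policy_document),
)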
4.2 Quick Start (Docker)
For local evaluation, run the container directly with Docker:
docker run -p 8080:8080 \
  709825985650.dkr.ecr.us-east-1.amazonaws.com/prodigy-ai-solutions/verbis-graph-engine-free:verbis-demo
4.3 Amazon ECS Deployment
- Subscribe to the product: Click 'Continue to Subscribe' on the AWS Marketplace listing page
- Accept terms: Review and accept the Apache License 2.0 terms
- Create ECS cluster: Use existing cluster or create new via ECS console
- Create task definition: Define the container with the ECR image URI and port mapping 8080 (see the example sketch after this list)
- Configure networking: Assign VPC, subnets, and security group with port 8080 open
- Launch service: Create ECS service with desired task count
- Verify deployment: Access /_stcore/health endpoint to confirm container is running
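As an illustration of steps 4 through 6, the following boto3 sketch registers a Fargate task definition for the demo image. The execution role ARN and SERVICE_AUTH_TOKEN value are placeholders, and the container health check assumes curl is present in the image; adapt to your account before use.

import boto3

ecs = boto3.client("ecs", region_name="us-east-1")

# Illustrative Fargate task definition for the demo container.
ecs.register_task_definition(
    family="verbis-graph-demo",
    requiresCompatibilities=["FARGATE"],
    networkMode="awsvpc",
    cpu="1024",      # 1 vCPU (minimum); use "2048" for the recommended 2 vCPU
    memory="2048",   # 2 GiB (minimum); use "4096" for the recommended 4 GiB
    executionRoleArn="arn:aws:iam::<ACCOUNT_ID>:role/ecsTaskExecutionRole",  # placeholder
    containerDefinitions=[
        {
            "name": "verbis-graph-engine",
            "image": "709825985650.dkr.ecr.us-east-1.amazonaws.com/prodigy-ai-solutions/verbis-graph-engine-free:verbis-demo",
            "portMappings": [{"containerPort": 8080, "protocol": "tcp"}],
            "essential": True,
            "healthCheck": {
                # Assumes curl is available in the image.
                "command": ["CMD-SHELL", "curl -f http://localhost:8080/_stcore/health || exit 1"],
                "interval": 30,
                "timeout": 5,
                "retries": 3,
                "startPeriod": 60,  # warm-up period recommended in Troubleshooting
            },
            "environment": [
                {"name": "SERVICE_AUTH_TOKEN", "value": "<RANDOM_SECRET>"}  # see section 3.4
            ],
        }
    ],
)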
4.4 Amazon ECS Anywhere (Hybrid/On-Premises)
For on-premises or hybrid deployments using ECS Anywhere:
- Register external instances: Install ECS agent and SSM agent on your on-premises servers
- Create ECS Anywhere cluster: Configure cluster with EXTERNAL capacity providers
- Deploy task: Use the same task definition with your registered external instances
Note: ECS Anywhere may incur additional costs (e.g., $0.01025 per instance-hour).
================================================================================ 5. API REFERENCE
Key API endpoints available in this release:
Method  Endpoint             Description
GET     /_stcore/health      Health check endpoint
POST    /api/auth/register   Register a new user account
POST    /api/auth/login      Authenticate and obtain a bearer token
POST    /api/docs/upload     Upload documents to workspace
POST    /api/docs/index      Trigger GraphRAG indexing
POST    /api/chat            Query GraphRAG (streaming SSE)
GET     /docs                OpenAPI/Swagger documentation
Note: Some authentication and user management endpoints are provided for demonstration purposes only in the free demo.
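For illustration, the demo auth endpoints above can be exercised as follows. This is a hedged sketch: the request and response field names (email, password, access_token) are assumptions, so verify them against the schema published at /docs.

import requests

BASE_URL = "http://localhost:8080"
SERVICE_HEADERS = {"X-Service-Token": "<SERVICE_AUTH_TOKEN>"}

# Field names below are illustrative; the actual schema is published at /docs.
credentials = {"email": "demo@example.com", "password": "change-me"}

# Register a demo user (demo-only endpoint per the note above).
requests.post(f"{BASE_URL}/api/auth/register", json=credentials, headers=SERVICE_HEADERS).raise_for_status()

# Log in and extract the bearer token for workspace-scoped calls.
login = requests.post(f"{BASE_URL}/api/auth/login", json=credentials, headers=SERVICE_HEADERS)
login.raise_for_status()
token = login.json().get("access_token", "")
print("bearer token:", token[:12], "...")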
================================================================================ 6. KNOWN LIMITATIONS & DEMO SCOPE
IMPORTANT: This is a free demonstration version intended for evaluation purposes only.
6.1 Usage Limits
o Single-Tenant: One container instance per deployment; not designed for multi-tenant production workloads
o No SLA: Best-effort support only; no uptime guarantees or service credits
o Demo Data Only: Not suitable for PII, regulated data (HIPAA, GDPR), or production workloads
6.2 Technical Limitations
o Region Availability: Available in all AWS Regions supported by AWS Marketplace container products
o Data Retention: Application-level data retention during the lifetime of the container and user account; persistence across container termination or redeployment is not guaranteed
o No High Availability: Single container deployment; no built-in clustering or failover
o Metering Disabled: AWS Marketplace metering hooks are present but disabled in the free demo
6.3 Not Included in Demo
o Enterprise SSO/SAML authentication
o Role-Based Access Control (RBAC)
o VPC PrivateLink deployment
o SOC 2 compliance features
o Dedicated support SLA
Enterprise features are available in Verbis Graph Professional and Enterprise editions.
================================================================================ 7. SECURITY CONSIDERATIONS
7.1 Authentication
o Two-Layer Auth: SERVICE_AUTH_TOKEN for service-level access plus user bearer tokens for workspace operations
o Token Security: Always use strong, randomly generated tokens
7.2 Network Security
o TLS Termination: Use an ALB or API Gateway with HTTPS for production deployments
o Security Groups: Restrict port access to trusted IPs or the internal VPC only
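For example, inbound access on port 8080 can be limited to a trusted address range with a single security group rule. A boto3 sketch follows; the security group ID and CIDR are placeholders.

import boto3

ec2 = boto3.client("ec2")

# Allow inbound TCP 8080 only from a trusted corporate range (placeholder values).
ec2.authorize_security_group_ingress(
    GroupId="sg-0123456789abcdef0",
    IpPermissions=[
        {
            "IpProtocol": "tcp",
            "FromPort": 8080,
            "ToPort": 8080,
            "IpRanges": [{"CidrIp": "203.0.113.0/24", "Description": "trusted office range"}],
        }
    ],
)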
7.3 Data Security
o External Providers: LLM queries are sent to the configured LLM providers
================================================================================ 8. TROUBLESHOOTING
Issue                  Solution
Image pull failures    Verify ECR pull permissions and that the AWS Marketplace subscription is active
Container OOM killed   Increase the memory allocation to 4 GiB; large documents require more memory for indexing
================================================================================ 9. UPCOMING FEATURES (ROADMAP)
Q1 2026:
o Native MCP Server for AI Agent integration (Claude, GPT)
o LangChain and LlamaIndex framework integrations
o Professional tier launch on AWS Marketplace
Q2 2026:
o Enterprise SSO/SAML authentication
o Role-Based Access Control (RBAC)
o SOC 2 Type II certification (in progress)
o Amazon Bedrock Agents native integration
Q3-Q4 2026:
o VPC PrivateLink deployment option
o Multi-tenant SaaS deployment
================================================================================ 10. SUPPORT & CONTACT
10.1 Free Demo Support
For the free demo version, support is provided on a best-effort basis:
o Response Time: Within 1 business day during EU business hours
o Channels: Email and documentation only
o Scope: Deployment assistance and basic troubleshooting
10.2 Contact Information
Email: support@verbis-chat.com
Documentation: https://docs.verbisgraph.com
Website: https://verbisgraph.com
================================================================================ LEGAL NOTICE
This product is provided 'as is' under the Apache License 2.0. ProdigyAI Solutions makes no warranties regarding fitness for a particular purpose. AWS infrastructure costs are the customer's responsibility. This demo is intended for evaluation purposes only and should not be used with sensitive, regulated, or production data.
© 2025 ProdigyAI Solutions. All rights reserved. Verbis Graph is a trademark of ProdigyAI Solutions.
Additional details
Usage instructions
Quick start (Docker)
Pull the image (replace with your Marketplace image/tag if required): docker pull <MARKETPLACE_ECR_REPO>:<TAG>
Run the container: docker run --rm -p 8080:8080 <MARKETPLACE_ECR_REPO>:<TAG>
Open the UI: http://localhost:8080
Health check: curl -f http://localhost:8080/_stcore/health (expects HTTP 200)
AWS ECS Fargate (high level)
Create an ECS task definition using this image.
Container port: 8080/TCP
Recommended task size for demo: 1 vCPU / 2 GB RAM
Assign public IP (for direct access) or place behind an ALB.
If using an ALB, set target group health check path to: /_stcore/health.
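For the ALB case, the target group's health check path should point at /_stcore/health. A boto3 sketch is shown below; the VPC ID and target group name are placeholders.

import boto3

elbv2 = boto3.client("elbv2")

# Target group for the demo container behind an ALB; the health check path
# matches the container's /_stcore/health endpoint.
elbv2.create_target_group(
    Name="verbis-graph-demo-tg",          # placeholder name
    Protocol="HTTP",
    Port=8080,
    VpcId="vpc-0123456789abcdef0",        # placeholder VPC ID
    TargetType="ip",                      # Fargate tasks register by IP
    HealthCheckProtocol="HTTP",
    HealthCheckPath="/_stcore/health",
    HealthCheckIntervalSeconds=30,
)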
Configuration
This demo requires no mandatory environment variables.
Default port is 8080. Logs go to stdout/stderr (view via docker logs or CloudWatch Logs on ECS).
The container runs as a non-root user.
Troubleshooting
If the UI is not reachable, confirm the port mapping/security group allows inbound TCP 8080.
If health checks fail on first start, allow a warm-up period (start-period ~60s recommended on ECS).
Teardown / cost control (ECS)
Scale ECS service desired tasks to 0 and delete the service to stop charges.
Remove associated load balancer/resources if created.
Support
Vendor support
Support for the Verbis Graph Engine Free Edition is provided via email and self-service resources.
Email Support: support@verbis-chat.com
Support Hours: 09:00 - 21:00 (EU time), Monday - Saturday
Self-Service Support: 24/7 AI-powered chatbot available at https://verbisgraph.com
Support is intended for general questions, onboarding guidance, and issue reporting related to the Free Edition. Response times are best-effort and no service-level agreements (SLAs) are provided for the free offering.
AWS infrastructure support
AWS Support is a one-on-one, fast-response support channel that is staffed 24x7x365 with experienced and technical support engineers. The service helps customers of all sizes and technical abilities to successfully utilize the products and features provided by Amazon Web Services.