
    Spice.ai Enterprise (BYOL)

    Sold by: Spice AI 
    Deployed on AWS
    Quick Launch
    Spice.ai Enterprise is a portable (<150MB) compute engine built in Rust for data-intensive and intelligent applications. Deployable as a container on AWS ECS, EKS, or hybrid cloud+edge, it includes Enterprise licensing, support, and SLA.

    Overview

    Spice.ai Enterprise is a portable (<150MB) compute engine built in Rust for data-intensive and intelligent applications. It accelerates SQL queries across databases, data warehouses, and data lakes using Apache Arrow, DataFusion, DuckDB, or SQLite. Integrated and co-deployed with data-intensive applications, Spice materializes and accelerates data from object storage, ensuring sub-second query performance and resilient AI applications. Deployable as a container on AWS ECS, EKS, or hybrid cloud & edge, it includes enterprise licensing, support, and SLAs.

    Note: Spice.ai Enterprise requires an existing commercial license. For details, please contact sales@spice.ai.

    Highlights

    • Unified data query and AI engine accelerating SQL queries across databases, data warehouses, and data lakes. Delivers sub-second query performance while grounding mission-critical AI applications with real-time context to minimize errors and hallucinations.
    • Advanced AI and retrieval tools, including vector and hybrid search, text-to-SQL, and LLM memory, enable data-grounded AI applications, backed by more than 25 data connectors for federated queries and real-time applications.
    • Deployable as a container on AWS ECS, EKS, or on-premises, with dedicated support and SLAs for scalable, secure integration into any architecture.

    Details

    Delivery method: Container image
    Supported services: Amazon ECS, Amazon EKS, Amazon ECS Anywhere
    Delivery options: Container Deployment, Helm Deployment
    Latest version: 1.6.0-enterprise
    Operating system: Linux
    Deployed on AWS


    Features and programs

    Financing for AWS Marketplace purchases

    AWS Marketplace now accepts line of credit payments through the PNC Vendor Finance program. This program is available to select AWS customers in the US, excluding NV, NC, ND, TN, & VT.

    Quick Launch

    Leverage AWS CloudFormation templates to reduce the time and resources required to configure, deploy, and launch your software.

    Pricing

    Spice.ai Enterprise (BYOL)

    Pricing and entitlements for this product are managed through an external billing relationship between you and the vendor. You activate the product by supplying a license purchased outside of AWS Marketplace, while AWS provides the infrastructure required to launch the product. AWS Subscriptions have no end date and may be canceled any time. However, the cancellation won't affect the status of the external license.
    Additional AWS infrastructure costs may apply. Use the AWS Pricing Calculator to estimate your infrastructure costs.

    Vendor refund policy

    Refunds for Spice.ai Enterprise container subscriptions are not available after activation, as usage begins immediately upon deployment. Ensure compatibility with AWS ECS, EKS, or on-premises setups before purchase. For billing inquiries, contact AWS Marketplace support or Spice AI directly at support@spice.ai.


    Legal

    Vendor terms and conditions

    Upon subscribing to this product, you must acknowledge and agree to the terms and conditions outlined in the vendor's End User License Agreement (EULA).

    Content disclaimer

    Vendors are responsible for their product descriptions and other product content. AWS does not warrant that vendors' product descriptions or other product content are accurate, complete, reliable, current, or error-free.

    Usage information


    Delivery details

    Container Deployment

    Supported services:
    • Amazon ECS
    • Amazon EKS
    • Amazon ECS Anywhere
    Container image

    Containers are lightweight, portable execution environments that wrap server application software in a filesystem that includes everything it needs to run. Container applications run on supported container runtimes and orchestration services, such as Amazon Elastic Container Service (Amazon ECS) or Amazon Elastic Kubernetes Service (Amazon EKS). Both eliminate the need for you to install and operate your own container orchestration software by managing and scheduling containers on a scalable cluster of virtual machines.

    Version release notes

    Spice 1.6.0 includes the following improvements:

    • Upgrades DataFusion to v48, reducing the expressions memory footprint by ~50% for faster planning and lower memory usage, eliminating unnecessary projections in queries, optimizing string functions like ascii and character_length for up to 3x speedup, and accelerating unbounded aggregate window functions by 5.6x.
    • Adds Kafka and MongoDB connectors for real-time streaming and NoSQL data acceleration.
    • Supports the OpenAI Responses API for advanced model interactions, including OpenAI-hosted tools like web_search and code_interpreter.
    • Improves the OpenAI Embeddings Connector with usage tier configuration for higher throughput via increased concurrent requests.
    • Introduces Model2Vec embeddings for ultra-low-latency encoding.
    • Improves the Amazon S3 Vectors engine to support multi-column primary keys.

    What's New in v1.6.0-enterprise

    DataFusion v48 Highlights

    Spice.ai is built on the DataFusion query engine. The v48 release brings:

    Performance & Size Improvements: The expressions memory footprint was reduced by ~50%, resulting in faster planning and lower memory usage, with planning times improved by 10-20%. Unnecessary projections in queries were eliminated. The string functions ascii and character_length were optimized, with character_length achieving up to a 3x speedup. Queries with unbounded aggregate window functions are up to 5.6x faster by avoiding unnecessary computation for constant results across partitions. The Expr struct size was reduced from 272 to 144 bytes.

    New Features & Enhancements: Support was added for ORDER BY ALL for easy ordering of all columns in a query.

    See the Apache DataFusion 48.0.0 Blog for details.

    Runtime Highlights

    Amazon S3 Vectors Multi-Column Primary Keys: The Amazon S3 Vectors  engine now supports datasets with multi-column primary keys. This enables vector indexes for datasets where more than one column forms the primary key, such as those splitting documents into chunks for retrieval contexts. For multi-column keys, Spice serializes the keys using arrow-json format, storing them as single string keys in the vector index.

    Model2Vec Embeddings: Spice now supports model2vec static embeddings with a new model2vec embeddings provider. These static sentence-transformer models are up to 500x faster and 15x smaller, enabling scenarios that require low-latency, high-throughput encoding.

    embeddings:
      - from: model2vec:minishlab/potion-base-8M # HuggingFace model
        name: potion
      - from: model2vec:path/to/my/local/model # local model
        name: local

    Learn more in the Model2Vec Embeddings documentation.

    Kafka Data Connector: Use from: kafka:<topic> to ingest data directly from Kafka topics for integration with existing Kafka-based event streaming infrastructure, providing real-time data acceleration and query without additional middleware.

    Example Spicepod.yml:

    - from: kafka:orders_events
      name: orders
      acceleration:
        enabled: true
        refresh_mode: append
      params:
        kafka_bootstrap_servers: server:9092

    Learn more in the Kafka Data Connector documentation .

    MongoDB Data Connector: Use from: mongodb:<dataset> to access and accelerate data stored in MongoDB, deployed on-premises or in the cloud.

    Example spicepod.yml:

    datasets:
      - from: mongodb:my_dataset
        name: my_dataset
        params:
          mongodb_host: localhost
          mongodb_db: my_database
          mongodb_user: my_user
          mongodb_pass: password

    Learn more in the MongoDB Data Connector documentation .

    OpenAI Responses API Support: The OpenAI Responses API (/v1/responses) is now supported, which is OpenAI's most advanced interface for generating model responses.

    You can now make requests to any responses compatible model using the new /v1/responses endpoint.

    Example curl request:

    curl http://localhost:8090/v1/responses \
      -H "Content-Type: application/json" \
      -d '{
        "model": "gpt-4.1",
        "input": "Tell me a three sentence bedtime story about Spice AI."
      }'

    To use responses in spice chat, use the --responses flag.

    Example:

    spice chat --responses # Use the `/v1/responses` endpoint for all completions instead of `/v1/chat/completions`

    Use OpenAI-hosted tools supported by OpenAI's Responses API by specifying the openai_responses_tools parameter:

    Example spicepod.yml:

    models:
      - name: test
        from: openai:gpt-4.1
        params:
          openai_api_key: ${ secrets:SPICE_OPENAI_API_KEY }
          tools: sql, list_datasets
          openai_responses_tools: web_search, code_interpreter # 'code_interpreter' or 'web_search'

    These OpenAI-specific tools are only available from the /v1/responses endpoint. Any other tools specified via the tools parameter are available from both the /v1/chat/completions and /v1/responses endpoints.

    Learn more in the OpenAI Model Provider documentation .

    OpenAI Embeddings & Models Connectors Usage Tier: The OpenAI Embeddings and Models Connectors now support specifying your account's usage tier for embeddings and model requests, improving the performance of generating text embeddings and calling models during dataset load and search by increasing the number of concurrent requests.

    Example spicepod.yml:

    embeddings:
      - from: openai:text-embedding-3-small
        name: openai_embed
        params:
          openai_usage_tier: tier1

    When the configured usage tier matches your OpenAI account's tier, the Embeddings and Models Connectors increase the maximum number of concurrent requests to match that tier.

    Learn more in the OpenAI Model Provider documentation .

    Breaking Changes

    No breaking changes.

    Cookbook Updates

    The Spice Cookbook includes 77 recipes to help you get started with Spice quickly and easily.

    Upgrading

    To upgrade to v1.6.0, use one of the following methods:

    CLI:

    spice upgrade

    Homebrew:

    brew upgrade spiceai/spiceai/spice

    Docker:

    Pull the spiceai/spiceai:1.6.0 image:

    docker pull spiceai/spiceai:1.6.0

    For available tags, see DockerHub .

    Helm:

    helm repo update
    helm upgrade spiceai spiceai/spiceai

    Additional details

    Usage instructions

    Prerequisites

    Ensure the following tools and resources are ready before starting:

    • Docker: Install from https://docs.docker.com/get-docker/ .
    • AWS CLI: Install from https://docs.aws.amazon.com/cli/latest/userguide/getting-started-install.html .
    • AWS ECR Access: Authenticate to the AWS Marketplace registry:
      aws ecr get-login-password --region us-east-1 | docker login --username AWS --password-stdin 709825985650.dkr.ecr.us-east-1.amazonaws.com
    • Spicepod Configuration: Prepare a spicepod.yaml file in your working directory. A spicepod is a YAML manifest that configures which components (e.g., datasets) are loaded. Refer to https://spiceai.org/docs/getting-started/spicepods for details.
    • AWS ECS Prerequisites (for ECS deployment):
      - An ECS cluster (Fargate or EC2) configured in your AWS account.
      - An IAM role for ECS task execution (e.g., ecsTaskExecutionRole) with permissions for ECR, CloudWatch, and other required services.
      - A VPC with subnets and a security group allowing inbound traffic on ports 8090 (HTTP) and 50051 (Flight).
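    To illustrate the Spicepod prerequisite, a minimal spicepod.yaml for accelerating a dataset from object storage might look like the following sketch. The pod name, bucket path, and dataset name are placeholders, not values from this listing:

    ```yaml
    version: v1beta1
    kind: Spicepod
    name: my-app
    datasets:
      # Hypothetical S3-backed dataset; replace with your own source
      - from: s3://my-bucket/path/to/data/
        name: my_dataset
        acceleration:
          enabled: true   # materialize and accelerate locally
    ```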

    Running the Container

    1. Ensure the spicepod.yaml is in the current directory (e.g., ./spicepod.yaml).
    2. Launch the container, mounting the current directory to /app and exposing HTTP and Flight endpoints externally:

    docker run --name spiceai-enterprise \
      -v $(pwd):/app \
      -p 50051:50051 \
      -p 8090:8090 \
      709825985650.dkr.ecr.us-east-1.amazonaws.com/spice-ai/spiceai-enterprise-byol:1.6.0-enterprise-models \
      --http 0.0.0.0:8090 \
      --flight 0.0.0.0:50051

    • The -v $(pwd):/app mounts the current directory to /app, where spicepod.yaml is expected.
    • The --http and --flight flags set endpoints to listen on 0.0.0.0, allowing external access (default is 127.0.0.1).
    • Ports 8090 (HTTP) and 50051 (Flight) are mapped for external access.
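    Teams standardizing on Docker Compose can express the same run configuration declaratively. The following compose file is a sketch derived from the flags above, not an official artifact of this listing:

    ```yaml
    services:
      spiceai-enterprise:
        image: 709825985650.dkr.ecr.us-east-1.amazonaws.com/spice-ai/spiceai-enterprise-byol:1.6.0-enterprise-models
        # Runtime arguments: listen on all interfaces for external access
        command: ["--http", "0.0.0.0:8090", "--flight", "0.0.0.0:50051"]
        volumes:
          - ./:/app          # spicepod.yaml expected in the current directory
        ports:
          - "8090:8090"      # HTTP
          - "50051:50051"    # Flight
    ```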

    Verify and Monitor the Container

    1. Confirm the container is running:

    docker ps

    Look for spiceai-enterprise with a STATUS of Up.

    2. Inspect logs for troubleshooting:

    docker logs spiceai-enterprise

    Deploying to AWS ECS

    Create an ECS Task Definition and use this value for the image: 709825985650.dkr.ecr.us-east-1.amazonaws.com/spice-ai/spiceai-enterprise-byol:1.6.0-enterprise-models. Configure the port mappings for the HTTP and Flight ports (8090 and 50051).

    Override the command to expose the HTTP and Flight ports publicly and link to the Spicepod configuration hosted on S3:

    "command": [ "--http", "0.0.0.0:8090", "--flight", "0.0.0.0:50051", "s3://your_bucket/path/to/spicepod.yaml" ]

    Register the task definition in your AWS account:

    aws ecs register-task-definition --cli-input-json file://spiceai-task-definition.json --region us-east-1

    Then run the task as you normally would in ECS.


    Support

    Vendor support

    Spice.ai Enterprise includes 24/7 support with a dedicated Slack/Teams channel and priority email and ticketing, ensuring critical issues are addressed per the Enterprise SLA.

    Detailed enterprise support information is available in the Support Policy & SLA document provided at onboarding.

    For general support, please email support@spice.ai.

    AWS infrastructure support

    AWS Support is a one-on-one, fast-response support channel that is staffed 24x7x365 with experienced and technical support engineers. The service helps customers of all sizes and technical abilities to successfully utilize the products and features provided by Amazon Web Services.

    Customer reviews

    No customer reviews yet.