    Open Platform for Enterprise AI - OPEA

    Sold by: Intel
    OPEA (Open Platform for Enterprise AI) is an AI inferencing and fine-tuning microservice framework that enables the creation and evaluation of open, configurable, and composable generative AI solutions.

    Overview

    The OPEA platform includes:

    A detailed framework of composable microservice building blocks for state-of-the-art GenAI systems, including LLMs, data stores, and prompt engines

    Architectural blueprints for the retrieval-augmented GenAI component stack and end-to-end workflows

    Multiple microservices and megaservices for taking your GenAI solution from development to production deployment

    A four-step assessment for grading GenAI systems on performance, features, trustworthiness, and enterprise-grade readiness

    Highlights

    • Leverages the open source ecosystem for GenAI solutions
    • Easy to use, with composable and configurable AI microservices
    • Hardware-agnostic architecture (supports Intel, AMD, and NVIDIA hardware)

    Details

    Sold by: Intel

    Delivery method: Container image

    Delivery option: Opea ChatQnA on Amazon EKS

    Operating system: Linux

    Features and programs

    Financing for AWS Marketplace purchases

    AWS Marketplace now accepts line of credit payments through the PNC Vendor Finance program. This program is available to select AWS customers in the US, excluding NV, NC, ND, TN, & VT.

    Pricing

    Open Platform for Enterprise AI - OPEA

    This product is free. Subscriptions have no end date and can be canceled anytime.

    Vendor refund policy

    All fees are non-cancellable and non-refundable except as required by law.

    Legal

    Vendor terms and conditions

    Upon subscribing to this product, you must acknowledge and agree to the terms and conditions outlined in the vendor's End User License Agreement (EULA).

    Content disclaimer

    Vendors are responsible for their product descriptions and other product content. AWS does not warrant that vendors' product descriptions or other product content are accurate, complete, reliable, current, or error-free.

    Usage information


    Delivery details

    Opea ChatQnA on Amazon EKS

    Supported services:
    • Amazon EKS
    • Amazon EKS Anywhere
    Container image

    Containers are lightweight, portable execution environments that wrap server application software in a filesystem that includes everything it needs to run. Container applications run on supported container runtimes and orchestration services, such as Amazon Elastic Container Service (Amazon ECS) or Amazon Elastic Kubernetes Service (Amazon EKS). Both eliminate the need for you to install and operate your own container orchestration software by managing and scheduling containers on a scalable cluster of virtual machines.

    Version release notes

    Remove requirement to supply value for OPEA_ROLE_NAME and OPEA_USER

    Additional details

    Usage instructions

    1. Run "aws ecr get-login-password --region us-east-1 | docker login --username AWS --password-stdin 709825985650.dkr.ecr.us-east-1.amazonaws.com"

    2. Run "docker pull 709825985650.dkr.ecr.us-east-1.amazonaws.com/intel/opea-eks-builder"

    3. Place a file called "opea.env" in the directory that you are issuing commands from. Add the following required parameters in this format:

    AWS_REGION=<your region>
    AWS_ACCESS_KEY_ID=<your id>
    AWS_SECRET_ACCESS_KEY=<your secret>
    AWS_SESSION_TOKEN=<your token (if assuming a role)>
    HUGGING_FACE_TOKEN=<your token>

    *NOTE: This file contains sensitive values that should not be shared. Do not copy it beyond your local drive.

    4. (Optional) Include values in "opea.env" from the Optional Parameters and EKS Cluster sections below as needed

    5. Run the command "docker run --env-file opea.env 709825985650.dkr.ecr.us-east-1.amazonaws.com/intel/opea-eks-builder:latest"
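
    Once the builder finishes, you can point "kubectl" at the cluster and confirm that the ChatQnA services came up. A minimal sketch, assuming the builder created a new cluster and you know its name (substitute your own region and cluster name):

        # Fetch kubeconfig credentials for the cluster
        aws eks update-kubeconfig --region <your region> --name <cluster name>
        # List all pods across namespaces to confirm the OPEA services are running
        kubectl get pods -A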

    REQUIRED PARAMETERS

    • AWS_REGION - The region you intend to deploy the cluster into

    • AWS_ACCESS_KEY_ID - The access key for the user or role you'll be assuming in the AWS account.

    • AWS_SECRET_ACCESS_KEY - The secret for the user or role you'll be assuming in the AWS account. If you're assuming a role, you'll also need to set the AWS_SESSION_TOKEN parameter.

    *WARNING: Protect your access key and secret values; they are highly sensitive. Be careful not to expose them.

    • HUGGING_FACE_TOKEN - A valid token for Hugging Face scoped to use the models in your package. If you're using "guardrails", be sure to add the "meta-llama/Meta-Llama-Guard-2-8B" model to your token scope.

    AWS_ACCESS_KEY_ID, AWS_SECRET_ACCESS_KEY, and AWS_SESSION_TOKEN make up your AWS credentials and are required so that you can programmatically authenticate to your AWS account. Once you're authenticated, the container image can deploy OPEA ChatQnA into your account on your behalf.
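
    If you're assuming a role, one way to obtain all three values is the AWS CLI's "sts assume-role" command. A minimal sketch; the role ARN and session name below are placeholders:

        # Request temporary credentials for a role; the response's Credentials
        # block contains AccessKeyId, SecretAccessKey, and SessionToken
        aws sts assume-role --role-arn arn:aws:iam::123456789012:role/opea-deployer --role-session-name opea-eks-builder

    Copy the returned values into AWS_ACCESS_KEY_ID, AWS_SECRET_ACCESS_KEY, and AWS_SESSION_TOKEN in "opea.env".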

    A HUGGING_FACE_TOKEN can be acquired by creating a free account at https://huggingface.co. This gives you access to the Hugging Face LLMs needed for OPEA ChatQnA to function.
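
    Before running the builder, you can sanity-check that the token is valid. A quick check against Hugging Face's "whoami" API endpoint (the token value is a placeholder):

        # Returns your account details as JSON if the token is valid
        curl -s -H "Authorization: Bearer <your token>" https://huggingface.co/api/whoami-v2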

    OPTIONAL PARAMETERS

    • OPEA_MODULE - If you deploy the package without setting this value, the default layout looks like this:

    1. Generative AI chatbot user interface
    2. Large Language Model (LLM): TGI (Hugging Face; Intel/neural-chat-7b-v3-3)
    3. Embedding model: TEI (Hugging Face; BAAI/bge-base-en-v1.5)
    4. Vector database: OpenSearch
    5. Server: Nginx

    Use the OPEA_MODULE parameter to substitute the defaults with the following replacements (see the example "opea.env" after this list):

    1. "bedrock" - Amazon Bedrock LLM's. The default is Anthropic Claude 3 Haiku, but you can change the model by setting the LLM_MODEL environment variable.
    2. "guardrails" - Monitor the content allowed through the model.
    3. "redis" - Use redis as your vector DB
    • INSTANCE_TYPE - Supported instance types include any size of the "M7I", "C7I", or "R7I" instances.

    • DISK_SIZE - The amount of space (in GB) allotted to nodes in the cluster. We recommend using a minimum of 100.
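
    To illustrate, here is a hypothetical "opea.env" that combines the required parameters with several of these options. All values are placeholders, and the Bedrock model ID is only an example; check the Amazon Bedrock documentation for current model IDs:

        # Required
        AWS_REGION=us-east-1
        AWS_ACCESS_KEY_ID=<your id>
        AWS_SECRET_ACCESS_KEY=<your secret>
        HUGGING_FACE_TOKEN=<your token>
        # Optional: replace the default TGI model with Amazon Bedrock
        OPEA_MODULE=bedrock
        LLM_MODEL=anthropic.claude-3-haiku-20240307-v1:0
        # Optional: node sizing
        INSTANCE_TYPE=m7i.4xlarge
        DISK_SIZE=100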

    THE EKS CLUSTER

    You can customize the configuration of the EKS cluster that is created, or you can use an existing EKS cluster by passing in the CLUSTER_NAME parameter.

    If you bring your own cluster, its configuration must meet the OPEA platform requirements in order to work properly. Also make sure to pass a value for "kubectlRoleArn" so that "kubectl" commands can be run against the cluster.
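
    If you're reusing an existing cluster, the two settings might look like this in "opea.env". This is a sketch; the cluster name, account ID, and role name are placeholders, and the parameter casing follows the listing above:

        # Reuse an existing EKS cluster instead of creating a new one
        CLUSTER_NAME=my-existing-cluster
        # IAM role used to run kubectl commands against the cluster
        kubectlRoleArn=arn:aws:iam::123456789012:role/eks-admin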

    Support

    Vendor support

    OPEA is an open source initiative. No support is offered through the OPEA project itself, but enterprises that use OPEA to build commercial GenAI solutions will offer relevant support to their end users.

    AWS infrastructure support

    AWS Support is a one-on-one, fast-response support channel that is staffed 24x7x365 with experienced and technical support engineers. The service helps customers of all sizes and technical abilities to successfully utilize the products and features provided by Amazon Web Services.

    Customer reviews

    No customer reviews yet.