
    LiteLLM LLM Gateway - Self Hosted (requires Private Offer)

Sold by: LiteLLM
To purchase LiteLLM Enterprise Self-Hosted, please reach out to sales@berri.ai for a Private Offer. LiteLLM is an OpenAI-compatible proxy server (LLM gateway) for calling 2,000+ LLM APIs in the OpenAI format: Bedrock, Hugging Face, Vertex AI, TogetherAI, Azure OpenAI, OpenAI, and more. Get started with open-source LiteLLM here: https://github.com/BerriAI/litellm (35,000+ GitHub stars)

    Overview

With LiteLLM Proxy Server Self-Hosted, you get a proxy server for calling 2,000+ LLMs through a unified interface, with spend tracking and budgets per virtual key and per user.
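A minimal sketch of what the unified interface looks like from a client: an OpenAI-format chat request sent to a self-hosted proxy. The base URL, virtual key, and model name below are illustrative placeholders, not values from this listing.

```python
# Build an OpenAI-format /chat/completions request aimed at a LiteLLM proxy.
# PROXY_BASE_URL and VIRTUAL_KEY are hypothetical placeholders.
import json
import urllib.request

PROXY_BASE_URL = "http://localhost:4000"  # assumed local proxy address
VIRTUAL_KEY = "sk-my-virtual-key"         # a key issued by the proxy admin

def build_chat_request(model: str, prompt: str) -> urllib.request.Request:
    """Return a POST request in the OpenAI chat-completions format."""
    payload = {
        "model": model,  # e.g. an Azure, Bedrock, or OpenAI model alias
        "messages": [{"role": "user", "content": prompt}],
    }
    return urllib.request.Request(
        f"{PROXY_BASE_URL}/v1/chat/completions",
        data=json.dumps(payload).encode(),
        headers={
            "Authorization": f"Bearer {VIRTUAL_KEY}",
            "Content-Type": "application/json",
        },
        method="POST",
    )

req = build_chat_request("gpt-4o", "Hello!")
print(req.full_url)  # http://localhost:4000/v1/chat/completions
```

Because the request body follows the OpenAI schema, the same client code works regardless of which provider the proxy routes the model to.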

You can set budgets and rate limits per project, API key, and model on the OpenAI-compatible proxy server.
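The per-key budgets and rate limits are attached to virtual keys issued by the proxy itself; a sketch of generating one is below. The `/key/generate` path and field names follow the LiteLLM proxy documentation, but treat them as assumptions and verify against your deployed version.

```python
# Sketch: request a virtual key with a spend budget and per-minute rate limit.
# Endpoint path and field names are assumptions based on LiteLLM proxy docs.
import json
import urllib.request

def build_key_request(admin_key: str,
                      base_url: str = "http://localhost:4000") -> urllib.request.Request:
    body = {
        "models": ["gpt-4o", "bedrock-claude"],  # models this key may call
        "max_budget": 25.0,   # USD; requests are blocked once spend exceeds it
        "rpm_limit": 100,     # requests per minute allowed for this key
        "duration": "30d",    # key expires after 30 days
    }
    return urllib.request.Request(
        f"{base_url}/key/generate",
        data=json.dumps(body).encode(),
        headers={
            "Authorization": f"Bearer {admin_key}",  # proxy admin master key
            "Content-Type": "application/json",
        },
        method="POST",
    )

req = build_key_request("sk-admin-master-key")
print(req.full_url)  # http://localhost:4000/key/generate
```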

The proxy also translates inputs to each provider's completion, embedding, and image_generation endpoints, and applies retry/fallback logic across multiple LLM deployments.
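The fallback behavior can be pictured with a small illustrative sketch (this is not LiteLLM's internal implementation): try each deployment in order, retrying a few times, and move on to the next deployment only when the current one keeps failing.

```python
# Illustrative fallback loop across LLM deployments (not LiteLLM internals).
from typing import Callable, Optional, Sequence

def call_with_fallbacks(deployments: Sequence[str],
                        call: Callable[[str], str],
                        retries_per_deployment: int = 2) -> str:
    """Try each deployment in order; raise only if every one fails."""
    last_error: Optional[Exception] = None
    for model in deployments:
        for _ in range(retries_per_deployment):
            try:
                return call(model)
            except Exception as exc:  # sketch only; real code narrows this
                last_error = exc
    raise RuntimeError(f"all deployments failed: {last_error}")

# Demo: the primary deployment times out, the secondary answers.
def fake_call(model: str) -> str:
    if model == "azure/gpt-4o":
        raise TimeoutError("deployment overloaded")
    return f"answer from {model}"

result = call_with_fallbacks(["azure/gpt-4o", "openai/gpt-4o"], fake_call)
print(result)  # answer from openai/gpt-4o
```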

    Highlights

• Contact sales@berri.ai for a Private Offer to purchase. Call 2,000+ LLM APIs in the OpenAI format: https://models.litellm.ai/
• Track metrics on Prometheus and send logs to S3, GCS buckets, Langfuse, OpenTelemetry, and Datadog: https://docs.litellm.ai/docs/proxy/logging
• Get started with open-source LiteLLM here: https://github.com/BerriAI/litellm (33,000+ GitHub stars)

Details

Sold by: LiteLLM
Delivery method: Software as a Service (SaaS)
Deployed on AWS

    Pricing

    LiteLLM LLM Gateway - Self Hosted (requires Private Offer)

Pricing is based on the duration and terms of your contract with the vendor. This entitles you to a specified quantity of use for the contract duration. If you choose not to renew or replace your contract before it ends, access to these entitlements will expire.
Additional AWS infrastructure costs may apply. Use the AWS Pricing Calculator to estimate your infrastructure costs.

12-month contract (1)

Dimension: LiteLLM Enterprise
Description: All features under the LiteLLM Enterprise License, including SSO, feature prioritization, professional support, and custom SLAs
Cost/12 months: $0.01

    Vendor refund policy

    All fees are non-cancellable and non-refundable except as required by law.

    How can we make this page better?

We'd like to hear your feedback and ideas on how to improve this page.

    Legal

    Vendor terms and conditions

Upon subscribing to this product, you must acknowledge and agree to the terms and conditions outlined in the vendor's End User License Agreement (EULA).

    Content disclaimer

    Vendors are responsible for their product descriptions and other product content. AWS does not warrant that vendors' product descriptions or other product content are accurate, complete, reliable, current, or error-free.


    Delivery details

    Software as a Service (SaaS)

    SaaS delivers cloud-based software applications directly to customers over the internet. You can access these applications through a subscription model. You will pay recurring monthly usage fees through your AWS bill, while AWS handles deployment and infrastructure management, ensuring scalability, reliability, and seamless integration with other AWS services.


    Support

    Vendor support

For dedicated support, please send an email to support@berri.ai.

    AWS infrastructure support

    AWS Support is a one-on-one, fast-response support channel that is staffed 24x7x365 with experienced and technical support engineers. The service helps customers of all sizes and technical abilities to successfully utilize the products and features provided by Amazon Web Services.


Accolades

Top 10 in AIOps

Overview

AI generated from product descriptions

• Multi-Provider LLM Integration: Supports calling 2,000+ LLM APIs, including Bedrock, Hugging Face, Vertex AI, TogetherAI, Azure OpenAI, and OpenAI, through a unified OpenAI-compatible interface
• Cost and Rate Management: Enables setting budgets and rate limits per project, API key, and model, with spend tracking per virtual key and user
• Observability and Logging: Integrates with Prometheus for metrics tracking and supports log forwarding to S3, GCS buckets, Langfuse, OpenTelemetry, and Datadog
• API Endpoint Translation: Translates inputs to provider-specific completion, embedding, and image_generation endpoints with automatic request formatting
• Resilience and Failover: Implements retry and fallback logic across multiple LLM deployments for improved reliability
• Prompt Engineering and Comparison: Side-by-side comparisons between multiple prompts, parameters, models, and model providers across test cases for optimization
• Workflow Orchestration: Ability to prototype and deploy AI workflows that chain business logic, data, APIs, and dynamic prompts for various use cases
• Evaluation and Testing Framework: Creation of test-case banks to evaluate and identify optimal prompt and model combinations across multiple scenarios
• Semantic Search and Retrieval: Document retrieval capability to extract company-specific data and use it as context in LLM calls
• Monitoring and Proxy Infrastructure: Reliable proxy layer connecting applications to model providers, with request tracking for debugging and quality monitoring
• AI Gateway and Request Routing: AI gateway with load balancing between multiple language models and fallback mechanisms for production reliability
• Observability and Monitoring: Full visibility and observability over AI applications, with comprehensive logging and monitoring through a centralized dashboard
• Prompt Management: Centralized prompt management system for versioning, organizing, and controlling prompts across AI applications
• Security and Compliance: Enterprise-grade security with ISO, SOC 2, HIPAA, and GDPR compliance certifications and security-policy enforcement
• Guardrails and A/B Testing: Built-in guardrails for output validation and A/B testing for comparing model performance and behavior

    Contract

    Standard contract

    Customer reviews

    Ratings and reviews

0 ratings
    0 reviews
    No customer reviews yet
Be the first to review this product. We've partnered with PeerSpot to gather customer feedback. You can share your experience by writing or recording a review, or scheduling a call with a PeerSpot analyst.