
    Fireworks

    Deployed on AWS
    Fireworks.ai offers a generative AI platform as a service. We optimize for rapid product iteration when building on top of generative AI, as well as for minimizing the cost to serve.

    Overview

    Experience the fastest inference and fine-tuning platform with Fireworks AI. Utilize state-of-the-art open-source models, fine-tune them, or deploy your own at no additional cost. Access a diverse library of models across various modalities - including text, vision, embedding, audio, image, and multimodal - to build and scale your AI applications efficiently.

    • Blazing fast inference for 100+ models
    • Fine-tune and deploy in minutes
    • Building blocks for compound AI systems

    Start in seconds and pay per token with our serverless deployment, or use our dedicated deployments, fully optimized for your use case.
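The serverless, pay-per-token option described above is typically used through an OpenAI-compatible chat completions API. The sketch below builds such a request; the endpoint URL and model identifier are illustrative assumptions, not details taken from this listing, so check the vendor's API documentation before using them.

```python
import json
import os

# Assumed OpenAI-compatible endpoint and model id (for illustration only).
API_URL = "https://api.fireworks.ai/inference/v1/chat/completions"
MODEL = "accounts/fireworks/models/llama-v3p1-8b-instruct"

def build_request(prompt: str, max_tokens: int = 256) -> dict:
    """Build the JSON payload for a pay-per-token chat completion request."""
    return {
        "model": MODEL,
        "max_tokens": max_tokens,
        "messages": [{"role": "user", "content": prompt}],
    }

payload = build_request("Summarize the benefits of serverless inference.")
print(json.dumps(payload, indent=2))

# Sending the request requires an API key and network access, e.g.:
# import requests
# resp = requests.post(
#     API_URL,
#     headers={"Authorization": f"Bearer {os.environ['FIREWORKS_API_KEY']}"},
#     json=payload,
# )
# print(resp.json()["choices"][0]["message"]["content"])
```

Because billing is per token, `max_tokens` bounds the cost of a single call.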

    Highlights

    • Instantly run popular and specialized models, including DeepSeek R1, Llama3, Mixtral, and Stable Diffusion, optimized for peak latency, throughput, and context length. The Fireattention custom CUDA kernel serves models four times faster than vLLM without compromising quality.
    • Fine-tune with our LoRA-based service, twice as cost-efficient as other providers. Instantly deploy and switch between up to 100 fine-tuned models to experiment without extra costs. Serve models at blazing-fast speeds of up to 300 tokens per second on our serverless inference platform.
    • Leverage the building blocks for compound AI systems. Handle tasks with multiple models, modalities, and external APIs and data instead of relying on a single model. Use FireFunction, a SOTA function calling model, to compose compound AI systems for RAG, search, and domain-expert copilots for automation, code, math, medicine, and more.
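Compound AI systems like those described above typically hinge on function calling: the model decides when to invoke an external tool instead of answering directly. As a hedged sketch, the snippet below builds a tool definition in the widely used OpenAI-compatible "tools" schema; the tool name and fields are hypothetical examples, not part of this listing.

```python
import json

def make_tool(name: str, description: str, properties: dict, required: list) -> dict:
    """Describe a callable tool in OpenAI-compatible JSON schema form."""
    return {
        "type": "function",
        "function": {
            "name": name,
            "description": description,
            "parameters": {
                "type": "object",
                "properties": properties,
                "required": required,
            },
        },
    }

# Hypothetical retrieval step for a RAG pipeline.
search_tool = make_tool(
    name="search_docs",
    description="Search the document index and return top passages.",
    properties={"query": {"type": "string", "description": "Search query"}},
    required=["query"],
)

# This tools list would be passed alongside the chat messages; a function
# calling model then emits a structured tool call when retrieval is needed.
print(json.dumps(search_tool, indent=2))
```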

    Details

    Delivery method

    Deployed on AWS

    Introducing multi-product solutions

    You can now purchase comprehensive solutions tailored to use cases and industries.


    Features and programs

    Financing for AWS Marketplace purchases

    AWS Marketplace now accepts line of credit payments through the PNC Vendor Finance program. This program is available to select AWS customers in the US, excluding NV, NC, ND, TN, & VT.

    Pricing

    Pricing is based on the duration and terms of your contract with the vendor, and additional usage. You pay upfront or in installments according to your contract terms with the vendor. This entitles you to a specified quantity of use for the contract duration. Usage-based pricing is in effect for overages or additional usage not covered in the contract. These charges are applied on top of the contract price. If you choose not to renew or replace your contract before the contract end date, access to your entitlements will expire.
    Additional AWS infrastructure costs may apply. Use the AWS Pricing Calculator to estimate your infrastructure costs.
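The contract-plus-overage model described above composes simply: a fixed contract price covers the entitlement, and usage beyond it is billed per unit. The sketch below uses the figures shown on this listing ($500,000 for the 12-month Enterprise contract, $1.00 per unit of additional usage); the overage quantity is made up for illustration.

```python
CONTRACT_PRICE = 500_000.00  # 12-month Enterprise contract (from this listing)
OVERAGE_RATE = 1.00          # Cost/unit for the "additionalusage" dimension

def total_cost(overage_units: int) -> float:
    """Contract price plus usage-based charges for overages."""
    return CONTRACT_PRICE + overage_units * OVERAGE_RATE

# e.g. 12,500 units of usage beyond the contract entitlement:
print(total_cost(12_500))  # 512500.0
```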

    12-month contract (1)

    Dimension     Description                    Cost/12 months
    Enterprise    Unlimited deployment models    $500,000.00

    Additional usage costs (1)

    The following dimensions are not included in the contract terms and will be charged based on your usage.

    Dimension          Description         Cost/unit
    additionalusage    Additional Usage    $1.00

    Vendor refund policy

    All fees are non-refundable and non-cancellable except as required by law.

    How can we make this page better?

    We'd like to hear your feedback and ideas on how to improve this page.

    Legal

    Vendor terms and conditions

    Upon subscribing to this product, you must acknowledge and agree to the terms and conditions outlined in the vendor's End User License Agreement (EULA).

    Content disclaimer

    Vendors are responsible for their product descriptions and other product content. AWS does not warrant that vendors' product descriptions or other product content are accurate, complete, reliable, current, or error-free.

    Usage information


    Delivery details

    Software as a Service (SaaS)

    SaaS delivers cloud-based software applications directly to customers over the internet. You can access these applications through a subscription model. You will pay recurring monthly usage fees through your AWS bill, while AWS handles deployment and infrastructure management, ensuring scalability, reliability, and seamless integration with other AWS services.

    Support

    Vendor support

    Email support services are available from Monday to Friday.
    support@fireworks.ai 

    AWS infrastructure support

    AWS Support is a one-on-one, fast-response support channel that is staffed 24x7x365 with experienced and technical support engineers. The service helps customers of all sizes and technical abilities to successfully utilize the products and features provided by Amazon Web Services.

    Product comparison

    Updated weekly

    Accolades

    • Top 10 in Finance & Accounting, Research
    • Top 10 in Summarization-Text, Generation-Text
    • Top 10 in Procurement & Supply Chain

    Overview

    AI generated from product descriptions
    • High-Performance Inference Optimization: Fireattention custom CUDA kernel serves models four times faster than vLLM, achieving inference speeds up to 300 tokens per second on serverless infrastructure.
    • Cost-Efficient Fine-Tuning: LoRA-based fine-tuning service that is twice as cost-efficient as other providers, with the ability to deploy and switch between up to 100 fine-tuned models without additional costs.
    • Multi-Modal Model Library: Access to a diverse library of 100+ models across multiple modalities, including text, vision, embedding, audio, image, and multimodal capabilities.
    • Compound AI System Architecture: FireFunction, a SOTA function calling model, enables composition of compound AI systems supporting multiple models, modalities, and external APIs for RAG, search, and domain-specific applications.
    • Flexible Deployment Options: Serverless pay-per-token deployment or dedicated deployments fully optimized to specific use cases, with support for popular models including DeepSeek R1, Llama3, Mixtral, and Stable Diffusion.
    • Model Quantization Support: Support for 2-bit, 3-bit, 4-bit, 5-bit, 6-bit, and 8-bit integer quantization, enabling inference on GPUs with 16 GB or 24 GB of memory.
    • Inference Engine: Llama.cpp inference with a plain C/C++ implementation without dependencies, supporting interactive and server mode operations.
    • GPU and CPU Hybrid Processing: Capability to run inference simultaneously on GPU and CPU, allowing execution of larger models when GPU memory is insufficient.
    • Multi-framework Support: Integration with llama-cpp-python for OpenAI API compatibility, Open Interpreter for code execution, and the Tabby coding assistant for IDE integration.
    • No-Code Application Development: Visual interface with built-in connectors and large language models, enabling generative AI application deployment without coding.
    • Multi-Model Support and Comparison: Access to the latest large language models with prompt playground functionality for model comparison and evaluation across different LLM options.
    • Enterprise Security and Governance: Secure credentials management, personally identifiable information masking, data encryption, and role-based access controls for enterprise-level compliance.
    • Observability and Cost Management: Operational dashboards providing visibility into model spending, performance metrics, usage patterns, and trends for cost tracking and optimization.
    • Trust and Safety Controls: Content filtering mechanisms to reduce noise, block harmful content, and include relevant citations, with ground-truth comparison using an LLM as a judge.

    Contract

    Standard contract

    Customer reviews

    Ratings and reviews

    4.2 out of 5 (3 ratings)

    5 star: 67%
    4 star: 0%
    3 star: 33%
    2 star: 0%
    1 star: 0%

    0 AWS reviews | 3 external reviews
    External reviews are from G2 and PeerSpot.
    Liraz A.

    One Stop AI Model Shop

    Reviewed on Nov 14, 2024
    Review provided by G2
    What do you like best about the product?
    So many AI models to choose from... Love the option of the playground
    What do you dislike about the product?
    Pretty hard to get started. They really need a quickstart guide.
    And because the site is so full of features - a tour would be nice.
    What problems is the product solving and how is that benefiting you?
    Helping me choose the right model for my day-to-day use.
    reviewer2588646

    Enhanced text-to-image creation with solid API and fine-tuning support

    Reviewed on Nov 06, 2024
    Review provided by PeerSpot

    What is our primary use case?

    We primarily use Fireworks AI for text-to-image generation. We are developing a platform for artists to sell their art styles, where the system helps them tune a model and then sell images generated from their signature.

    How has it helped my organization?

    Fireworks AI has helped our organization by enabling us to create a platform for artists to sell their art styles. I am not the user of the solution. I'm the developer. It helps me do my job effectively.

    What is most valuable?

    Fireworks AI has a solid API and is quite easy to interact with. It has better documentation and logs, which are important for me as a developer. Additionally, it has a bigger infrastructure and provides nice support for fine-tuning the Flux AI model.

    What needs improvement?

    Returning the values charged for each event generation would improve Fireworks AI. When using the API, it does not return information about the charges for image generation, which would be useful for our solution.

    For how long have I used the solution?

    I have been using Fireworks AI for about four months.

    What do I think about the stability of the solution?

    Fireworks AI is pretty stable, and I have not encountered any problems.

    What do I think about the scalability of the solution?

    Fireworks AI offers a very complete API, and its scalability is impressive.

    Which solution did I use previously and why did I switch?

    I previously used Okta. It was discontinued, so we opted for Fireworks AI.

    How was the initial setup?

    The initial setup was fairly easy. It took about eight to ten days, including integrating it into our solution, testing, and moving from scratch to production.

    What's my experience with pricing, setup cost, and licensing?

    I cannot comment on pricing or setup cost since others handle that aspect. As a developer, I primarily use the API.

    Which other solutions did I evaluate?

    I have evaluated SAL as an alternative solution.

    What other advice do I have?

    I'd rate the solution ten out of ten.

    Pratiksh S.

    Review for Fireworks AI

    Reviewed on Sep 05, 2024
    Review provided by G2
    What do you like best about the product?
    They have categorised the models according to users' requirements, and users only pay for the products they use. No extra cost.
    What do you dislike about the product?
    They need to use more dependable parameters and should increase their serverless model limits.
    What problems is the product solving and how is that benefiting you?
    AI is booming in the industry, and with Fireworks it feels easy to deploy models to organisational servers. Additionally, they use Meta Llama.