Partner Success with AWS / Software & Internet / United States

May 2024
Fireworks AI
NVIDIA

Fireworks AI Delivers Blazing Fast Generative AI with NVIDIA and AWS

Gained

access to the most powerful NVIDIA GPUs with Amazon EC2 instances

20X

higher performance from NVIDIA H100 GPUs compared with the prior generation

Delivered

up to 4X lower latency for Fireworks AI customers

Overview

Fireworks AI delivers a fast, affordable, and customizable platform for developers to run and fine-tune generative artificial intelligence (AI) models at scale. To provide the most performant inference service for ultra-low-latency use cases, Fireworks AI elected to run on NVIDIA H100 and A100 Tensor Core GPUs through Amazon EC2 P4 and P5 instances. This enabled Fireworks AI to deliver up to 4X lower latency than previous solutions with zero compromise on model quality.

Providing Performance and Quality for Every Generative AI Workload

With the emergence of generative AI, businesses have a wealth of new opportunities. For example, generative AI can transform customer experiences by generating striking images or engaging in complex conversations. However, the generative AI models that power these experiences are extremely large, and it’s difficult for businesses to serve and scale them, especially given high user expectations for latency and quality. Waiting several seconds for an image to generate or a chatbot to respond makes for a frustrating user experience that’s untenable for many use cases.

The founding team at Fireworks AI noticed these challenges through their work on PyTorch—the deep learning framework on which the latest generative AI models are developed. Drawing on their experience bringing PyTorch to life, the Fireworks AI team developed software that provides an easy-to-use API for running customized models with the best performance. However, Fireworks AI needed to ensure that the hardware it used would support exceptionally fast inference.


“We’re excited about the latest generation of GPUs from NVIDIA and AWS because of the higher memory bandwidth and computational power they provide.”

Lin Qiao
CEO and Co-founder, Fireworks AI

Taking Off with NVIDIA Chips

AWS Partner NVIDIA delivered the powerful GPUs that Fireworks AI needed to take off. “NVIDIA is the best GPU and high-performance kernel provider in the world,” said Lin Qiao, chief executive officer and co-founder at Fireworks AI. “Access to advanced GPUs through Amazon EC2 has been fantastic. We reliably get accelerated computing that helps us stay ahead of the game.” Fireworks AI serves models on both NVIDIA A100 and H100 Tensor Core GPUs and has built its own kernels on top of NVIDIA’s libraries.

The platform also uses Amazon Elastic Kubernetes Service (Amazon EKS) and Amazon Simple Storage Service (Amazon S3). Amazon EKS offers an optimized machine image that includes preconfigured NVIDIA drivers for GPU-enabled Amazon Elastic Compute Cloud (Amazon EC2) instances, making it easy to run GPU-powered workloads. For Fireworks AI, the Kubernetes tier lets the team orchestrate services across many machines. “Because Amazon Web Services (AWS) has battle-tested Amazon EKS, we can focus on our product development,” said Dmytro Dzhulgakov, chief technology officer at Fireworks AI.
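In Kubernetes terms, the orchestration described above comes down to standard GPU scheduling: a pod that declares an `nvidia.com/gpu` resource limit is placed by the scheduler on a GPU-enabled node, where the NVIDIA device plugin (preinstalled on the EKS-optimized accelerated image) exposes the GPUs. A minimal sketch of such a manifest, expressed as a Python dict—the image name and instance type below are illustrative assumptions, not Fireworks AI’s actual configuration:

```python
# Minimal sketch: a Kubernetes pod manifest requesting one NVIDIA GPU,
# expressed as a plain Python dict. On an Amazon EKS cluster whose GPU
# nodes run the EKS-optimized accelerated AMI, the "nvidia.com/gpu"
# resource limit is what tells the scheduler to place the pod on a
# GPU-enabled EC2 instance.
inference_pod = {
    "apiVersion": "v1",
    "kind": "Pod",
    "metadata": {"name": "llm-inference"},
    "spec": {
        "containers": [
            {
                "name": "inference-server",
                # Hypothetical container image for illustration only.
                "image": "example.com/inference-server:latest",
                "resources": {
                    # Request one GPU from the NVIDIA device plugin.
                    "limits": {"nvidia.com/gpu": 1}
                },
            }
        ],
        # Hypothetical selector pinning the pod to a P4 GPU instance type.
        "nodeSelector": {"node.kubernetes.io/instance-type": "p4d.24xlarge"},
    },
}

if __name__ == "__main__":
    limits = inference_pod["spec"]["containers"][0]["resources"]["limits"]
    print(limits["nvidia.com/gpu"])  # → 1
```

Serialized to YAML and applied with `kubectl`, a manifest of this shape would schedule onto whichever GPU node has capacity, which is how a Kubernetes tier can spread inference services across a fleet of machines.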

Sparking Insights with High-Performance Inference

With NVIDIA GPUs running on AWS, Fireworks AI can deliver customers a high-performance inference service. In fact, the H100 GPU provides up to 20X higher performance over the prior generation. It can also be partitioned into as many as seven GPU instances using NVIDIA Multi-Instance GPU (MIG) technology to dynamically adjust to shifting demands. “As we continue to optimize for performance, NVIDIA H100s are key because they accelerate serving speed greatly,” Qiao said.

Lowering Latency by 4X

In addition to high-quality inference, Fireworks AI delivers up to four times lower latency than other popular open-source large language model (LLM) engines such as vLLM. “Fireworks AI works through the entire stack—from inference serving orchestration, to PyTorch runtime optimization and low-level kernel optimization, to device, CPU, and memory bandwidth optimization,” Qiao said. The result is a generative AI platform that enables fast, high-quality inference, so users can have the best possible experience with new generative AI products.

Building on a New Generation of GPUs

The Fireworks AI team continues to expand its partnership with NVIDIA to build out the next evolution of its serving tier. “We’re excited about the latest generation of GPUs from NVIDIA and AWS because of the higher memory bandwidth and computational power they provide,” Qiao said. Advancements in chip technology will directly impact the performance that Fireworks AI delivers to its customers.

About Fireworks AI

Fireworks AI offers a generative AI platform that enables product developers to run state-of-the-art, open-source models with the best speed, quality, and scalability.

About AWS Partner NVIDIA

Since its founding in 1993, NVIDIA has been a pioneer in accelerated computing. The company’s invention of the GPU in 1999 sparked the growth of the PC gaming market, redefined computer graphics, ignited the era of modern AI, and is fueling industrial digitalization across markets. NVIDIA is now a full-stack computing company with data-center-scale offerings that are reshaping industry.

AWS Services Used

Amazon EC2

Amazon Elastic Compute Cloud (Amazon EC2) offers the broadest and deepest compute platform, with more than 750 instance types and choice of the latest processor, storage, networking, operating system, and purchase model to help you best match the needs of your workload.


Amazon EKS

Amazon Elastic Kubernetes Service (Amazon EKS) is a managed Kubernetes service to run Kubernetes in AWS and on-premises data centers. In the cloud, Amazon EKS automatically manages the availability and scalability of the Kubernetes control plane nodes responsible for scheduling containers, managing application availability, storing cluster data, and other key tasks. 


Amazon S3

Amazon Simple Storage Service (Amazon S3) is an object storage service offering industry-leading scalability, data availability, security, and performance.


More Software & Internet Success Stories


  • Software & Internet

    NeuralSpace Accelerates AI Model Training Speed by 96% in Migration to AWS with Rebura

    NeuralSpace, a London-based AI startup, had the same problem that many startups have: not enough time, not enough money, and too much to do. It needed to develop and train the AI models that powered its language AI applications—automatic translation of text and speech, automated subtitling, and automated AI dubbing of content—but these processes were taking too long. With 20–30 TB of data being used to train each model, it could take 3–6 months to train just one. And the company needed to train multiple models to develop its products. NeuralSpace knew that it needed to find a way to speed up model training that would fit within its limited budget. With the help of AWS Partner Rebura, NeuralSpace migrated to Amazon Web Services (AWS) to enable faster modeling and a crucial pivot in focus.

    2024
  • Software & Internet

    FloQast Uses Tackle ACE CRM Integration to Boost Win Rate by 26% and Cut Deal Cycle Time by 30%

    FloQast provides close management software for corporate accounting departments. Working with AWS Partner Tackle, the company wanted to move to a more strategic partnership with Amazon Web Services (AWS) by automating co-selling processes. To address its needs, FloQast deployed the Tackle ACE CRM integration, helping salespeople enter AWS opportunities into ACE directly from Salesforce. This streamlined process has helped FloQast boost its win rate by 26 percent and reduce the average deal cycle time by 30 percent.

    2024
  • Software & Internet

    Peak Defence Leverages Generative AI to Transform Cybersecurity Audits

    Peak Defence needed to scale up its processes to meet increasing demand for its cybersecurity consulting and solutions services. The company collaborated with AWS Partner Neurons Lab to automate critical security and compliance processes for customers using generative AI and added a Software as a Service (SaaS) offering to its portfolio. With help from the AI solution development experts at Neurons Lab, Peak Defence leveraged advanced, cloud-based AI tools and a scalable, serverless infrastructure to significantly improve operational efficiency. This empowers the company to handle growing demand while maintaining the strong data security measures its customers require.

    2024
  • Software & Internet

    Honeycomb Doubles AWS Opportunity Submissions in 11 Days with Clazar ACE CRM Integration

    Honeycomb provides an observability platform that helps software engineering teams determine why problems happen and who is impacted. The company sought to reduce manual processes to more effectively grow its AWS Marketplace business. Honeycomb worked with AWS Partner Clazar to implement the Clazar Salesforce ACE CRM integration, which automates data entry and integrates Salesforce and ACE data. As a result, Honeycomb doubled its opportunity submissions to AWS in 11 days and achieved a 100 percent opportunity approval rate from AWS. In addition, Honeycomb can scale to support 110 percent annual growth.

    2024

Get Started

Organizations of all sizes across all industries are transforming their businesses and delivering on their missions every day using AWS. Contact our experts and start your own AWS journey today.