Fireworks AI Delivers Blazing Fast Generative AI with NVIDIA and AWS
Benefits

- Gained access to the most powerful NVIDIA GPUs with Amazon EC2 instances
- 20X higher performance over other generative AI providers
- Delivered up to 4X lower latency for Fireworks AI customers

Overview

About Fireworks AI
Fireworks AI offers a generative AI platform that enables product developers to run state-of-the-art, open-source models with the best speed, quality, and scalability.
Opportunity | Providing Performance and Quality for Every Generative AI Workload
With the emergence of generative AI, businesses have a wealth of new opportunities to utilize it. For example, generative AI can transform customer experiences by generating beautiful images or engaging in complex conversations. However, the generative AI models that power these experiences are extremely large, and it's difficult for businesses to serve and scale them, especially given high user expectations for latency and quality. Waiting several seconds for an image to generate or a chatbot to respond can lead to a frustrating user experience that's untenable for many use cases.
The founding team at Fireworks AI noticed these challenges through their work with PyTorch—the deep learning framework that the latest generative AI models are developed on. Using their experience from bringing PyTorch to life, the Fireworks AI team developed software that provides an easy-to-use API to run customized models with the best performance. However, Fireworks AI needed to ensure the hardware it used would support exceptionally fast inference.
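As an illustration of what an easy-to-use inference API can look like, the sketch below assembles a request for an OpenAI-compatible chat-completions endpoint. The endpoint URL, model name, and payload fields are assumptions for illustration, not taken from Fireworks AI documentation.

```python
# Illustrative sketch of preparing a call to a hosted inference API.
# The endpoint URL, model identifier, and payload shape below are
# assumptions for illustration -- consult the provider's docs for
# the real interface.
import json
import os

API_URL = "https://api.example.com/inference/v1/chat/completions"  # hypothetical

def build_request(prompt: str, model: str = "accounts/example/models/llama-v3-8b"):
    """Assemble headers and a JSON body for a chat-completions call."""
    headers = {
        "Authorization": f"Bearer {os.environ.get('API_KEY', 'demo-key')}",
        "Content-Type": "application/json",
    }
    body = {
        "model": model,
        "messages": [{"role": "user", "content": prompt}],
        "max_tokens": 256,
    }
    return headers, json.dumps(body)

headers, body = build_request("Write a haiku about GPUs.")
print(json.loads(body)["model"])
```

From here, a client would POST the body to the endpoint with any HTTP library; the point is that the developer supplies only a prompt and a model name, while serving and scaling happen behind the API.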
About AWS Partner NVIDIA
Since its founding in 1993, NVIDIA has been a pioneer in accelerated computing. The company’s invention of the GPU in 1999 sparked the growth of the PC gaming market, redefined computer graphics, ignited the era of modern AI, and is fueling industrial digitalization across markets. NVIDIA is now a full-stack computing company with data-center-scale offerings that are reshaping industry.
Solution | Taking Off with NVIDIA Chips
AWS Partner NVIDIA delivered the powerful GPUs that Fireworks AI needed to take off. “NVIDIA is the best GPU and high-performance kernel provider in the world,” said Lin Qiao, chief executive officer and co-founder at Fireworks AI. “Access to advanced GPUs through Amazon EC2 has been fantastic. We reliably get accelerated computing that helps us stay ahead of the game.” Fireworks AI serves models on both NVIDIA A100 and H100 Tensor Core GPUs and has built its own kernels on top of NVIDIA’s libraries.
The platform also uses Amazon Elastic Kubernetes Service (Amazon EKS) and Amazon Simple Storage Service (Amazon S3). Amazon EKS offers an optimized image that includes configured NVIDIA drivers for GPU-enabled instances of Amazon Elastic Compute Cloud (Amazon EC2), making it easy to run GPU-powered workloads. For Fireworks AI, the Kubernetes tier allows the team to orchestrate services across various machines. “Because Amazon Web Services (AWS) has battle-tested Amazon EKS, we can focus on our product development,” said Dmytro Dzhulgakov, chief technology officer at Fireworks AI.
With NVIDIA GPUs running on AWS, Fireworks AI can deliver customers a high-performance inference service. In fact, the H100 GPU provides up to 20X higher performance than the prior generation. It can also be partitioned into as many as seven GPU instances using NVIDIA Multi-Instance GPU (MIG) technology to dynamically adjust to shifting demands. “As we continue to optimize for performance, NVIDIA H100s are key because they accelerate serving speed greatly,” Qiao said.
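The "up to seven instances per GPU" figure implies a simple capacity calculation. The toy packer below is purely illustrative: it is not NVIDIA's MIG scheduler, and real MIG profiles come in fixed slice shapes rather than arbitrary counts.

```python
# Toy capacity math for Multi-Instance GPU (MIG) partitioning.
# Illustrative only: real MIG profiles have fixed shapes (e.g. 1g.10gb),
# not arbitrary per-workload slice counts.
SLICES_PER_GPU = 7  # an H100 can be split into up to seven MIG instances

def gpus_needed(workload_slices: list[int]) -> int:
    """Greedy first-fit-decreasing packing of workloads (in slices) onto GPUs."""
    gpus: list[int] = []  # free slices remaining on each GPU
    for need in sorted(workload_slices, reverse=True):
        if need > SLICES_PER_GPU:
            raise ValueError("workload exceeds one GPU")
        for i, free in enumerate(gpus):
            if free >= need:
                gpus[i] -= need  # place on an existing GPU with room
                break
        else:
            gpus.append(SLICES_PER_GPU - need)  # provision a new GPU
    return len(gpus)

print(gpus_needed([1] * 7))    # seven small workloads share one GPU -> 1
print(gpus_needed([4, 3, 2]))  # 4+3 fill one GPU, 2 needs a second -> 2
```

The design point MIG enables is the first case: several small inference workloads can share one physical GPU instead of each occupying a whole device.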
Outcome | Lowering Latency by 4X
In addition to high-quality inference, Fireworks AI also delivers up to 4X lower latency than other popular open-source large language model (LLM) serving engines such as vLLM. “Fireworks AI works through the entire stack—from inference serving orchestration, to PyTorch runtime optimization and low-level kernel optimization, to device, CPU, and memory bandwidth optimization,” Qiao said. The result is a generative AI platform that enables both fast and high-quality inference, so that users can have the best possible experience with new generative AI products.
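Latency claims like these are typically reported as percentiles over many requests rather than single measurements. The snippet below shows one common way to summarize per-request latencies; the sample values are fabricated for illustration.

```python
# Summarizing per-request latencies as p50/p95 percentiles.
# The latency samples here are fabricated for illustration.
import statistics

def latency_summary(samples_ms: list[float]) -> dict[str, float]:
    """Return median (p50) and 95th-percentile (p95) latency."""
    qs = statistics.quantiles(samples_ms, n=100)  # 99 cut points, q1..q99
    return {"p50": statistics.median(samples_ms), "p95": qs[94]}

samples = [120, 135, 110, 400, 125, 130, 118, 122, 140, 128]
summary = latency_summary(samples)
print(summary)
```

Reporting p95 alongside the median matters because a single slow outlier (the 400 ms sample above) barely moves the median but dominates the tail that users actually notice.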
The Fireworks AI team continues to expand its partnership with NVIDIA to build out the next evolution of its serving tier. “We’re excited about the latest generation of GPUs from NVIDIA and AWS because of the higher memory bandwidth and computational power they provide,” Qiao said. Advancements in chip technology will directly impact the performance that Fireworks AI delivers to its customers.
We’re excited about the latest generation of GPUs from NVIDIA and AWS because of the higher memory bandwidth and computational power they provide.
Lin Qiao
CEO and Co-founder, Fireworks AI