
Reducing training time and costs by 50% using AWS Trainium with Splash Music

Splash Music is creating a new music format and has cut training costs by 54 percent by building HummingLM on AWS Trainium.

Benefits

54%
reduction in training costs
50%
reduction in training time
30,000+
music creations and >730 million streams

Overview

Music and gaming company Splash Music (Splash) believes that AI-generated music should incorporate human creativity, not replace it. That’s why the company is altering the industry with HummingLM, an AI-powered music generator that makes it possible for anybody to collaborate and create songs by humming a tune. As a small company with a big mission, Splash needed to train and scale its audio-to-audio models while minimizing costs and accelerating training iterations.

Less than a month after migrating to Amazon Web Services (AWS), Splash cut model training time and costs by more than half and unlocked rapid scaling that fueled creators to make more than 30,000 new songs and generate more than 730 million streams and counting.


About Splash Music

Splash Music is a music and gaming company that uses generative AI to open up new ways for music to be made and shared.

Opportunity | Using AWS Trainium and SageMaker HyperPod to accelerate model training

Founded in Brisbane, Australia, Splash began with experiments in neural synthesis, which uses deep neural networks to generate sounds, to develop models that turn a simple hum into music. That concept became HummingLM. Unlike the text-to-audio tools offered by other AI music companies, HummingLM lets users create music with their own melodies, lyrics, and styles. Instead of trying to describe music in words or notes, users can make it directly with their own voices. (See figure 1 below.)

Before adopting AWS, Splash had to juggle multiple systems, which slowed experiments for its lean team. Splash needed faster training, more cost-effective inference, and less complex infrastructure. “We wanted the robustness of AWS compute instances, but also a single solution for hosting, APIs, deployment, and security. AWS gave us that,” says Prabhjeet Ghuman, principal architect at Splash. “We wanted to spend less time managing infrastructure and more time building features.”

Splash turned to AWS because its team was already familiar with the platform’s benefits and wanted to explore them further. The team migrated HummingLM to AWS Trainium, a family of AI chips purpose-built by AWS for training and inference. In addition to AWS AI chips, Splash also used Amazon SageMaker HyperPod, which removes the undifferentiated heavy lifting involved in building generative AI models, to orchestrate and distribute training across AWS Trainium nodes. “We’re a small team trying to do something massive,” says Tracy Chan, CEO at Splash. “Using AWS AI chips and SageMaker HyperPod helped us build new AI models and scale our business.”
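As a rough illustration of what provisioning Trainium capacity through SageMaker HyperPod can look like, the sketch below creates a small cluster of Trainium (trn1) instances with boto3. The cluster name, instance count, lifecycle-script location, and IAM role are hypothetical placeholders, not details from Splash’s setup.

import boto3

# Hypothetical sketch: provisioning a SageMaker HyperPod cluster backed by
# Trainium (trn1) instances. All names, counts, and paths are placeholders.
sagemaker = boto3.client("sagemaker", region_name="us-west-2")

response = sagemaker.create_cluster(
    ClusterName="humminglm-training",  # placeholder cluster name
    InstanceGroups=[
        {
            "InstanceGroupName": "trainium-workers",
            "InstanceType": "ml.trn1.32xlarge",  # Trainium-based instance type
            "InstanceCount": 4,  # illustrative node count
            "LifeCycleConfig": {
                # Lifecycle scripts for cluster setup are staged in S3.
                "SourceS3Uri": "s3://example-bucket/hyperpod-lifecycle/",
                "OnCreate": "on_create.sh",
            },
            "ExecutionRole": "arn:aws:iam::111122223333:role/ExampleHyperPodRole",
        }
    ],
)
print(response["ClusterArn"])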

Solution | Streamlining training and deployment on AWS

Splash joined the AWS Generative AI Accelerator program, an 8-week hybrid program that supports generative and agentic AI startups. This connected Splash to the AWS Generative AI Innovation Center, a team of AWS data science and generative AI experts. With guidance from AWS, Splash focused on building feasible, high-impact features. “In AI, problems can be open-ended, taking years to find a solution if they aren’t directed,” says Randeep Bhatia, CTO at Splash. “The AWS Generative AI Innovation Center team helped us focus on features and solutions that had the most impact for our users.”

Migrating to AWS Trainium delivered equivalent model accuracy with faster training iterations at a lower cost. Splash also migrated 2 PB of data from another cloud provider to Amazon Simple Storage Service (Amazon S3), an object storage service offering industry-leading scalability, data availability, security, and performance. Despite the volume of data, the transfer was completed in 1 week.
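For context, large transfers into Amazon S3 typically lean on parallel multipart uploads. The sketch below shows that pattern with boto3; the bucket, key, file path, and tuning values are hypothetical and not taken from Splash’s migration, which would have run many such transfers in parallel or used a managed transfer service.

import boto3
from boto3.s3.transfer import TransferConfig

# Hypothetical sketch of a parallel multipart upload into Amazon S3.
# Bucket, key, and file names are placeholders.
s3 = boto3.client("s3")

# Split large objects into 256 MiB parts and upload up to 16 parts concurrently.
config = TransferConfig(
    multipart_threshold=256 * 1024 * 1024,
    multipart_chunksize=256 * 1024 * 1024,
    max_concurrency=16,
)

s3.upload_file(
    Filename="training-shards/shard-00001.tar",
    Bucket="example-humminglm-datasets",
    Key="audio/shard-00001.tar",
    Config=config,
)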

Splash’s music creation app now runs on AWS Amplify, which brings the power and breadth of AWS services to a familiar frontend developer experience. For model training, SageMaker HyperPod provides fully managed orchestration, while Amazon FSx for Lustre delivers high-performance, cost-effective, and scalable storage. This results in low-latency and high-throughput access to training data and checkpoints.
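To make the training setup concrete, the sketch below shows a single training step on a Trainium device through the PyTorch/XLA interface that the AWS Neuron SDK builds on, with the FSx for Lustre file system assumed to be mounted at /fsx for checkpoints. The tiny model, loss, and paths are illustrative stand-ins; Splash’s HummingLM architecture is not described in this story.

import torch
import torch_xla.core.xla_model as xm

# Hypothetical sketch of one training step on a Trainium core, which is
# exposed to PyTorch as an XLA device. The small linear model and random
# batch are stand-ins for a real audio model and dataloader.
device = xm.xla_device()

model = torch.nn.Linear(512, 512).to(device)
optimizer = torch.optim.AdamW(model.parameters(), lr=1e-4)

batch = torch.randn(8, 512).to(device)  # stand-in for a batch read from /fsx

optimizer.zero_grad()
loss = model(batch).pow(2).mean()  # placeholder loss
loss.backward()
optimizer.step()
xm.mark_step()  # triggers compilation and execution of the accumulated XLA graph

# Checkpoints are written to the assumed FSx for Lustre mount point.
xm.save(model.state_dict(), "/fsx/checkpoints/example_step.pt")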

For inference, Splash deploys trained models from Amazon S3 onto Amazon Elastic Compute Cloud (Amazon EC2) Inf2 Instances, which deliver three times higher compute performance, four times larger total accelerator memory, up to four times higher throughput, and up to 10 times lower latency than previous-generation Inf1 instances. Together, these services reduced operational overhead and helped Splash’s team focus on rapid feature delivery. (See figure 2 below.)
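The sketch below shows the general shape of that deployment path using the AWS Neuron SDK for PyTorch: a trained model is compiled ahead of time for Inferentia2 with torch_neuronx.trace, saved, and reloaded for inference on the Inf2 instance. The small model and file names are hypothetical placeholders rather than Splash’s production artifacts.

import torch
import torch_neuronx

# Hypothetical sketch: compile a trained PyTorch model for the Inferentia2
# (Inf2) NeuronCores and run inference. The model is a placeholder.
model = torch.nn.Sequential(torch.nn.Linear(512, 512), torch.nn.GELU())
model.eval()

example_input = torch.rand(1, 512)

# Ahead-of-time compilation for the Neuron runtime on Inf2.
neuron_model = torch_neuronx.trace(model, example_input)

# The compiled artifact can be saved (for example, after being pulled from
# Amazon S3 at deploy time) and reloaded with torch.jit.
torch.jit.save(neuron_model, "humminglm_block.neuron.pt")
loaded = torch.jit.load("humminglm_block.neuron.pt")

with torch.no_grad():
    output = loaded(example_input)
print(output.shape)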

Outcome | Influencing the future of music

With user engagement surging, creators have made over 30,000 new songs and driven over 730 million streams and counting, and Splash is making a real-world impact beyond the AI music space to all of music, from creation to listening. “Our top five creators are getting so many streams that they would be in the global top 100 if measured against mainstream music-streaming platforms,” says Chan. Behind the spike in engagement, training costs for Splash’s generative music models dropped 54 percent and training time fell 50 percent, thanks to burst capacity that accelerated training and let the team efficiently reuse the same hardware for lower-capacity inference.

Splash plans to enhance HummingLM with features like timbre shaping, stem separation, and auto-tuning to expand users’ creative freedom. HummingLM’s audio-to-audio approach is also the subject of a forthcoming academic paper, highlighting its influence beyond the music industry.

Through HummingLM, Splash is reshaping music by empowering anyone to connect, collaborate, create, and discover music. It has also opened new pathways for fans to interact with and enjoy their favorite musicians’ art through platforms like Roblox. “In essence, we’re creating a new music format that will change how everyone thinks about music, which is really, really powerful,” says Chan.

Figure 1.

Diagram showing how Splash Music’s foundation model, HummingLM, is trained and used for inference to generate high-quality music

Figure 2.

To accelerate model training, Splash Music uses AWS Trainium nodes while relying on the resilience provided by Amazon SageMaker HyperPod with Amazon EKS cluster orchestration

