AWS for Games Blog
How Code Wizards load tested Heroic Labs’ Nakama to two million concurrent players with AWS
Many game developers struggle to build game backend systems that can scale to large numbers of players, especially during events like game launches, when traffic can be very high and unpredictable. Nakama, a game backend service from Heroic Labs (an AWS partner), aims to solve this problem. The Nakama platform runs in the cloud and automatically scales up or down to handle even the biggest multiplayer games.
Before adopting any backend solution, it’s critical to validate how well it can handle large volumes of traffic and players. This is called load testing: developers simulate high-traffic conditions to see whether the system can cope with the expected number of players, allowing them to identify bottlenecks or issues before the service is used for an actual game release.
This blog post investigates how Code Wizards used AWS to load test Heroic Labs’ Nakama on Heroic Cloud—the fully managed version of Nakama running on AWS—to two million concurrently connected users (CCU) across a variety of use cases.
Why use a game backend-as-a-service?
Modern games often require a large amount of backend infrastructure to support features such as authentication, multiplayer capabilities, community-building tools, real-time chat, and online leaderboards. While game developers are capable of building this infrastructure themselves, the undifferentiated heavy lifting of implementing, supporting, and scaling it takes time away from the core business of making a game that is fun to play.
This is where a backend-as-a-service, or BaaS, helps. By providing out-of-the-box support for common gaming features, a BaaS lets game developers take advantage of ready-built functionality by simply integrating an SDK or calling an API. Heroic Labs’ Nakama is a BaaS used by game developers worldwide, supporting games across a variety of platforms and game engines.
Load testing Nakama using AWS
To validate the scalability and performance of Nakama on Heroic Cloud, Heroic Labs partnered with Code Wizards—an AWS partner specializing in developing and supporting cloud solutions for gaming companies—to conduct a large-scale load test using a range of AWS services.
Code Wizards and Heroic Labs built the solution shown in the following high-level architecture:
The key AWS services involved are:
- AWS Fargate – Serverless compute engine for containerized workloads. Used to run the Nakama clients that simulate players for the load test.
- Amazon Aurora – High-performance, fully managed relational database with global scale. Used to provide all database-backed operations for Nakama on Heroic Cloud, including user accounts, authentication, wallet transactions, and player inventory.
- Application Load Balancer (ALB) – Managed, elastically scalable load balancer that automatically distributes incoming application traffic across multiple targets. Used as the ingress point for simulated player traffic.
- Amazon Elastic Kubernetes Service (Amazon EKS) – Managed Kubernetes service for running containerized workloads on AWS. Used to run the core Nakama platform nodes, which receive and process the simulated player traffic.
- Amazon Simple Storage Service (Amazon S3) – Highly scalable, cloud-based object storage service. Used to store the test bundles, or scenarios.
Load testing to two million concurrent users
Code Wizards used the open-source load testing tool Artillery to generate virtual players to simulate load. Each virtual player executes a scenario: a sequence of actions such as making an HTTP GET or POST request, or establishing and sending data over a WebSocket connection.
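For illustration, the sketch below shows the kind of action sequence a single virtual player performs against a Nakama server, written with Heroic Labs’ JavaScript/TypeScript client (@heroiclabs/nakama-js). It is not the actual Artillery configuration Code Wizards used; the server key, host, port, and device ID generation are placeholder assumptions, and the WebSocket step assumes a runtime with a global WebSocket implementation (a browser or a recent Node.js).

```typescript
import { v4 as uuidv4 } from "uuid";
import { Client } from "@heroiclabs/nakama-js";

// Placeholder connection details. The real test targeted Heroic Cloud
// endpoints behind an Application Load Balancer, not these values.
const SERVER_KEY = "defaultkey";
const HOST = "nakama.example.com";
const PORT = "7350";
const USE_SSL = true;

async function runVirtualPlayer(): Promise<void> {
  const client = new Client(SERVER_KEY, HOST, PORT, USE_SSL);

  // Authenticate with a random device ID; passing `true` creates a new
  // account on first use and returns a session token.
  const session = await client.authenticateDevice(uuidv4(), true);

  // Establish a WebSocket connection for real-time traffic.
  const socket = client.createSocket(USE_SSL);
  await socket.connect(session, true);

  // Scenario-specific actions (chat, wallet, inventory) would follow here.
}

runVirtualPlayer().catch((err) => console.error("virtual player failed:", err));
```

In the actual test, sequences like this were driven by Artillery scenarios executed across many workers rather than by a standalone script.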
To scale up to two million simulated players, Artillery orchestrated a cluster of Fargate nodes, each of which ran a number of containers hosting the virtual players. Scaling up to 25,000 Fargate nodes over approximately 50 minutes created over two million virtual players. Because Fargate is a serverless service, there was no need to manage the underlying server infrastructure. This also enabled Heroic Labs to scale everything back to zero once complete, avoiding unnecessary cost.
Once scaled up, the following scenarios were tested against the Nakama platform:
- Basic stability at scale: This scenario established a baseline for Nakama’s ability to handle large numbers of concurrent connections. Each of the two million simulated players performed the following common actions:
  - Authenticated
  - Created a new account
  - Received a session token
  - Established a WebSocket connection
- Real-time throughput: This scenario added a more intensive real-time messaging workload, with simulated players joining chat channels and sending messages at a high rate. In addition to the common actions from scenario 1, each player performed one of the following actions every 10–20 seconds for four hours (see the sketch after this list):
  - Joined a chat channel
  - Sent a randomly generated 10–100-byte chat message
- Combined workload: This scenario tested Nakama’s performance under a database-intensive workload, with simulated players performing actions that require database writes. In this scenario, players carried out the common actions from scenario 1 and additionally performed one of the following actions every 60–120 seconds for four hours:
  - Spent some coins from the player wallet
  - Added an item to the player inventory
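To make the two heavier scenarios more concrete, the following sketch shows what the per-player loops for scenarios 2 and 3 might look like with @heroiclabs/nakama-js, reusing the session and socket established in the earlier sketch. The channel name, message contents, timing helpers, and in particular the "wallet_spend" and "inventory_add" RPC IDs are assumptions for illustration only; Nakama wallets and inventories are updated by custom server runtime code, so the real test’s RPC names and payloads may differ.

```typescript
import { Client, Session, Socket } from "@heroiclabs/nakama-js";

// Helper: random integer in [min, max].
const randInt = (min: number, max: number): number =>
  Math.floor(Math.random() * (max - min + 1)) + min;

// Scenario 2: join a chat channel, then send a small message every
// 10-20 seconds for the duration of the test.
async function chatLoop(socket: Socket, durationMs: number): Promise<void> {
  // Channel type 1 is a room; the persistence and hidden flags are assumptions.
  const channel = await socket.joinChat("load-test-room", 1, false, false);
  const end = Date.now() + durationMs;
  while (Date.now() < end) {
    const payload = "x".repeat(randInt(10, 100)); // 10-100 byte message body
    await socket.writeChatMessage(channel.id, { message: payload });
    await new Promise((r) => setTimeout(r, randInt(10_000, 20_000)));
  }
}

// Scenario 3: trigger database-backed wallet/inventory writes every
// 60-120 seconds via hypothetical server-side RPCs.
async function economyLoop(
  client: Client,
  session: Session,
  durationMs: number
): Promise<void> {
  const end = Date.now() + durationMs;
  while (Date.now() < end) {
    if (Math.random() < 0.5) {
      await client.rpc(session, "wallet_spend", { coins: randInt(1, 10) });
    } else {
      await client.rpc(session, "inventory_add", { item: "potion", qty: 1 });
    }
    await new Promise((r) => setTimeout(r, randInt(60_000, 120_000)));
  }
}
```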
To track performance, logs from Nakama were output to a time-series database before being sent to Grafana for visualization.
In each of the scenarios, the load testing environment scaled to over two million virtual players for several hours, with Nakama on Heroic Cloud successfully handling the load while operating within normal tolerances.
Conclusion
By leveraging highly scalable services like Fargate, Aurora, and Amazon EKS, Heroic Labs and Code Wizards pushed the limits of Nakama’s performance and demonstrated its suitability for even the largest-scale games. Using serverless and managed services removed the overhead of managing underlying server instances, and only the resources required for the load test needed to be provisioned.
This meant that generating the load of two million virtual players for several hours was cost-effective, and once complete, the infrastructure used could be scaled back to zero.
It should be noted that while this approach worked for Heroic Labs, each use case is different. If you want to run your own load test, it is important to identify the scenarios you intend to test, the key performance metrics, and appropriate thresholds for those metrics. AWS offers prescriptive guidance to help you plan and execute a load test that works for your use case.
Nakama on Heroic Cloud is available via the AWS Marketplace. More detail on this load test can also be found in Heroic Labs’ own blog post.