AWS Startups Blog

Sensors, Sensors Everywhere: Building a Sports Tracking Phenomenon

Trace helps athletes auto-edit their video

Guest post by David Lokshin, co-founder, Trace

Trace helps athletes improve their performance and auto-edits their video. We do this by giving skiers, snowboarders, and surfers stats like speed, vertical, airtime, number of runs or waves, jump distance, and much more. Athletes can compare against their own historical performance or against anyone else on the platform. We also sync with cameras like the GoPro and auto-edit all of an athlete's highlights. In this blog post, I’ll talk about how Trace got started and how we scale to meet seasonal demand with help from Amazon ElastiCache and Spot Instances.

The Original Idea

In late 2010, my father, Anatole Lokshin, and I had both quit our jobs and were consulting full time. Anatole had been CTO of Magellan Navigation for more than a decade, and we spent a lot of time thinking about what the “next GPS” would be. When we started Trace, we didn’t have an idea for a product and didn’t exactly want to start a company. We had a view of what the future would look like. And in that future, inertial sensors, or inertial measurement units (IMUs), would be everywhere.

Our original idea was to use sensors to combine the real world and video games. You’d wear something on your body while doing physical activity, or maybe the data would even come from your phone. Then that data would somehow unlock something, like a new level or tool, inside a video game. We spent more than a year trying to turn real-life movement into a 3D animation complete with avatars and Google Earth imagery.

In the process of doing that, we built a portal for skiers to track their performance and compare with one another. What we really wanted was the raw data, and building the portal was a way to entice ski bums to collect data for us. Two weeks after the portal went live, we got a call at 1:30 in the morning from a user informing us that data on the leaderboard was wrong. Before hanging up, he made us promise it would be fixed by morning. In a year and a half of work on our video game project, no one had shown even a hint of this sort of interest. We changed gears the next morning.

Changing Gears

By the next ski season, we had released a rough HTML5 app for Android and iPhone that tracked snow sports. The app measured stats like vertical drop, speed, calories, number of runs, which lift you took up, and more. We grew to 70,000 users in three months.

A lot of this was luck. We sent out over a thousand emails to every ski club with an online presence, and some of those emails turned into introductions to ski resorts. The resorts adopted us as their de facto app and marketed it for us. Social hooks inside the app then drove further growth.

In the beginning, virtually every request to our website or app hit the database directly. We had a single server sitting in a colocation facility, and things were getting really painful. Every week we were fiddling with settings to try to eke out more performance so that we wouldn’t have to retool anything midseason.

Once the snow started to melt, we took a hard look at our architecture and decided how to optimize moving forward. We weren’t going to survive another season like this. We started by moving our dev environments to a few cloud services to see how we liked them. Immediately, we saw why AWS was really starting to gain steam.

Moving to the AWS Cloud

For us, the best things about AWS are (1) the flexibility of its services and (2) the number of options available to us as engineers. As Trace grew, we started to develop processes with different requirements. Data processing and video processing are very CPU intensive, while leaderboards are very memory intensive. Enter compute-optimized and memory-optimized instances. As the number of leaderboards and the number of people stored in them grow, we scale up instance types as needed.
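To make that concrete, here’s a minimal sketch, using boto3 (the AWS SDK for Python), of launching a different instance family per workload. The AMI ID, instance types, and workload names are illustrative placeholders, not our actual configuration:

```python
import boto3

ec2 = boto3.client("ec2", region_name="us-west-2")

# Hypothetical mapping of workload to instance family; substitute your own.
WORKLOADS = {
    "video-processing": "c3.2xlarge",  # compute-optimized for CPU-heavy encoding
    "leaderboards": "r3.xlarge",       # memory-optimized for large in-memory sets
}

for name, instance_type in WORKLOADS.items():
    ec2.run_instances(
        ImageId="ami-12345678",        # placeholder AMI
        InstanceType=instance_type,
        MinCount=1,
        MaxCount=1,
        TagSpecifications=[{
            "ResourceType": "instance",
            "Tags": [{"Key": "workload", "Value": name}],
        }],
    )
```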

When we first moved to AWS, we didn’t use Redis, the open source key-value cache, at all. Now we have multiple services running Redis on Amazon ElastiCache. When we decided to roll out a new video feature, we didn’t worry about optimization because we knew that AWS would have an option available to us if the feature really took off. As you grow, AWS has an affordable option for you that will keep your time focused on building the product and not playing system admin.
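As a sketch of the kind of leaderboard workload this handles, here’s the standard Redis sorted-set pattern with redis-py. The ElastiCache endpoint, key names, and stats are hypothetical, not necessarily Trace’s exact schema:

```python
import redis

# Hypothetical ElastiCache endpoint; in practice this comes from config.
r = redis.Redis(host="my-cache.abc123.use1.cache.amazonaws.com", port=6379)

def record_top_speed(resort, user_id, speed_mph):
    """Keep only each user's best speed on the resort leaderboard."""
    key = f"leaderboard:{resort}:top_speed"
    current = r.zscore(key, user_id)       # None if the user isn't ranked yet
    if current is None or speed_mph > current:
        r.zadd(key, {user_id: speed_mph})  # sorted set: member -> score

def top_ten(resort):
    """Return the ten highest speeds, best first, with scores attached."""
    key = f"leaderboard:{resort}:top_speed"
    return r.zrevrange(key, 0, 9, withscores=True)
```

Sorted sets keep every member ordered by score entirely in memory, which is exactly why leaderboards push us toward memory-optimized instances as they grow.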

Scaling a Seasonal Business

On top of all of this, the sports that we track — skiing, snowboarding, and surfing — are very seasonal. We might process 100 videos an hour from users during the height of ski season, and then one a day after the snow has melted. At our peak we ingest about 100 GB of binary sensor data a day, whereas off season it might be 1/100th of that. As such, we’re heavy users of Spot Instances and scale our EC2 instances up and down by time of year and time of season. Presidents’ Day weekend, for example, is a huge time for skiing vacations, and with Spot Instances we can scale to meet this demand without wasting resources.
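For illustration, here’s what a minimal Spot request looks like with boto3. The bid price, instance count, and AMI are placeholders; in practice the numbers are driven by season and load:

```python
import boto3

ec2 = boto3.client("ec2", region_name="us-west-2")

# Placeholder values: bid, count, and AMI are illustrative only.
response = ec2.request_spot_instances(
    SpotPrice="0.10",                  # max price per instance-hour we'll pay
    InstanceCount=20,                  # scale up for peaks like Presidents' Day weekend
    Type="one-time",
    LaunchSpecification={
        "ImageId": "ami-12345678",     # placeholder worker AMI
        "InstanceType": "c3.2xlarge",  # compute-optimized for video processing
    },
)

for req in response["SpotInstanceRequests"]:
    print(req["SpotInstanceRequestId"], req["State"])
```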

Conclusion

All in all, we use AWS because it gives us so much flexibility at such an affordable price. The time and money we save by not needing a system admin is spent on things like building new features, launching new marketing initiatives, and working more closely with athletes. Our core competency lies in algorithms and products, not in our ability to be great system admins. With AWS, we can focus on our core competencies as much as possible.