AWS News Blog

Autheos – At the Nexus of Marketing and E-Commerce

In today’s guest post, Leon Mergen, CTO of Autheos, reviews their company history and their move to AWS.

Jeff;


Adding video to a product page on an e-commerce site is perhaps the single most effective way to drive increased sales; studies have shown that sales conversion rates can go up by more than two thirds. In addition, product video viewing data fills a gaping hole in a brand's or supplier's ability to assess how effectively their online and offline marketing drives e-commerce sales. We had built a decent product video distribution platform, but we knew we couldn't scale globally with the technology we were using. So, in September last year, we decided to transition to AWS, and while doing so we built an e-commerce marketing support tool for Brands which, judging by customer response, is a game changer. This is our story.

The Perils of Good Fortune
Autheos was founded in 2012, when the biggest Webshop in Holland and Belgium asked us to turn an existing piece of technology into a video hosting solution that would automatically find and insert product videos into their product sales pages. A startup rarely finds itself in a better position to start, so we jumped right in and started coding. In retrospect, that was a mistake, for two reasons.

For one thing, we grew too fast. When you have a great client that really wants your product, the natural reaction is to build it as fast as you can. Since there wasn't a team in place, we onboarded engineers too quickly and outsourced several components to remote development shops, which resulted in classic communication problems and technical incompatibilities.

More importantly, since we already had an existing piece of technology, we never took the time to think about how we would build it if we were starting from scratch. It seemed quicker to adapt it to the new requirements. And much like a homeowner who opts for renovation instead of tear-down and rebuild, we had to make all sorts of compromises as a result.

However, thanks to many all-nighters we managed to meet the deadline and launch a platform that allowed brands such as Philips, LEGO, L’Oreal, and Bethesda to upload product videos (commercials, guides, reviews, and so forth) for free and tag them with a product code and language.

The webshops integrated a small piece of JavaScript code that let them query our video database in real time with a product code and language, display a custom button if a video was found, and pop up the right video(s) for the product in the desired language.
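In outline, the embed flow works something like the sketch below. The endpoint URL, parameter names, and response shape here are invented for illustration and are not Autheos' real API.

```javascript
// Hypothetical sketch of the embed flow: build a lookup URL from a product
// code and language, then render a button only if the lookup found videos.
function buildVideoQueryUrl(productCode, language) {
  const params = new URLSearchParams({ product: productCode, lang: language });
  return `https://api.example-videohost.com/v1/videos?${params.toString()}`;
}

// Show a custom button only when at least one matching video was found;
// openVideoPopup (the player itself) is omitted here.
function showButtonIfVideoFound(response, container) {
  if (!response.videos || response.videos.length === 0) return false;
  const button = document.createElement('button');
  button.textContent = 'Watch product video';
  button.addEventListener('click', () => openVideoPopup(response.videos));
  container.appendChild(button);
  return true;
}
```

Because the button is only injected when the real-time lookup succeeds, product pages without a video are left completely untouched.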

Click here to see an example video on Bol.com (the biggest webshop in Benelux); our video is behind the button.

The results: less work for the webshop (no more manually gathering videos, decoding/encoding them, hosting them, and matching them with the right products) and more sales. Our client convinced its Brands to start uploading their videos, which kickstarted our exponential growth. Soon we had so many Brands on our platform, and so many videos in our database, that nearly all major webshops in Benelux wanted to work with us as well (often pushed to do so by Brands, who didn't want the hassle of interfacing and integrating with many different webshops).

This might sound great, but remember how we built the product in a rush with legacy code?  After three years of fire-fighting, interspersed with frequent moments of disbelief when we found out that certain features we wanted to offer were impossible due to limitations in our backend, we decided enough was enough… it was time to start over.

A New Beginning with AWS
Our key requirements were that we needed to seamlessly scale globally, log and process all of our data, and provide high-performance access to our ever-growing database of product videos. Besides this, we needed to be able to ship new features and products quickly without impacting wider operations. Oh, and we wanted to be up and running on the new platform in six months. As the de facto standard for web applications, AWS was an easy choice. However, we soon realized that it wasn't just an easy decision; it was a really smart one too.

Elastic Transcoder was the main reason we decided to go with AWS. Before working with it, we used a custom transcoding service built by an outsourced company in Eastern Europe. Hosted there on antiquated servers, the service suffered from frequent downtime and caused many headaches. Elastic Transcoder lets us forget about all of these problems and gives us a stable transcoding service that we can scale on demand.
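Submitting a job to Elastic Transcoder amounts to one API call with a pipeline, an input key, and an output preset. The sketch below builds such a request; the pipeline ID, keys, and region are placeholders, not Autheos' actual configuration, though the preset ID shown is one of AWS's published system presets.

```javascript
// Illustrative Elastic Transcoder job request builder. The pipeline ID and
// object keys are placeholders; '1351620000001-000010' is the AWS system
// preset "Generic 720p".
function buildTranscodeJobParams(inputKey) {
  return {
    PipelineId: '1111111111111-abcde1',           // placeholder pipeline
    Input: { Key: inputKey },
    Outputs: [{
      Key: inputKey.replace(/\.[^.]+$/, '.mp4'),  // e.g. .mov -> .mp4
      PresetId: '1351620000001-000010',
    }],
  };
}

// With the AWS SDK for JavaScript this would be submitted roughly as:
//   const et = new AWS.ElasticTranscoder({ region: 'eu-west-1' });
//   et.createJob(buildTranscodeJobParams('uploads/lego-spot.mov'), callback);
```

The pipeline, not the job, decides which S3 buckets the input is read from and the transcoded output lands in, which is what makes the single-call workflow possible.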

When we moved our application servers to AWS, we also activated Amazon CloudFront. This was a no-brainer for us even though there are many other CDNs available, as CloudFront integrates unbelievably well with the rest of AWS. Essentially, it just worked. With a few clicks we built a transcoding pipeline that publishes its output directly through CloudFront. We make a single API call, and AWS takes care of the rest, including CDN hosting. It's really that easy.

As we generate a huge number of log records every day, we had to make sure these were stored in a flexible and scalable environment. A regular PostgreSQL server would have worked, but it would never have been cost-efficient at our scale. So we ran some prototypes with Amazon Redshift, the PostgreSQL-compatible data warehousing solution from AWS. We set up Kinesis Firehose to stream data from our application servers to Amazon Redshift, writing it out in batches (in essence, a full ETL process as a service), something that would have taken a major effort with a traditional webhost. Doing this outside of AWS would have taken months; with AWS we set it all up in three days.
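On the application-server side, the batching is the only part we had to think about: Firehose's PutRecordBatch call accepts at most 500 records per request, and Redshift's COPY likes newline-delimited JSON. A minimal sketch of that shaping step, with an invented stream name and record format:

```javascript
// Rough sketch: group log records into PutRecordBatch-sized requests.
// 'video-view-events' and the record shape are illustrative placeholders;
// 500 is the documented per-call record limit for PutRecordBatch.
function toFirehoseBatches(records, batchSize = 500) {
  const batches = [];
  for (let i = 0; i < records.length; i += batchSize) {
    batches.push({
      DeliveryStreamName: 'video-view-events',
      Records: records.slice(i, i + batchSize)
        // Newline-delimited JSON so the rows can be split apart downstream.
        .map((r) => ({ Data: JSON.stringify(r) + '\n' })),
    });
  }
  return batches;
}
```

Everything after this point (buffering, S3 staging, and the Redshift COPY) is handled by Firehose itself, which is why the whole pipeline took days rather than months.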

The next big challenge was working with this data through data mining frameworks, for which many solutions exist in the market. Amazon, however, offers great solutions in an integrated platform that let us test and implement rapidly. For batch processing we use Spark, provided by Amazon EMR. For temporarily hooking into data streams (e.g., for our monitoring systems) we use AWS Data Pipeline, which gives us access to the stream of data as it is generated by our application servers, comparable to what Apache Kafka would give you.

Everything we use is accessible through an SDK, which lets us run integration tests effectively in an isolated environment. Instead of mocking services, or setting up temporary services locally and in our CI environment, we use the AWS SDK to create and clean up real AWS services on the fly. The flexibility and operational effectiveness this brings is incredible: our whole production environment can be replicated in a programmable setup in which we can simulate specific experiments. Furthermore, by actually integrating all services in our automated tests we catch many more problems, issues that would otherwise only surface during manual testing or staging.
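The create-then-clean-up pattern can be sketched as below. The helper names are our own invention for illustration; the commented calls stand in for real AWS SDK for JavaScript operations, which an actual test suite would invoke against a live account.

```javascript
// Unique, disposable resource names avoid collisions between concurrent
// CI runs that share one AWS account.
function testResourceName(prefix) {
  return `${prefix}-${Date.now()}-${Math.floor(Math.random() * 1e6)}`;
}

// Create a throwaway bucket, run the test against it, and tear it down
// even if the test fails. `s3` would be an AWS.S3 client in practice.
async function withTempBucket(s3, runTest) {
  const Bucket = testResourceName('it-autheos');
  await s3.createBucket({ Bucket }).promise();
  try {
    return await runTest(Bucket);
  } finally {
    await s3.deleteBucket({ Bucket }).promise(); // cleanup on every path
  }
}
```

The try/finally is the important part: a failing assertion must not leave orphaned resources behind, or the isolated environment stops being isolated.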

Through AWS CloudFormation and AWS CodeDeploy we seamlessly build our cloud from templates, and we integrate this with our testing systems to support our Continuous Deployment setup. We could, of course, have used Chef or Puppet with traditional webhosts, but the key benefit of using the AWS services for this is that we have instant access to a comprehensive ecosystem of tools and features with which we can integrate (and de-integrate) as we go.
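A CloudFormation template declares the stack's resources in YAML or JSON; a minimal illustrative fragment (the resource names here are placeholders, and the real templates would define the full application stack) looks like this:

```yaml
# Minimal illustrative fragment only, not Autheos' actual template.
Resources:
  TranscodeOutputBucket:
    Type: AWS::S3::Bucket
  VideoAppDeployTarget:
    Type: AWS::CodeDeploy::Application
```

Because an entire stack like this can be created and deleted with a single API call, spinning environments up for tests and tearing them down afterwards stays cheap.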

Unexpected Bounty
One month in, things were going so smoothly that we did something we had never done before in the history of the company: we expanded our goals during a project without pushing out the delivery date. We always knew that we had data that could be really valuable for Brands, but since our previous infrastructure made it really difficult to access or work with this data, we had basically ignored it. However, just after we finished our migration to Redshift, one of our developers read an article about the powerful combination of Redshift and Periscope. So we decided to prototype an e-commerce data analysis tool.

A smooth connection with our Redshift tables was made almost instantly, and we saw our 500+ million records visualized in a few graphs that the Periscope team prepared for us.  Jaws dropped and our product manager went ahead and built an MVP. A few weeks of SQL courses, IRC spamming and nagging the Periscope support team later, and we had an alpha product.

We have shown this to a dozen major Brands and the response has been all we could hope for… a classic case of the fabled product/market fit. And it would not have happened without AWS.

An example of the dashboard for one of our Founding Partners (a global game development company).

Jackpot
With a state-of-the-art platform, promising new products, and the backend infrastructure to support global viral growth, we finally had a company that could attract the attention of professional investors… and within a few weeks of making our new pitch we had closed our first outside investment round.

We’ve come a long way from working with a bare bones transcoding server, to building a scalable infrastructure and best-in-class products that are ready to take over the world!

Our very first transcoding server.

What’s Next?
Driving viral spread globally to increase network effects, we are signing up new Webshops and Brands at a tremendous pace. We are putting the finishing touches on the first version of our e-commerce data analysis product for Brand marketers, and speccing out additional products and features for Brands and Webshops working with the Autheos Network. And of course we are looking for amazing team members to help make this happen. If you would like to join us on the next stage of our journey, please take a look at our website for current openings — and yes, we are looking for DevOps engineers!

And lastly, since this is the Amazon Web Services blog, we can’t resist being cheeky and thus herewith take the opportunity to invite Mr. Bezos to sit down with us to see if we can become the global product video partner for Amazon.  One thing’s for sure: our infrastructure is the best!

— Leon Mergen, CTO – lmergen@autheos.com