AWS Startups Blog

How Auris Leveraged Amazon SageMaker to Help Marketers with Machine Learning

Guest post by Giridhar Apparusu, Co-founder and CTO, GenY Labs

GenY Labs is a marketing technology startup that was founded in 2015. The Hyderabad-based company uses machine learning and artificial intelligence to help its global clientele glean insights from unstructured and structured marketing data.

GenY Labs’ flagship product Auris allows customers to perform market research, customer feedback analysis, online reputation management, and category research.

Genesis

Business leaders know the power of getting accurate insights from data, whether to drive business decisions or to improve operational efficiency. Traditionally, they have taken an inside-out approach by analyzing data lying within their many enterprise systems, but that is only one part of the story. There is a wealth of information outside the enterprise that can give richer, more meaningful perspectives on a brand, its products, and its overall perception.

Auris helps a CMO understand brand perception, for instance. It allows a dissection of data beyond sentiment, to understand the whys and hows of that perception, and to build a deeper picture of the audience interacting with the brand, their demographics, and their psychographics.

It is challenging to get these answers with offline or traditional marketing channels. Brands have to engage market research firms that rely on feet-on-the-street models and surveys. Surveys have long cycle times, especially while the action is happening—say, a new product launch. In this digital age, those cycle times can have a deep business impact. How does one recalibrate product launch activities in real time based on early customer response, plan a new campaign, or realign campaigns around messages that resonate?

Real-time data analysis is immensely useful not just to a CMO but to other business leaders as well. For instance, the research and product development department can learn which products and features are appreciated, or which features users want in the next iteration of the product.

All this is possible only if data is captured in one repository, structured, and analyzed. However, capturing data from multiple sources is a challenge even in the digital world, though this challenge can be solved with the right tools and technologies.

To address these CMO challenges, Auris devised a four-step framework:

  1. Listen – This allows an organization to bring together data from just about anywhere, in any language, onto one platform. Data is sourced from social media, news, blogs, review boards, media platforms, and analytics platforms into a single repository.
  2. Enrich – Since heterogeneous data brings in a variety of characteristics, it needs to be scrubbed, curated, and standardized. Once the data is standardized, a variety of enrichments are performed on raw data using heuristics and machine learning, to enable structured analysis.
  3. Action – This can be to counter negative reviews, or formulate a PR strategy, or integrate with any enterprise workflow. Actions can also be taken to derive new product ideas, new campaign ideas, and to feed sales intelligence.
  4. Analytics – Market research, demographic research, and psychographic research measure and monitor the effectiveness of everything captured and actioned from the platform.

Today there are many tools that solve different bits of this problem, but no single tool does all of it holistically. GenY Labs saw an opportunity for a SaaS platform that does all of this under one roof, and thus Auris came into existence, with the company developing its own tools based on machine learning algorithms.

Tackling the challenges

We faced some problems when we began building these tools, as we had specific infrastructure requirements.

When we started GenY Labs a couple of years ago, infrastructure was one of our biggest challenges. It wasn’t easy to find hardware tuned for machine learning. Even when we did identify an infrastructure service provider, the challenge was retaining the ability to change our algorithms on their platform.

With ambition and fire in the belly, our team at GenY Labs set out to solve this problem.

First, we tried building data ingestion services to pull data from just about any channel. We used an Nginx reverse proxy with Flask, and RabbitMQ/Celery task workers behind it. It seemed to work well during the prototyping stages, but as soon as data ingestion velocity increased, our team had to deploy more hardware behind the Celery workers.

That did not help, as RabbitMQ gives its best performance when it operates at a queue length of zero. However, ingestion velocity grew to the point where the queue was constantly bursting past 10,000 entries.

Why RabbitMQ or Celery or Flask? Well, it turns out that our data scientists used scikit-learn as their platform of choice for building models. Since the work was primarily text classification – sentiment, root cause identification, named entity recognition – supervised learning on large data sets was done using Naïve Bayes, Maximum Entropy, and SVM models.
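For illustration, here is a minimal sketch of the kind of scikit-learn text classifier this describes, pairing TF-IDF features with a Naïve Bayes model (in scikit-learn, Maximum Entropy corresponds to logistic regression); the sample texts and labels are hypothetical:

```python
# A minimal sketch of a scikit-learn text classifier of the kind described
# above; the training texts and labels are hypothetical.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.naive_bayes import MultinomialNB
from sklearn.pipeline import Pipeline

texts = ["love the new phone", "battery drains too fast"]  # hypothetical
labels = ["positive", "negative"]

model = Pipeline([
    ("tfidf", TfidfVectorizer(ngram_range=(1, 2))),  # raw text -> features
    ("clf", MultinomialNB()),                        # Naive Bayes classifier
])
model.fit(texts, labels)

# Classify a new piece of text.
print(model.predict(["the battery is terrible"]))
```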

We needed a tech stack on Python, and Flask was lightweight, so we naturally wrapped our data ingestion behind Nginx/Flask. However, we could not process the data on the web servers, because it had to be scrubbed, curated, and standardized, and we wanted real-time enrichment. So we offloaded that work: we chose RabbitMQ as the message broker, with Celery task runners pulling tasks and running machine learning models at real-time velocity.
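A minimal sketch of that layout: Flask accepts raw data and hands it to a Celery worker over RabbitMQ. The broker URL, route, and enrich() task are hypothetical placeholders:

```python
# A minimal sketch of the Nginx/Flask -> RabbitMQ -> Celery layout described
# above; the broker URL, route, and enrich() task are hypothetical.
from celery import Celery
from flask import Flask, jsonify, request

celery_app = Celery("auris", broker="amqp://localhost//")  # RabbitMQ broker

@celery_app.task
def enrich(record):
    ...  # scrub, standardize, and run ML enrichments on one raw record

app = Flask(__name__)

@app.route("/ingest", methods=["POST"])
def ingest():
    # Hand the raw payload to a Celery worker so the web tier stays fast.
    enrich.delay(request.get_json())
    return jsonify(status="queued"), 202
```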

Then the GenY Labs team faced another challenge: caching the machine learning models and predicting in near real time. The workaround was to use Redis to hold the trained models and serve predictions from the cache. We began using the prototyped platform, adding more data sources to see how the system would scale with growth. Unfortunately, we could not scale this homegrown architecture.
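A minimal sketch of that workaround, assuming a hypothetical Redis key name: the fitted model is pickled into Redis once, and workers load it back to serve predictions without retraining:

```python
# A minimal sketch of caching a trained model in Redis; the key name and
# tiny training corpus are hypothetical.
import pickle

import redis
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.naive_bayes import MultinomialNB
from sklearn.pipeline import Pipeline

# Train once, then cache the fitted model.
model = Pipeline([("tfidf", TfidfVectorizer()), ("clf", MultinomialNB())])
model.fit(["love it", "hate it"], ["positive", "negative"])

r = redis.Redis(host="localhost", port=6379)
r.set("model:sentiment", pickle.dumps(model))  # serialize into Redis

# In a worker process: load the cached model and predict at request time.
cached = pickle.loads(r.get("model:sentiment"))
print(cached.predict(["really hate the battery"]))
```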

We did not want to be in the business of solving infrastructure and scaling problems, yet that is exactly what we ended up doing. It was an inflection point: time to research products and services that would let us bring our energies back to the core problem.

Of course, a platform is required for everything we just described. GenY Labs began with a prominent cloud service provider, but later switched to the Amazon Web Services platform and began using Amazon SageMaker machine learning models and algorithms.

First tryst with SageMaker

We zeroed in on AWS for our infrastructure services because we did not want to deal with multiple providers at this stage, and it would also be easier to understand the unit economics of pricing if everything was under the same umbrella.

After choosing the AWS platform, GenY Labs replaced Nginx/Flask with Amazon API Gateway, RabbitMQ with Amazon Kinesis, and Celery with AWS Lambda. However, machine learning remained a bottleneck.
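A minimal sketch of the Lambda side of that replacement, consuming records from the Kinesis stream; enrich() is a hypothetical placeholder for the enrichment logic the Celery workers used to run:

```python
# A minimal sketch of a Lambda function consuming the Kinesis stream that
# replaced RabbitMQ; enrich() is a hypothetical placeholder.
import base64
import json

def handler(event, context):
    for record in event["Records"]:
        # Kinesis delivers each payload base64-encoded inside the event.
        payload = json.loads(base64.b64decode(record["kinesis"]["data"]))
        enrich(payload)

def enrich(record):
    ...  # scrub, standardize, and classify one record
```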

At that time, AWS had a black-box machine learning service that let us upload training data; it would pick the best model and generate a prediction endpoint. That did not work for us, as we relied on our own custom tuning to derive better results.

So the team began to look at black-box models from other service providers. It evaluated providers such as Domino Data Labs, but eventually selected a well-known cloud service provider that seemed to give us the ability to customize to a large degree and do away with the infrastructure scaling challenge.

Looking for better results

GenY Labs was content with this solution for some time. We were satisfied with the data ingestion velocity as the system started to stabilize, and the enrichments were working well. However, we then ran into some unexpected challenges:

  1. The accuracy of the models was not to our satisfaction, as we had limited headroom to maneuver text pre-processing on that cloud platform.
  2. Code management became a nightmare, as we had to replicate common libraries behind every endpoint.
  3. Each endpoint had a concurrency limit of 200 prediction requests. We could provision more endpoints, but every time we did, we had to change our code to route predictions to the newer endpoints; we were effectively load balancing across endpoints ourselves (see the sketch after this list).
  4. We were making ‘n’ round trips to the cloud service for ‘n’ predictions, and we had a Lambda timeout limit within which to accomplish all of them.
  5. The increased latency meant we ended up paying for nothing while Lambda waited for results, and the multiple round trips to the cloud service became a costly affair.
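To make item 3 concrete, here is a minimal sketch of the client-side load balancing we were forced into; the endpoint URLs and request shape are hypothetical:

```python
# A minimal sketch of client-side load balancing across prediction
# endpoints; the URLs and request shape are hypothetical.
import itertools

import requests

# Rotate through provisioned endpoints to stay under each one's
# 200-request concurrency cap.
ENDPOINTS = itertools.cycle([
    "https://predict-1.example.com/classify",  # hypothetical URLs
    "https://predict-2.example.com/classify",
])

def predict(text):
    # Every newly provisioned endpoint meant another entry here -- a code change.
    url = next(ENDPOINTS)
    return requests.post(url, json={"text": text}, timeout=10).json()
```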

So GenY Labs was back to scouting for another solution, and the timing was right, as AWS had just launched Amazon SageMaker. GenY Labs chose SageMaker because it supported custom models and gave better results. We were impressed with the infrastructure scalability and the level of customization the service provided.

Why SageMaker

This time the AWS team went all out to support GenY Labs in its use of SageMaker, including tech support to help us lift and shift the models from the previous cloud platform.

The development team at GenY Labs found that it could port its old Flask code ‘as-is’, with its own custom-tuned models, into SageMaker. Code management suddenly seemed natural again. We then got ambitious and wanted to bundle all the models together. As a prototype, we built one endpoint serving two models and achieved the desired result.
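What made this possible is the serving contract SageMaker expects from a custom container: an HTTP server on port 8080 that answers GET /ping and POST /invocations. A minimal sketch, with load_models() and the payload shape as hypothetical placeholders:

```python
# A minimal sketch of the HTTP contract a custom SageMaker container must
# implement: GET /ping and POST /invocations on port 8080. load_models()
# and the payload shape are hypothetical placeholders.
from flask import Flask, jsonify, request

def load_models():
    # Hypothetical: in practice, unpickle the custom-tuned scikit-learn
    # models baked into the container image, keyed by enrichment name.
    return {}

app = Flask(__name__)
models = load_models()

@app.route("/ping", methods=["GET"])
def ping():
    # SageMaker calls this route to check container health.
    return "", 200

@app.route("/invocations", methods=["POST"])
def invocations():
    payload = request.get_json()
    # Run every bundled model on the same text in a single request.
    return jsonify({name: str(m.predict([payload["text"]])[0])
                    for name, m in models.items()})

if __name__ == "__main__":
    app.run(host="0.0.0.0", port=8080)
```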

We built all our models with confidence and pushed them behind a single SageMaker endpoint. Round-tripping was reduced to just one trip, Lambda’s average execution time went down, and the unit economics looked healthy. More importantly, we could put this one endpoint in auto-scaling mode and grow without having to change a single line of code. We could even update the same endpoint in place after retraining.
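Invoking that consolidated endpoint from Lambda is then a single boto3 call; the endpoint name below is hypothetical:

```python
# A minimal sketch of calling the consolidated SageMaker endpoint from
# Lambda with boto3; the endpoint name is hypothetical.
import json

import boto3

runtime = boto3.client("sagemaker-runtime")

def predict(text):
    # One round trip returns results from every model bundled behind
    # the endpoint.
    response = runtime.invoke_endpoint(
        EndpointName="auris-enrichment",  # hypothetical endpoint name
        ContentType="application/json",
        Body=json.dumps({"text": text}),
    )
    return json.loads(response["Body"].read())
```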

That is not all – we also embarked on deep learning-based models using LSTMs (long short-term memory networks), transfer learning, and reinforcement learning to improve the overall accuracy of our enrichments. Suddenly, given our hunger to solve problems, the sky seemed to be the limit.

All in all, this was just what GenY Labs wanted. It was now finally ‘eyes off infrastructure scaling’ and back to solving the CMO’s problem.

The journey has just begun…