“Artificial intelligence is resource-hungry by nature, and AWS gives Omilia the capacity we need to compete with the giants in our industry. For the price of a nice meal, we get the same scale of infrastructure as the largest enterprises.”

 

Dimitris Vassos, CEO and Chief Architect, Omilia

Omilia trains its machine learning models in hours instead of months with an elastic AWS infrastructure. The company builds conversational intelligence technology that lets enterprises automate customer service functions with natural language user interfaces. Omilia uses Amazon EC2 Spot Instances to create powerful clusters to train its deep learning models, and it delivers customer environments using AWS resources in Amazon Virtual Private Cloud.

  • About Omilia

    Omilia offers conversational intelligence technology to banks, insurance companies, and telecom providers across the world. These solutions are based on speech recognition, natural language understanding, and voice biometrics that can authenticate speakers over the phone. Founded in 2002 in Athens, Greece, the company now has offices in six locations around the world.

     

  • AWS Services Used

    • Amazon EC2 Spot Instances
    • Amazon Simple Storage Service (Amazon S3)
    • Amazon Relational Database Service (Amazon RDS)
    • Amazon Virtual Private Cloud (Amazon VPC)

  • Benefits of AWS

    • Can train machine learning models in hours instead of months
    • Faster time-to-market leads to new business opportunities
    • Pay-as-you-go infrastructure allows small company to compete with giants
    • Customers reassured by compliant hosting environment for sensitive data

     

Large enterprises such as Royal Bank of Canada and Vodafone rely on Omilia’s conversational virtual assistant technology to provide personalized customer service and gain a competitive edge. Banks, telecoms, and insurance companies provide commoditized services that are often hard to differentiate, so there is a big focus on delivering a better customer experience. But with budgets shrinking all the time, they need efficient ways of doing so.

Enter Omilia. Founded in 2002 in Athens, Greece, the company initially built solutions for customer contact centers partially based on third-party software, but in 2013, its agreement with one of its main software partners ended.

 

Omilia’s owners then decided to develop their own state-of-the-art technology based on deep neural networks that would go beyond the legacy products on the market.

Today, Omilia provides conversational intelligence solutions to large enterprises such as banks, insurance companies, and telecommunications providers around the world. These solutions are powered by speech recognition, natural language understanding, and voice biometrics that can authenticate speakers over the phone. 

The result: Omilia customers gain a virtual agent that can provide consumers with a natural and fluid experience while they resolve issues themselves by phone and through social media, online chat, SMS, and home devices.

This automation has led to a double-digit increase in containment, an average increase of 43 percent in call capacity across Omilia’s customer base, a 21 percent decrease in the costs of running contact centers, and a 50 percent reduction in handling times for customer inquiries.

But how has Omilia gotten this far? One of its major assets is the data it has from a long history of working with its client base. Researchers at the company have built deep learning models, which they train with millions of samples of real speech. The second key to the company’s success is Amazon Web Services (AWS). Storing and processing the data to train the machine learning models requires enormous compute capacity, something that was simply unaffordable for a bootstrapped company of 15 people, as Omilia was in 2013.

Dimitris Vassos, chief executive officer and chief architect at Omilia, says, “It’s difficult to manage such large volumes of data without significant infrastructure behind it, like elastic storage that you can expand at the push of a button, or enough bandwidth to move terabytes of data between systems.” By using AWS, Omilia’s engineers can spin up clusters of Amazon Elastic Compute Cloud (Amazon EC2) instances, each with as many as 3,000 processing cores (CPUs) and dozens of graphics processing units (GPUs), and pay only for what they use. Once a model has been trained to a satisfactory degree of accuracy, Omilia can terminate the clusters.
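The case study itself contains no code, but the workflow Vassos describes, requesting a large temporary cluster, training on it, and terminating it, can be sketched with the AWS SDK for Python (boto3). The region, AMI, instance type, and cluster size below are illustrative assumptions, not Omilia’s actual configuration.

```python
# Minimal sketch, not Omilia's actual tooling: request a temporary Spot-backed
# training cluster with boto3 and terminate it when training finishes.
# Region, AMI ID, instance type, and cluster size are illustrative placeholders.
import boto3

ec2 = boto3.client("ec2", region_name="eu-west-1")

def launch_training_cluster(count=4):
    """Request Spot capacity for a short-lived GPU training cluster."""
    response = ec2.run_instances(
        ImageId="ami-EXAMPLE",                         # placeholder training AMI
        InstanceType="p3.8xlarge",                     # placeholder GPU instance type
        MinCount=count,
        MaxCount=count,
        InstanceMarketOptions={"MarketType": "spot"},  # use spare capacity at Spot pricing
    )
    return [i["InstanceId"] for i in response["Instances"]]

def tear_down(instance_ids):
    """Terminate the cluster once the model reaches the target accuracy,
    so billing stops as soon as the work is done."""
    ec2.terminate_instances(InstanceIds=instance_ids)

instance_ids = launch_training_cluster()
# ... run the training job, write the finished model to Amazon S3 ...
tear_down(instance_ids)
```

Because the cluster exists only for the duration of the job, the cost is bounded by hours of training rather than a capital outlay, which is the pay-as-you-go advantage the article highlights.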


Using AWS to create temporary compute clusters means that Omilia can train models in 24 hours instead of the few months (or more) it would take using on-premises infrastructure. Vassos gives an example of how this agility has helped the business grow. “We had pitched the business of a Spanish telco based on our old solution, but the customer was keen to see a pilot of our own software. The problem was that we didn’t have a model for the Spanish language. As soon as I left the meeting, I called our office in Athens, and by the time I got home the next day, we had the model up and running. It was literally next-day delivery, and the client was convinced to sign. That opportunity would have been lost if we hadn’t acted so quickly.”

Stories like this are becoming more common for Omilia, which now has customers in 11 countries and handles customer interactions in 17 languages. It has grown from one 15-person office in Athens to six offices across the globe with 85 employees who research, develop, and sell its products. “If we hadn’t had access to AWS, we simply wouldn’t have made the switch to developing our own conversational intelligence software,” says Vassos. “The initial investment—and the ongoing costs to run such an infrastructure—would have been too great.”

Omilia keeps costs low by using Amazon EC2 Spot Instances. These instances are priced competitively to take advantage of spare capacity in AWS data centers, but they can be terminated if the fluctuating market price goes above what a customer bids.

“We’ve architected our machine learning environment so that if we lose an instance we just spin up another one and the work continues uninterrupted,” says Vassos. Omilia also uses Amazon Simple Storage Service (Amazon S3) and Amazon Relational Database Service (Amazon RDS) to store the millions of audio files used to train its models.
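The article doesn’t show how that resilience is implemented, but the behavior it describes, losing a Spot Instance without losing work, is commonly achieved by checkpointing training state to durable storage and watching for the Spot interruption notice. The sketch below assumes that design; the bucket, key, and helper names are hypothetical.

```python
# Minimal sketch, assuming a checkpoint-and-resume design rather than Omilia's
# actual code: poll the EC2 instance metadata endpoint for a Spot interruption
# notice and keep training state in Amazon S3 so a replacement instance can
# continue the job. Bucket, key, and file names are hypothetical.
import boto3
import requests

s3 = boto3.client("s3")
BUCKET = "example-training-bucket"        # hypothetical bucket name
KEY = "checkpoints/acoustic-model.ckpt"   # hypothetical checkpoint key
NOTICE_URL = "http://169.254.169.254/latest/meta-data/spot/instance-action"

def interruption_pending():
    """EC2 serves a Spot interruption notice at this metadata path about two
    minutes before reclaiming the instance; it returns 404 while none is scheduled."""
    try:
        return requests.get(NOTICE_URL, timeout=1).status_code == 200
    except requests.RequestException:
        return False

def save_checkpoint(local_path="model.ckpt"):
    s3.upload_file(local_path, BUCKET, KEY)    # durable copy outlives the instance

def training_loop(run_step, total_steps):
    for step in range(total_steps):
        run_step(step)                         # one training step (model code omitted)
        if step % 100 == 0:
            save_checkpoint()                  # periodic checkpoint to S3
        if interruption_pending():
            save_checkpoint()                  # final checkpoint before the instance is reclaimed
            return                             # a freshly launched instance resumes from S3
```

With checkpoints in Amazon S3, a replacement Spot Instance only repeats the work done since the last checkpoint, so an interruption costs minutes rather than the whole run.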

And while the bulk of its AWS usage (about 80–90 percent) is Amazon EC2 for training machine learning models, Omilia also uses the AWS Cloud to host its solutions for customers inside Amazon Virtual Private Cloud (Amazon VPC). “Because we process sensitive financial and personal data as part of the customer interactions we handle, our customers value the fact that AWS is PCI-DSS compliant,” says Vassos.

Vassos sums up the impact that AWS has had on Omilia: “Artificial intelligence is resource-hungry by nature, and AWS gives Omilia the capacity we need to compete with the giants in our industry. For the price of a nice meal, we get the same scale of infrastructure as the largest enterprises. That removes any handicaps we have as a smaller company and is the catalyst for our growth.”