AWS Machine Learning Blog

In the Research Spotlight: Hassan Sawaf

As AWS continues to support the Artificial Intelligence (AI) community with contributions to Apache MXNet and the release of the Amazon Lex, Amazon Polly, and Amazon Rekognition managed services, we are also expanding our team of AI experts, who have one primary mission: to lower the barrier to AI for all AWS developers, making AI more accessible and easier to use. As Swami Sivasubramanian, VP of Machine Learning at AWS, succinctly stated, “We want to democratize AI.”

In our Research Spotlight series, I spend some time with these AI team members for in-depth conversations about their experiences and get a peek into what they’re working on at AWS.


Hassan Sawaf has been with Amazon since September 2016. This January, he joined AWS as Director of Applied Science and Artificial Intelligence.

Hassan has worked in automatic speech recognition, computer vision, natural language understanding, and machine translation for more than 20 years. In 1999, he cofounded AIXPLAIN AG, a company focused on speech recognition and machine translation. His partners included, among others, Franz Josef Och, who eventually started the Google Translate team; Stephan Kanthak, now Group Manager with Nuance Communications; and Stefan Ortmanns, today Senior Vice President of Mobile Engineering and Professional Services with Nuance Communications. Hassan also spent time at SAIC as Chief Scientist for Human Language Technology, where he worked on multilingual spoken dialogue systems. Coincidentally, his counterpart at Raytheon BBN Technologies was Rohit Prasad, who is now VP and Head Scientist for Amazon Alexa.

How did you get started?

“I started out working in development on information systems for airports, believe it or not. Between airlines and airports, and from airport to airport, communication used to happen via Telex messages, using something like “shorthand” for information about the plane. These messages answered questions such as: Who has boarded the plane? What’s the cargo? How is the baggage distributed on the plane? How much fuel does it have? What kinds of passengers are on board (first class, business class), and so on. This information was sent from airline to airport before the plane landed, and humans had to read it and translate it into actions at the airport. But by the 1990s, air travel had grown exponentially. So we built technology that could do this fully automatically, with no manual human intervention needed. People no longer had to sit there reading Telex messages and typing away at a computer; the whole process was done by machine. That was my first project in natural language understanding.

“After that, I started combining speech recognition and machine translation so that people speaking different languages could communicate with each other over the phone. Again, in the mid-’90s, this was very complicated – it still is! But more so at that time, because the hardware was not available and machine learning was only just becoming practical. So we developed a system in research at the University of Aachen, and in 1999 we started a company, taking with me some of the best research scientists and students, to commercialize a product for speech translation, which we launched in 2002. One of the co-founders was Franz Och, who went on to start and lead the Google Translate team.”

In 2010, Hassan joined SAIC as Chief Scientist on a DARPA project building dialogue systems – specifically, speech translation projects and projects to refine communication with robots, so that the robots could receive instructions and respond with questions to learn from and improve their actions.

After SAIC, Hassan joined eBay and established several AI teams, starting with a team that implemented machine translation, specifically to increase cross-border trade revenue. He later also managed teams for computer vision, user behavior modeling, natural language understanding, and dialogue modeling. While leading the AI team behind the eBay Chat Bot, he was instrumental in expanding the idea of “chatbot conversations” to include images.

Why did you join AWS?

Hassan explained that although eBay’s scope is large, it’s primarily focused on commerce.

“I was hired by Swami Sivasubramanian, VP of Machine Learning at AWS, to develop technology around human language for higher-level services – for example, working on the science behind Amazon Lex. eBay was very interesting for me, with a large scope, but at AWS the scope is bigger because it covers not just commerce but everything: building technologies that anyone can use for any use case they have, and enabling developers to bring their new ideas to the technology instead of building it from scratch again and again, which is expensive and slows down the advancement of products and solutions. Customers can focus on their business ideas and their special competencies, while AWS takes care of the core capabilities. Developers can take advantage of this to come up with new and innovative solutions. That’s very exciting for me – I love new ideas. Specifically, I like to help new entrepreneurs start something, and AWS is exactly in that space.”

You can find Hassan in Palo Alto, CA, working on his passions in human language, machine translation, and computer vision, and the science behind Amazon Lex and other Amazon and AWS AI services. In his free time, Hassan enjoys hiking and learning to play the guitar. You might also see him out in Monterey, CA, on the track racing sports cars!


About the Author

Victoria Kouyoumjian is a Sr. Product Marketing Manager for the AWS AI portfolio of services, which includes Amazon Lex, Amazon Polly, and Amazon Rekognition, as well as the AWS marketing initiatives with Apache MXNet. She lives in Southern California on an avocado farm and can’t wait until AI can clone her.