Riversoft Reduces Inferencing Time for Smart Travel System by 75% with Generative AI on AWS
By modernizing its infrastructure with Amazon Bedrock and Amazon SageMaker, Riversoft reduces inferencing and response times for queries, improves model accuracy, and enhances its Smart Gateway travel matching system.
Benefits
75%
reduction in inferencing time, from 20 to 5 seconds

20%
lower costs for hosting LLMs

10
minutes for automated product matching, previously 4 hours

90%
higher accuracy to boost customer satisfaction

Overview
Taiwan-based Riversoft sought to enhance the performance of its AI-powered Smart Gateway travel matching engine. The company, which deployed its solution on Amazon Web Services (AWS), uses Claude 3 models on Amazon Bedrock for response development and BERT models on Amazon SageMaker for named entity recognition.
By combining these models on AWS, Riversoft has reduced inferencing time, improved accuracy, and cut operational costs.
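The overview describes two inference paths: a BERT model hosted on an Amazon SageMaker endpoint for named entity recognition, and a Claude 3 model on Amazon Bedrock for generating responses. The sketch below is a minimal illustration of how such a pipeline could be wired together with boto3; the region, endpoint name, model ID, and prompt wording are assumptions for illustration, not Riversoft's actual implementation.

```python
import json
import boto3

# Minimal sketch of the two-step pipeline described above.
# Region, endpoint name, and prompt are assumed values, not Riversoft's configuration.
bedrock = boto3.client("bedrock-runtime", region_name="ap-northeast-1")
sagemaker = boto3.client("sagemaker-runtime", region_name="ap-northeast-1")


def extract_entities(query: str) -> dict:
    """Run named entity recognition with a BERT model on a SageMaker real-time endpoint."""
    response = sagemaker.invoke_endpoint(
        EndpointName="travel-ner-bert",  # hypothetical endpoint name
        ContentType="application/json",
        Body=json.dumps({"inputs": query}),
    )
    return json.loads(response["Body"].read())


def generate_response(query: str, entities: dict) -> str:
    """Generate a customer-facing answer with a Claude 3 model on Amazon Bedrock."""
    result = bedrock.converse(
        modelId="anthropic.claude-3-haiku-20240307-v1:0",
        messages=[{
            "role": "user",
            "content": [{
                "text": (
                    f"Traveler query: {query}\n"
                    f"Extracted entities: {entities}\n"
                    "Recommend matching travel products."
                )
            }],
        }],
    )
    return result["output"]["message"]["content"][0]["text"]


if __name__ == "__main__":
    query = "Looking for a two-day onsen trip near Osaka in March"
    entities = extract_entities(query)
    print(generate_response(query, entities))
```

Separating entity extraction from response generation in this way lets each model be scaled and swapped independently, which is one common pattern for the kind of mixed Bedrock-plus-SageMaker architecture the article describes.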
About Riversoft
Taiwan-based Riversoft develops AI-powered travel technology that matches travel products to individual needs. Launched in 2024, its Smart Gateway SaaS connects a vast range of travel experiences in Japan with the millions of tourists visiting from Taiwan.

Inferencing time has been reduced from 20 seconds to just 5, allowing for rapid experimentation. We've also seen a 20 percent reduction in inferencing costs by using Claude models on Amazon Bedrock.
Claude Shen
CTO at Riversoft

AWS Services Used

Amazon Bedrock
Amazon SageMaker