Artificial Intelligence
Best practices for load testing Amazon SageMaker real-time inference endpoints
Amazon SageMaker is a fully managed machine learning (ML) service. With SageMaker, data scientists and developers can quickly and easily build and train ML models, and then directly deploy them into a production-ready hosted environment. It provides an integrated Jupyter authoring notebook instance for easy access to your data sources for exploration and analysis, so […]
Best practices for creating Amazon Lex interaction models
Designing and building an intelligent conversational interface is very different from building a traditional application or website. These best practices for Amazon Lex interaction models will help you develop those new skills as you design and optimize your next bot.
Power recommendations and search using an IMDb knowledge graph – Part 3
This three-part series demonstrates how to use graph neural networks (GNNs) and Amazon Neptune to generate movie recommendations using the IMDb and Box Office Mojo Movies/TV/OTT licensable data package, which provides a wide range of entertainment metadata, including over 1 billion user ratings; credits for more than 11 million cast and crew members; 9 million […]
How Thomson Reuters delivers personalized content subscription plans at scale using Amazon Personalize
This post is co-written by Hesham Fahim from Thomson Reuters. Thomson Reuters (TR) is one of the world’s most trusted information organizations for businesses and professionals. It provides companies with the intelligence, technology, and human expertise they need to find trusted answers, enabling them to make better decisions more quickly. TR’s customers span across the […]
Connecting Amazon Redshift and RStudio on Amazon SageMaker
Last year, we announced the general availability of RStudio on Amazon SageMaker, the industry’s first fully managed RStudio Workbench integrated development environment (IDE) in the cloud. You can quickly launch the familiar RStudio IDE and dial up and down the underlying compute resources without interrupting your work, making it easy to build machine learning (ML) […]
Power recommendations and search using an IMDb knowledge graph – Part 2
This three-part series demonstrates how to use graph neural networks (GNNs) and Amazon Neptune to generate movie recommendations using the IMDb and Box Office Mojo Movies/TV/OTT licensable data package, which provides a wide range of entertainment metadata, including over 1 billion user ratings; credits for more than 11 million cast and crew members; 9 million […]
Accelerate the investment process with AWS Low Code-No Code services
The last few years have seen a tremendous paradigm shift in how institutional asset managers source and integrate multiple data sources into their investment process. With frequent shifts in risk correlations, unexpected sources of volatility, and increasing competition from passive strategies, asset managers are employing a broader set of third-party data sources to gain a […]
Create Amazon SageMaker models using the PyTorch Model Zoo
Deploying high-quality, trained machine learning (ML) models to perform either batch or real-time inference is a critical piece of bringing value to customers. However, the ML experimentation process can be tedious—there are a lot of approaches requiring a significant amount of time to implement. That’s why pre-trained ML models like the ones provided in the PyTorch […]
How to evaluate the quality of the synthetic data – measuring from the perspective of fidelity, utility, and privacy
In an increasingly data-centric world, enterprises must focus on gathering both valuable physical information and generating the information that they need but can’t easily capture. Data access, regulation, and compliance are an increasing source of friction for innovation in analytics and artificial intelligence (AI). For highly regulated sectors such as Financial Services, Healthcare, Life Sciences, […]
Augment fraud transactions using synthetic data in Amazon SageMaker
Developing and training successful machine learning (ML) fraud models requires access to large amounts of high-quality data. Sourcing this data is challenging because available datasets are sometimes not large enough or sufficiently unbiased to usefully train the ML model and may require significant cost and time. Regulation and privacy requirements further prevent data use or […]