AWS Machine Learning Blog

Mapillary uses Amazon Rekognition to work towards building parking solutions for US cities

Mapillary is a collaborative street-level imagery platform that allows people and organizations to upload geo-tagged photos, which customers can then use to improve their mapping systems or applications. Mapillary uses Amazon Rekognition, a deep learning-based image and video analysis service, to enhance its metadata extraction. By using the DetectText operation from Amazon Rekognition, […]
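The DetectText operation returns a list of text detections, each with the detected string, a type (LINE or WORD), and a confidence score. As a minimal sketch of working with such a response, the snippet below filters line-level detections by confidence; the sample response is hypothetical, though its field names follow the DetectText response shape.

```python
# Sketch: extracting line-level text from a DetectText-style response.
# The sample response below is made up for illustration; the field names
# (TextDetections, DetectedText, Type, Confidence) follow the shape of
# the Amazon Rekognition DetectText API response.

def extract_lines(response, min_confidence=90.0):
    """Return detected LINE strings at or above min_confidence."""
    return [
        d["DetectedText"]
        for d in response.get("TextDetections", [])
        if d["Type"] == "LINE" and d["Confidence"] >= min_confidence
    ]

sample = {
    "TextDetections": [
        {"DetectedText": "NO PARKING", "Type": "LINE", "Confidence": 99.1},
        {"DetectedText": "NO", "Type": "WORD", "Confidence": 99.2},
        {"DetectedText": "PARKING", "Type": "WORD", "Confidence": 99.0},
        {"DetectedText": "8AM-6PM", "Type": "LINE", "Confidence": 72.5},
    ]
}

print(extract_lines(sample))  # low-confidence lines are filtered out
```

In practice the response would come from a real DetectText call against an uploaded image; filtering on confidence like this is a common way to keep only reliable detections, such as parking-sign text.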

Run SQL queries from your SageMaker notebooks using Amazon Athena

The volume, velocity, and variety of data have been increasing ever since the advent of the internet. The problem many enterprises face is managing this “big data” and making sense of it to drive useful outcomes. Silos within enterprises, continuous ingestion of data in numerous formats, and the ever-changing technology […]

Get started with automated metadata extraction using the AWS Media Analysis Solution

You can easily get started extracting meaningful metadata from your media files by using the Media Analysis Solution on AWS. The solution provides AWS CloudFormation templates that get you up and running within minutes, along with a web-based user interface for uploading files and viewing the metadata that is automatically extracted. It uses Amazon Rekognition for facial recognition, Amazon Transcribe to create a transcript, and Amazon Comprehend to run sentiment analysis on the transcript. You can also upload your own images to an Amazon Rekognition collection and train the solution to recognize individuals. In this blog post, we’ll show you step by step how to launch the solution and upload an image and a video, so you can see firsthand how metadata is seamlessly extracted.

No code chatbots: TIBCO uses Amazon Lex to put chat interfaces into the hands of business users

Users today don’t expect to be tied to a desktop computer. They want to interact with systems on the go, in whatever ways are convenient to them. This means people often turn to mobile devices and interact with applications while multitasking. Users might not even touch their mobile device while operating the apps they use, particularly when they are in a vehicle or actively engaged in another activity. In home environments, this “hands-free” capability is provided by voice activation systems.

Business users now aspire to the same experience in a business environment. They want to operate the applications and systems they use in their daily work with voice control, just as they do at home. Imagine how much simpler daily work tasks would be. However, adding voice controls to systems has not been easy; voice integration can be a very involved project, even for skilled developers. Moreover, today’s business users want to solve their own tactical and strategic business problems by building “low code/no code” apps. And they want these apps to meet the same end-user requirements mentioned earlier: usable on the go, anywhere, anytime, hands-free.

Visual search on AWS—Part 2: Deployment with AWS DeepLens

April 2023 Update: Starting January 31, 2024, you will no longer be able to access AWS DeepLens through the AWS management console, manage DeepLens devices, or access any projects you have created. To learn more, refer to these frequently asked questions about AWS DeepLens end of life. In Part 1 of this blog post series, we […]

Amazon SageMaker runtime now supports the CustomAttributes header

Amazon SageMaker now supports a new HTTP header for the InvokeEndpoint API action, called CustomAttributes, which can be used to provide additional information about an inference request or response. Amazon SageMaker strips all POST headers except those supported by the InvokeEndpoint API action, and you can use the CustomAttributes header to pass custom information such […]
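SageMaker forwards the CustomAttributes value as an opaque string, so the caller and the model container must agree on how to encode metadata inside it. As a minimal sketch, the helpers below pack and unpack a comma-separated key=value convention; note that this encoding is our own choice for illustration, not something mandated by the API.

```python
# Sketch: packing request metadata (e.g., a trace ID) into a string
# suitable for the CustomAttributes header. SageMaker treats the value
# as opaque; the key=value,key=value convention here is an assumption
# of this example, not part of the InvokeEndpoint API.

def pack_attributes(attrs):
    """Serialize a flat dict into a compact, deterministic string."""
    return ",".join(f"{k}={v}" for k, v in sorted(attrs.items()))

def unpack_attributes(header):
    """Parse the convention above back into a dict."""
    return dict(pair.split("=", 1) for pair in header.split(",") if pair)

header = pack_attributes({"trace-id": "abc123", "app": "demo"})
print(header)                     # deterministic order: sorted by key
print(unpack_attributes(header))  # round-trips back to the original dict
```

In a real call, you would pass the packed string as the CustomAttributes parameter when invoking the endpoint, and the container (or the response path) would unpack it to recover the metadata.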

Visual search on AWS—Part 1: Engine implementation with Amazon SageMaker

In this two-part blog post series, we explore how to implement visual search using Amazon SageMaker and AWS DeepLens. In Part 1, we’ll take a look at how visual search works and use Amazon SageMaker to create a model for visual search. We’ll also use Amazon SageMaker to build a fast index containing the reference items to be searched.
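The core idea of such an index is to compare a query image’s feature vector against the feature vectors of reference items and return the closest matches. As a toy stand-in for the SageMaker-built index described in the post, the sketch below does brute-force cosine similarity over made-up three-dimensional vectors; a real system would use learned CNN features and typically an approximate nearest-neighbor index.

```python
# Toy sketch of the "fast index" idea: brute-force cosine similarity
# over reference feature vectors. Item names and vectors are invented
# for illustration; real visual search would use high-dimensional CNN
# features and an approximate nearest-neighbor structure for speed.
import math

def cosine(a, b):
    """Cosine similarity between two equal-length vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(x * x for x in b))
    return dot / (na * nb)

def top_k(query, index, k=2):
    """Return the ids of the k reference items most similar to query."""
    scored = sorted(index, key=lambda item: cosine(query, item[1]),
                    reverse=True)
    return [item_id for item_id, _ in scored[:k]]

index = [
    ("shoe", [0.9, 0.1, 0.0]),
    ("boot", [0.8, 0.2, 0.1]),
    ("hat",  [0.0, 0.1, 0.9]),
]
print(top_k([1.0, 0.0, 0.0], index))  # items closest in feature space first
```

Brute force is fine for small catalogs; as the reference set grows, swapping the linear scan for an approximate index is the usual trade-off between recall and latency.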

Access Amazon S3 data managed by AWS Glue Data Catalog from Amazon SageMaker notebooks

In this blog post, I’ll show you how to perform exploratory analysis on massive datasets in Amazon SageMaker. From your Jupyter notebook running on Amazon SageMaker, you’ll identify and explore several datasets in the corporate data lake that seem interesting to you. You’ll discover that each contains a subset of the information you need. You’ll join them to extract the interesting information, then continue analyzing and visualizing your data in your Amazon SageMaker notebook, in a seamless experience.
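The join step described above combines rows from datasets that each hold only part of the picture. As a minimal illustration of that idea (the actual post would do this in SQL or Spark against the data lake), the sketch below inner-joins two hypothetical record sets on a shared key; the dataset and field names are invented.

```python
# Sketch: inner-joining two hypothetical data-lake extracts on a shared
# key, mimicking the kind of join described above. All names and values
# here are made up for illustration.

def inner_join(left, right, key):
    """Merge rows from left and right that share the same key value."""
    right_by_key = {row[key]: row for row in right}
    return [
        {**lrow, **right_by_key[lrow[key]]}
        for lrow in left
        if lrow[key] in right_by_key
    ]

orders = [{"customer_id": 1, "total": 40}, {"customer_id": 2, "total": 15}]
customers = [{"customer_id": 1, "region": "us-east-1"}]

print(inner_join(orders, customers, "customer_id"))
```

Only rows with a match on both sides survive, which is exactly why each dataset alone was not enough: the joined result carries fields from both.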

Pixm takes on phishing attacks with deep learning using Apache MXNet on AWS

Despite numerous cybersecurity efforts, phishing attacks are still on the rise. Phishing is a form of fraud in which perpetrators pretend to be reputable companies and attempt to get individuals to reveal personal information, such as passwords and credit card numbers. It’s the most common social tactic: 93 percent of all breaches today start with phishing […]

Amazon Transcribe now supports multi-channel transcriptions

Amazon Transcribe is an automatic speech recognition (ASR) service that makes it easy for developers to add speech-to-text capability to applications. We’re excited to announce the availability of a new feature called channel identification, which allows users to process multi-channel audio files and retrieve a single transcript annotated with the corresponding channel labels.
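With channel identification enabled, the transcription result includes a channel_labels section that attributes items to each audio channel. As a minimal sketch of consuming such a result, the snippet below collects each channel’s words into a per-channel transcript; the sample document is a hypothetical, heavily trimmed version of what a real job would return, though its nesting follows the channel-identification output shape.

```python
# Sketch: turning a channel-identification transcript into per-channel
# text. The sample below is a made-up, heavily trimmed stand-in for a
# real Amazon Transcribe job result with channel identification enabled;
# the nesting (results -> channel_labels -> channels, items with
# alternatives) follows that output's shape.

def per_channel_text(result):
    """Map each channel label to its concatenated words."""
    channels = result["results"]["channel_labels"]["channels"]
    return {
        ch["channel_label"]: " ".join(
            item["alternatives"][0]["content"] for item in ch["items"]
        )
        for ch in channels
    }

sample = {
    "results": {
        "channel_labels": {
            "channels": [
                {"channel_label": "ch_0",
                 "items": [{"alternatives": [{"content": "hello"}]},
                           {"alternatives": [{"content": "there"}]}]},
                {"channel_label": "ch_1",
                 "items": [{"alternatives": [{"content": "hi"}]}]},
            ]
        }
    }
}

print(per_channel_text(sample))
```

Splitting a call recording this way, for example with the agent on one channel and the caller on another, lets you analyze each speaker’s side separately.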