AWS Compute Blog

Category: Artificial Intelligence

Generative AI Infrastructure at AWS

Building and training generative artificial intelligence (AI) models, and then serving accurate and insightful outputs from them, requires a significant amount of infrastructure. A lot of data goes into generating the high-quality synthetic text, images, and other media that large language models (LLMs) and foundation models (FMs) create. To start, […]

Connections and automatic groupings

Using generative infrastructure as code with Application Composer

This post is written by Anna Spysz, Frontend Engineer, AWS Application Composer. AWS Application Composer launched in the AWS Management Console one year ago, and has now expanded to the VS Code IDE as part of the AWS Toolkit. This includes access to a generative AI partner that helps you write infrastructure as code (IaC) […]

The attendee’s guide to the AWS re:Invent 2023 Compute track

This post is written by Art Baudo, Principal Product Marketing Manager, Amazon EC2, and Pranaya Anshu, Product Marketing Manager, Amazon EC2. We are just a few weeks away from AWS re:Invent 2023, AWS’s biggest cloud computing event of the year. This event will be a great opportunity for you to meet other cloud […]

Building a serverless document chat with AWS Lambda and Amazon Bedrock

This post is written by Pascal Vogel, Solutions Architect, and Martin Sakowski, Senior Solutions Architect. Large language models (LLMs) are proving to be highly effective at solving general-purpose tasks such as text generation, analysis and summarization, translation, and much more. Because they are trained on large datasets, they can use a broad generalist knowledge base. […]

Architecture diagram depicting the integration between AWS Systems Manager (with Run Command arguments stored in SSM Parameter Store), a GPU-enabled Amazon EC2 instance with the Amazon CloudWatch agent installed, and an Amazon CloudWatch dashboard that aggregates and displays the reported metrics.

Optimizing GPU utilization for AI/ML workloads on Amazon EC2

This blog post is written by Ben Minahan, DevOps Consultant, and Amir Sotoodeh, Machine Learning Engineer. Machine learning workloads can be costly, and artificial intelligence/machine learning (AI/ML) teams can have a difficult time tracking and maintaining efficient resource utilization. ML workloads often utilize GPUs extensively, so typical application performance metrics such as CPU, memory, and […]
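In the full post, GPU metrics are collected by the Amazon CloudWatch agent, installed and configured through AWS Systems Manager Run Command as shown in the diagram above. As a rough sketch of the same idea in code, assuming the pynvml and boto3 packages and using placeholder namespace, metric, and instance names that are not from the post, GPU utilization can be read with NVML and published as a custom CloudWatch metric:

import boto3
import pynvml

def publish_gpu_utilization(namespace="Custom/GPU", instance_id="i-0123456789abcdef0"):
    # Read per-GPU utilization with NVML and push it to CloudWatch.
    pynvml.nvmlInit()
    cloudwatch = boto3.client("cloudwatch")
    try:
        for index in range(pynvml.nvmlDeviceGetCount()):
            util = pynvml.nvmlDeviceGetUtilizationRates(
                pynvml.nvmlDeviceGetHandleByIndex(index)
            )
            cloudwatch.put_metric_data(
                Namespace=namespace,
                MetricData=[{
                    "MetricName": "GPUUtilization",
                    "Dimensions": [
                        {"Name": "InstanceId", "Value": instance_id},
                        {"Name": "GpuIndex", "Value": str(index)},
                    ],
                    "Value": float(util.gpu),  # percent of time the GPU was busy
                    "Unit": "Percent",
                }],
            )
    finally:
        pynvml.nvmlShutdown()

Run on a schedule (for example, from cron), data points like these can feed the CloudWatch dashboard described in the diagram.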

CodeWhisperer full function generation

Introducing Amazon CodeWhisperer in the AWS Lambda console (In preview)

This blog post is written by Mark Richman, Senior Solutions Architect. Today, AWS is launching a new capability that integrates the Amazon CodeWhisperer experience with the AWS Lambda console code editor. Amazon CodeWhisperer is a machine learning (ML)-powered service that helps improve developer productivity. It generates code recommendations based on the code and comments written in […]

Solution overview

Building a low-code speech “you know” counter using AWS Step Functions

This post is written by Doug Toppin, Software Development Engineer, and Kishore Dhamodaran, Solutions Architect. In public speaking, filler phrases can distract the audience and reduce the value and impact of what you are telling them. Reviewing recordings of presentations can be helpful to determine whether presenters are using filler phrases. Instead of manually reviewing […]

System architecture of the Amazon EC2 DL1 instances.

Amazon EC2 DL1 instances deep dive

This post is written by Amr Ragab, Principal Solutions Architect, Amazon EC2. AWS is excited to announce that the new Amazon Elastic Compute Cloud (Amazon EC2) DL1 instances are now generally available in US East (N. Virginia) and US West (Oregon). DL1 provides up to 40% better price performance for training deep learning models as compared to […]

Step Functions workflow

Build workflows for Amazon Forecast with AWS Step Functions

This post shows how to create a Step Functions workflow for Forecast using AWS SDK service integrations, which let you call API actions from over 200 AWS services directly from a workflow. It shows two patterns for handling asynchronous tasks: the first pattern polls the describe-* API repeatedly, and the second uses the “Retry” option. Direct SDK integrations simplify workflow development because, in many cases, they can replace Lambda functions.
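To make the asynchronous handling concrete, here is a minimal Python sketch of the first pattern, polling a describe-* call until the resource is ready. In the post this loop is expressed as Step Functions states and SDK integrations rather than application code; the helper name, polling interval, and ARN below are illustrative placeholders.

import time
import boto3

forecast = boto3.client("forecast")

def wait_for_predictor(predictor_arn, poll_seconds=60):
    # Poll DescribePredictor until training finishes or fails.
    while True:
        status = forecast.describe_predictor(PredictorArn=predictor_arn)["Status"]
        if status == "ACTIVE":
            return status
        if "FAILED" in status:
            raise RuntimeError(f"Predictor creation failed: {status}")
        time.sleep(poll_seconds)

# Example: wait_for_predictor("arn:aws:forecast:us-east-1:123456789012:predictor/example")

The second pattern achieves a similar result declaratively, letting a “Retry” policy re-run the describe step with backoff until it reports a ready state, which removes the explicit loop.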

Blurred faces output

Creating a serverless face blurring service for photos in Amazon S3

A serverless face blurring service can provide a simpler way to process photos in workloads with large amounts of traffic. This post introduces an example application that blurs faces when images are saved in an S3 bucket. The S3 PutObject event invokes a Lambda function that uses Amazon Rekognition to detect faces and GraphicsMagick to process the images.
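As a rough illustration of that flow, the following Lambda handler sketch uses Amazon Rekognition to find face bounding boxes and, assuming a Pillow layer as a stand-in for the GraphicsMagick processing used in the post, blurs each region and writes the result under a separate prefix. The handler name and output prefix are placeholders, not taken from the post.

import io
from urllib.parse import unquote_plus

import boto3
from PIL import Image, ImageFilter

s3 = boto3.client("s3")
rekognition = boto3.client("rekognition")

def handler(event, context):
    record = event["Records"][0]["s3"]
    bucket = record["bucket"]["name"]
    key = unquote_plus(record["object"]["key"])

    # Detect face bounding boxes directly from the object in S3.
    faces = rekognition.detect_faces(
        Image={"S3Object": {"Bucket": bucket, "Name": key}},
        Attributes=["DEFAULT"],
    )["FaceDetails"]

    # Download the image and blur each detected face region.
    body = s3.get_object(Bucket=bucket, Key=key)["Body"].read()
    image = Image.open(io.BytesIO(body))
    width, height = image.size
    for face in faces:
        box = face["BoundingBox"]  # ratios of the image width and height
        left, top = int(box["Left"] * width), int(box["Top"] * height)
        right = left + int(box["Width"] * width)
        bottom = top + int(box["Height"] * height)
        region = image.crop((left, top, right, bottom)).filter(
            ImageFilter.GaussianBlur(radius=20)
        )
        image.paste(region, (left, top))

    # Write the blurred copy under a different prefix so the upload does not
    # re-trigger this function.
    out = io.BytesIO()
    image.save(out, format="PNG")
    s3.put_object(Bucket=bucket, Key=f"blurred/{key}", Body=out.getvalue())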