AWS Compute Blog

Category: Advanced (300)

Understanding and Remediating Cold Starts: An AWS Lambda Perspective

Cold starts are an important consideration when building applications on serverless platforms. In AWS Lambda, they refer to the initialization steps that occur when a function is invoked after a period of inactivity or during rapid scale-up. While typically brief and infrequent, cold starts can introduce additional latency, making it essential to understand them, especially […]
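
To make the distinction concrete, one common mitigation is to perform one-time setup outside the handler so it runs only during the cold start. The sketch below is illustrative only, not the post's prescription; the DynamoDB table name is hypothetical.

```python
import boto3

# One-time setup placed outside the handler runs during the cold start
# (once per execution environment) and is reused by warm invocations.
dynamodb = boto3.resource("dynamodb")
table = dynamodb.Table("example-table")  # hypothetical table name


def handler(event, context):
    # Code here runs on every invocation, warm or cold, so keep
    # per-request work in the handler and one-time setup above.
    item_id = event.get("id", "unknown")
    response = table.get_item(Key={"id": item_id})
    return {"statusCode": 200, "body": str(response.get("Item"))}
```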

Improving network observability with new AWS Outposts racks network metrics

With AWS Outposts racks, you can extend AWS infrastructure, services, APIs, and tools to on-premises locations. Providing performant, stable, and resilient network connections to both the parent AWS Region and the local network is essential to maintaining uninterrupted service. The release of two new Amazon CloudWatch metrics, VifConnectionStatus and VifBgpSessionState, gives you greater visibility into the operational status of the Outpost network connections. In this post, we discuss how to use these metrics to quickly identify network disruptions, with additional data points that can help reduce time to resolution.
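
As one illustration of how such metrics could be put to work, the sketch below creates a CloudWatch alarm on VifConnectionStatus with boto3. The namespace, dimension names, threshold semantics, and ARNs are assumptions for illustration only; confirm the exact values in the Outposts CloudWatch metrics documentation.

```python
import boto3

cloudwatch = boto3.client("cloudwatch")

# Alarm when the VIF connection status drops below 1 (assumed to mean "down").
# The namespace, dimensions, and threshold semantics are illustrative; check the
# Outposts documentation for the values actually published for your rack.
cloudwatch.put_metric_alarm(
    AlarmName="outpost-vif-connection-down",
    Namespace="AWS/Outposts",
    MetricName="VifConnectionStatus",
    Dimensions=[
        {"Name": "OutpostId", "Value": "op-0123456789abcdef0"},  # hypothetical Outpost ID
    ],
    Statistic="Minimum",
    Period=60,
    EvaluationPeriods=3,
    Threshold=1,
    ComparisonOperator="LessThanThreshold",
    TreatMissingData="breaching",
    AlarmActions=["arn:aws:sns:us-east-1:111122223333:network-alerts"],  # hypothetical SNS topic
)
```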

Implementing message prioritization with quorum queues on Amazon MQ for RabbitMQ

Quorum queues are now available on Amazon MQ for RabbitMQ starting with version 3.13. Quorum queues are a replicated First-In, First-Out (FIFO) queue type that uses the Raft consensus algorithm to maintain data consistency. On RabbitMQ version 3.13, quorum queues lack one key feature compared to classic queues: message prioritization. However, RabbitMQ version 4.0 introduced support […]
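
To sketch what this looks like from a client, the example below uses the pika library to declare a quorum queue and publish a message with a priority set. The broker URL, credentials, and queue name are placeholders, and the prioritization behavior assumes a broker running RabbitMQ 4.0 or later.

```python
import pika

# Placeholder endpoint and credentials for an Amazon MQ for RabbitMQ broker.
params = pika.URLParameters("amqps://user:password@broker-id.mq.us-east-1.amazonaws.com:5671")
connection = pika.BlockingConnection(params)
channel = connection.channel()

# Declare a quorum queue: the x-queue-type argument selects the replicated,
# Raft-based queue type instead of a classic queue.
channel.queue_declare(
    queue="orders",
    durable=True,
    arguments={"x-queue-type": "quorum"},
)

# Publish a message with a priority. On RabbitMQ 4.0 and later, quorum queues
# can deliver higher-priority messages ahead of normal ones.
channel.basic_publish(
    exchange="",
    routing_key="orders",
    body=b"expedited order",
    properties=pika.BasicProperties(priority=5, delivery_mode=2),
)

connection.close()
```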

Deploying external boot volumes with AWS Outposts

September 2025: AWS Outposts is now integrated with Dell PowerStore and HPE Alletra Storage MP B10000. The guidance in this blog post also applies to these storage systems. Read Announcing AWS Outposts third-party storage integration with Dell and HPE for more details. Building on our previous announcement, AWS Outposts third-party storage integration for data volumes, […]

Infrastructure as code translation for serverless using AI code assistants

Serverless applications commonly use infrastructure as code (IaC) frameworks to define and manage their cloud resources. Teams choose different IaC tools based on their skills, existing tooling, or compliance needs. As applications grow, the need to shift between IaC formats may arise to adopt new features or align with evolving standards. Developers are rapidly adopting AI-powered […]

Modernizing SOAP applications using Amazon API Gateway and AWS Lambda

This post demonstrates how you can modernize legacy SOAP applications using Amazon API Gateway and AWS Lambda to create bidirectional proxy architectures that enable integration between SOAP and REST systems without disrupting existing business operations. Many organizations today face the challenge of maintaining critical business systems that were built decades ago. These legacy applications power […]
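
As a rough sketch of one direction of such a proxy, a Lambda function behind API Gateway could accept a JSON request, wrap it in a SOAP envelope, and forward it to the legacy service. The endpoint, SOAP action, and field names below are hypothetical; a fuller translation layer would also map the XML response back to JSON.

```python
import json
import urllib.request

# Hypothetical legacy SOAP endpoint and action, for illustration only.
SOAP_ENDPOINT = "https://legacy.example.com/OrderService"
SOAP_ACTION = "http://example.com/GetOrderStatus"

ENVELOPE = """<?xml version="1.0" encoding="utf-8"?>
<soap:Envelope xmlns:soap="http://schemas.xmlsoap.org/soap/envelope/">
  <soap:Body>
    <GetOrderStatus xmlns="http://example.com/">
      <OrderId>{order_id}</OrderId>
    </GetOrderStatus>
  </soap:Body>
</soap:Envelope>"""


def handler(event, context):
    # API Gateway proxy integration delivers the REST payload as a JSON string.
    body = json.loads(event.get("body") or "{}")
    envelope = ENVELOPE.format(order_id=body.get("orderId", ""))

    request = urllib.request.Request(
        SOAP_ENDPOINT,
        data=envelope.encode("utf-8"),
        headers={"Content-Type": "text/xml; charset=utf-8", "SOAPAction": SOAP_ACTION},
    )
    with urllib.request.urlopen(request, timeout=10) as response:
        soap_response = response.read().decode("utf-8")

    # Return the SOAP response to the REST caller; parsing the XML into JSON
    # is left out of this sketch.
    return {
        "statusCode": 200,
        "headers": {"Content-Type": "application/xml"},
        "body": soap_response,
    }
```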

Orchestrating document processing with AWS AppSync Events and Amazon Bedrock

Many organizations implement intelligent document processing pipelines to extract meaningful insights from an increasing volume of unstructured content, such as insurance claims, loan applications, and more. Traditionally, these pipelines require significant engineering effort, as the implementation often involves using several machine learning (ML) models and orchestrating complex workflows. As organizations integrate these pipelines […]
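
As a small illustration of the extraction step such a pipeline might include, the sketch below asks an Amazon Bedrock model to pull structured fields out of a document's text using the Converse API. The model ID, prompt, and document content are placeholders, not the post's prescribed implementation.

```python
import boto3

bedrock = boto3.client("bedrock-runtime")

# Placeholder document text; in a pipeline this would come from an uploaded file.
document_text = "Claim #12345 filed by Jane Doe on 2025-01-15 for water damage."

# Hypothetical model ID and prompt, for illustration only.
response = bedrock.converse(
    modelId="anthropic.claude-3-haiku-20240307-v1:0",
    messages=[
        {
            "role": "user",
            "content": [
                {
                    "text": "Extract the claim number, claimant name, date, and "
                    f"claim type as JSON from this document:\n\n{document_text}"
                }
            ],
        }
    ],
)

# Print the model's extracted fields.
print(response["output"]["message"]["content"][0]["text"])
```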

Introducing AWS Lambda native support for Avro and Protobuf formatted Apache Kafka events

AWS Lambda now provides native support for Apache Avro and Protocol Buffers (Protobuf) formatted events with Apache Kafka event source mapping (ESM) when using Provisioned Mode. This support lets you validate your schema against popular schema registries, use and filter the more efficient binary event formats, and share data using […]
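
For orientation, a Python handler for a Kafka event source mapping might look like the sketch below. In the standard ESM payload, record values arrive base64 encoded; whether the decoded bytes can be treated as JSON after Avro/Protobuf deserialization depends on the schema registry configuration described in the post, so that step is an assumption here.

```python
import base64
import json


def handler(event, context):
    # The Kafka event source mapping groups records by "topic-partition".
    for topic_partition, records in event.get("records", {}).items():
        for record in records:
            # Record values are base64 encoded in the ESM payload. Treating the
            # decoded bytes as JSON assumes a schema registry is configured to
            # deserialize Avro/Protobuf records into JSON for the function.
            payload = base64.b64decode(record["value"])
            data = json.loads(payload)
            print(topic_partition, record["offset"], data)
    return {"batchItemFailures": []}
```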

Running and optimizing small language models on-premises and at the edge

As you move your generative AI implementations from prototype to production, you may discover the need to run foundation models (FMs) on-premises or at the edge to address data residency, information security (InfoSec) policy, or low-latency requirements. To meet these needs, this post provides guidance on deploying generative AI FMs into AWS Local Zones and AWS Outposts.