AWS Compute Blog

AWS re:Invent

ICYMI: Serverless pre:Invent 2019

With contributions from Chris Munns, Sr. Manager, Developer Advocacy, AWS Serverless. The last two weeks have been a frenzy of AWS service and feature launches, building up to AWS re:Invent 2019. Because so much has been announced, we thought we’d ship an ICYMI post summarizing the serverless-specific service features that […]

Read More

Tracking the state of AWS Lambda functions

AWS Lambda functions often require resources from other AWS services in order to execute successfully, such as AWS Identity and Access Management (IAM) roles or Amazon Virtual Private Cloud (Amazon VPC) network interfaces. When you create or update a function, Lambda provisions these required resources on your behalf so that your function can execute. In […]
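
A minimal sketch of how this state information can be read with the AWS SDK for Python (boto3); the function name is a placeholder, and the State, StateReason, and LastUpdateStatus fields are the ones introduced by this launch:

    import boto3

    lambda_client = boto3.client("lambda")

    # Read the lifecycle state of a function ("my-function" is a placeholder name).
    config = lambda_client.get_function_configuration(FunctionName="my-function")

    print(config["State"])                 # e.g. Pending, Active, Inactive, or Failed
    print(config.get("StateReason"))       # human-readable explanation, when present
    print(config.get("LastUpdateStatus"))  # status of the most recent configuration update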

Read More
Figure 5: Resource automation using Serverless Scheduler - a deeper dive into Part 2, resource allocation.

Decoupled Serverless Scheduler To Run HPC Applications At Scale on EC2

This post is written by Ludvig Nordstrom and Mark Duffield, on November 27, 2019. In this blog post, we dive into a cloud-native approach for running HPC applications at scale on EC2 Spot Instances, using a decoupled serverless scheduler. This architecture is ideal for many workloads in the HPC and EDA industries, and […]
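
As an illustrative sketch of the decoupled idea only (not the full architecture from the post), the producer side can enqueue HPC tasks into an Amazon SQS queue that a fleet of Spot worker instances drains independently; the queue URL and message format below are assumptions:

    import json
    import boto3

    sqs = boto3.client("sqs")

    # Hypothetical queue polled by the Spot worker fleet.
    QUEUE_URL = "https://sqs.us-east-1.amazonaws.com/123456789012/hpc-task-queue"

    def submit_task(job_id, command):
        # Enqueue one task; workers pick it up whenever capacity is available.
        sqs.send_message(
            QueueUrl=QUEUE_URL,
            MessageBody=json.dumps({"job_id": job_id, "command": command}),
        )

    for i in range(100):
        submit_task(job_id=f"job-{i}", command=f"./run_simulation --shard {i}")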

Read More
SAR Verified Author badge

Serverless Application Repository introduces Verified Author badge

Since its launch in February 2018, the AWS Serverless Application Repository (SAR) has become a rich library of components and serverless applications for builders. SAR allows developers to share these applications privately within their own accounts, or publicly with a broader audience. Today, we are excited to announce that SAR authors can now apply for […]

Read More

A simpler deployment experience with AWS SAM CLI

The AWS Serverless Application Model (SAM) CLI provides developers with a local tool for managing serverless applications on AWS. The command-line tool allows developers to initialize and configure applications, debug locally using IDEs like Visual Studio Code or JetBrains WebStorm, and deploy to the AWS Cloud. On November 25, we announced improvements to the […]

Read More
Asynchronous Function Execution Result

Introducing AWS Lambda Destinations

Today we’re announcing AWS Lambda Destinations for asynchronous invocations, a feature that provides visibility into Lambda function invocations and routes the execution results to AWS services, simplifying event-driven applications and reducing code complexity. Asynchronous invocations: when a function is invoked asynchronously, Lambda sends the event to an internal queue. A separate process reads […]
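
A minimal sketch of configuring destinations through boto3; the function name and destination ARNs are placeholders:

    import boto3

    lambda_client = boto3.client("lambda")

    # Route asynchronous invocation results: successes to an SNS topic,
    # failures to an SQS queue (both ARNs are placeholders).
    lambda_client.put_function_event_invoke_config(
        FunctionName="my-function",
        DestinationConfig={
            "OnSuccess": {"Destination": "arn:aws:sns:us-east-1:123456789012:success-topic"},
            "OnFailure": {"Destination": "arn:aws:sqs:us-east-1:123456789012:failure-queue"},
        },
    )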

Read More

Running Cost-effective queue workers with Amazon SQS and Amazon EC2 Spot Instances

This post is contributed by Ran Sheinberg, Sr. Solutions Architect, EC2 Spot, and Chad Schmutzer, Principal Developer Advocate, EC2 Spot (Twitter: @schmutze). Introduction: as a best practice, customers use Amazon Simple Queue Service (SQS) to run decoupled workloads in the AWS Cloud and increase their applications’ resilience. You […]
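
As a rough sketch of the consumer side of this pattern (the queue URL and processing logic are placeholders), a worker process on an EC2 Spot Instance might long-poll the queue and delete messages only after successful processing:

    import boto3

    sqs = boto3.client("sqs")
    QUEUE_URL = "https://sqs.us-east-1.amazonaws.com/123456789012/work-queue"  # placeholder

    def process(body):
        # Application-specific work goes here.
        print("processing:", body)

    while True:
        # Long polling reduces empty receives and keeps request costs low.
        resp = sqs.receive_message(
            QueueUrl=QUEUE_URL, MaxNumberOfMessages=10, WaitTimeSeconds=20
        )
        for msg in resp.get("Messages", []):
            process(msg["Body"])
            # Delete only after successful processing; otherwise the message
            # becomes visible again and another worker can retry it.
            sqs.delete_message(QueueUrl=QUEUE_URL, ReceiptHandle=msg["ReceiptHandle"])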

Read More
Default asynchronous invocation retry logs

New AWS Lambda controls for stream processing and asynchronous invocations

Today AWS Lambda is introducing new controls for asynchronous and stream processing invocations. These new features allow you to customize responses to Lambda function errors and build more resilient event-driven and stream-processing applications. Stream processing function invocations: when processing data from event sources such as Amazon Kinesis Data Streams and Amazon DynamoDB Streams, Lambda reads […]
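
An illustrative sketch of setting these controls with boto3 (the function name, event source mapping UUID, and values are placeholders):

    import boto3

    lambda_client = boto3.client("lambda")

    # Asynchronous invocations: cap retries and discard events older than one hour.
    lambda_client.put_function_event_invoke_config(
        FunctionName="my-function",              # placeholder
        MaximumRetryAttempts=1,
        MaximumEventAgeInSeconds=3600,
    )

    # Stream processing: limit retries, drop old records, and bisect failing batches.
    lambda_client.update_event_source_mapping(
        UUID="00000000-0000-0000-0000-000000000000",  # placeholder mapping ID
        MaximumRetryAttempts=2,
        MaximumRecordAgeInSeconds=600,
        BisectBatchOnFunctionError=True,
    )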

Read More
Configuring the Parallelization Factor from the AWS Lambda console.

New AWS Lambda scaling controls for Kinesis and DynamoDB event sources

AWS Lambda is introducing a new scaling parameter for Amazon Kinesis Data Streams and Amazon DynamoDB Streams event sources. Parallelization Factor, which defaults to 1, can be increased to allow more concurrent Lambda invocations for each shard. This allows for faster stream processing without the need to over-scale the number of shards, while still guaranteeing […]
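
A minimal sketch of raising the factor on an existing event source mapping with boto3 (the mapping UUID is a placeholder):

    import boto3

    lambda_client = boto3.client("lambda")

    # Allow up to 5 concurrent invocations per shard instead of the default of 1.
    lambda_client.update_event_source_mapping(
        UUID="00000000-0000-0000-0000-000000000000",  # placeholder mapping ID
        ParallelizationFactor=5,
    )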

Read More