AWS Compute Blog
Category: Post Types
How to authenticate private container registries using AWS Batch
This post was contributed by Clayton Thomas, Solutions Architect, AWS WW Public Sector SLG Govtech. Many AWS Batch users choose to store and consume their AWS Batch job container images on AWS using Amazon Elastic Container Registry (Amazon ECR). AWS Batch and Amazon Elastic Container Service (ECS) natively support pulling from Amazon ECR without any extra […]
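As context for the post above, the snippet below is a minimal sketch of how ECR authentication works when you handle it yourself rather than relying on the native Batch/ECS integration: it retrieves an ECR authorization token with boto3 and decodes it into credentials suitable for a registry login. The printed `docker login` usage is illustrative only.

```python
# Illustrative sketch: retrieving an Amazon ECR authorization token with boto3.
# AWS Batch and ECS do this for you when pulling from ECR; this shows the
# mechanics for cases where you manage registry credentials directly.
import base64
import boto3

ecr = boto3.client("ecr")

def get_ecr_credentials():
    """Return (username, password, registry) decoded from an ECR auth token."""
    response = ecr.get_authorization_token()
    auth = response["authorizationData"][0]
    token = base64.b64decode(auth["authorizationToken"]).decode("utf-8")
    username, password = token.split(":", 1)
    return username, password, auth["proxyEndpoint"]

if __name__ == "__main__":
    user, _pwd, registry = get_ecr_credentials()
    # Password intentionally not printed; pipe it to docker login --password-stdin.
    print(f"docker login --username {user} --password-stdin {registry}")
```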
How to run massively multiplayer games with EC2 Spot using Aurora Serverless
This post is written by Yahav Biran, Principal Solutions Architect, and Pritam Pal, Sr. EC2 Spot Specialist SA. Massively multiplayer online (MMO) game servers must dynamically scale their compute and storage to create a world-scale persistence simulation with millions of dynamic objects, such as complex AR/VR synthetic environments that match real-world fidelity. The Elastic Kubernetes […]
Python 3.9 runtime now available in AWS Lambda
You can now create new functions or upgrade existing Python functions to Python 3.9. Lambda’s support of the Python 3.9 runtime enables you to take advantage of improved performance and new features in this version. Additionally, the Lambda service now runs the __init__.py code before the handler, supports TLS 1.3, and provides enhanced logging for errors.
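For an existing function, switching to the new runtime is a configuration change. The sketch below shows one way to do it with boto3; the function name is a placeholder, and you should verify your dependencies support Python 3.9 before upgrading.

```python
# Minimal sketch: upgrading an existing Lambda function's runtime to python3.9.
# "my-python-function" is a hypothetical function name.
import boto3

lambda_client = boto3.client("lambda")

response = lambda_client.update_function_configuration(
    FunctionName="my-python-function",  # placeholder function name
    Runtime="python3.9",
)
print(response["Runtime"], response["LastUpdateStatus"])
```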
Understanding Amazon Machine Images for Red Hat Enterprise Linux with Microsoft SQL Server
This post is written by Kumar Abhinav, Sr. Product Manager EC2, and David Duncan, Principal Solution Architect. Customers now have access to AWS license-included Amazon Machine Images (AMI) for hosting their SQL Server workloads with Red Hat Enterprise Linux (RHEL). With these AMIs, customers can easily build highly available, reliable, and performant Microsoft SQL Server […]
How to quickly set up an experimental environment to run containers on x86 and AWS Graviton2 based Amazon EC2 instances
This post is written by Kevin Jung, a Solution Architect with Global Accounts at Amazon Web Services. AWS Graviton2 processors are custom designed by AWS using 64-bit Arm Neoverse cores. AWS offers the AWS Graviton2 processor in five new instance types – M6g, T4g, C6g, R6g, and X2gd. These instances are 20% lower cost and […]
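To illustrate the kind of experimental setup the post describes, the sketch below launches a Graviton2-based (arm64) instance with boto3. The AMI ID and key pair are placeholders; use an arm64 AMI available in your Region.

```python
# Rough sketch: launching a Graviton2 (arm64) EC2 instance for container
# experimentation. ImageId and KeyName are placeholders, not real resources.
import boto3

ec2 = boto3.client("ec2")

response = ec2.run_instances(
    ImageId="ami-0123456789abcdef0",  # placeholder: an arm64 (Graviton2) AMI
    InstanceType="m6g.medium",        # Graviton2-based instance type
    KeyName="my-key-pair",            # placeholder key pair name
    MinCount=1,
    MaxCount=1,
)
print(response["Instances"][0]["InstanceId"])
```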
Optimizing EC2 Workloads with Amazon CloudWatch
This post is written by David (Dudu) Twizer, Principal Solutions Architect, and Andy Ward, Senior AWS Solutions Architect – Microsoft Tech. In December 2020, AWS announced the availability of gp3, the next-generation General Purpose SSD volumes for Amazon Elastic Block Store (Amazon EBS), which allow customers to provision performance independent of storage capacity and provide […]
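Because gp3 decouples performance from capacity, an existing volume can be migrated and its IOPS and throughput set independently of size. A hedged example with boto3 follows; the volume ID and performance values are illustrative only.

```python
# Illustrative sketch: migrating an EBS volume to gp3 and provisioning
# performance independently of storage capacity.
import boto3

ec2 = boto3.client("ec2")

response = ec2.modify_volume(
    VolumeId="vol-0123456789abcdef0",  # placeholder volume ID
    VolumeType="gp3",
    Iops=4000,        # provisioned IOPS, set independently of volume size
    Throughput=250,   # throughput in MiB/s, a gp3-specific setting
)
print(response["VolumeModification"]["ModificationState"])
```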
Developing evolutionary architecture with AWS Lambda
This post shows how you can evolve a workload using hexagonal architecture. It explains how to add new functionality, change underlying infrastructure, or port the code base between different compute solutions. The main characteristics enabling this are loose coupling and strong encapsulation.
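The sketch below is one minimal illustration of the hexagonal (ports and adapters) idea in a Lambda handler: the domain logic depends on an abstract port, and a concrete adapter is injected at the edge, so the compute or storage backend can change without touching the core. All names are illustrative, not taken from the post.

```python
# Minimal ports-and-adapters sketch for a Lambda handler. The in-memory
# repository is a stand-in for DynamoDB or any other persistence adapter.
from abc import ABC, abstractmethod


class OrderRepository(ABC):
    """Port: the domain's contract for persistence."""

    @abstractmethod
    def save(self, order: dict) -> None: ...


class InMemoryOrderRepository(OrderRepository):
    """Adapter: one interchangeable implementation of the port."""

    def __init__(self):
        self.orders = []

    def save(self, order: dict) -> None:
        self.orders.append(order)


def place_order(order: dict, repo: OrderRepository) -> dict:
    """Domain logic: knows nothing about Lambda, HTTP, or the database."""
    repo.save(order)
    return {"status": "accepted", "order_id": order.get("id")}


# The Lambda entry point is a thin adapter around the domain function.
_repo = InMemoryOrderRepository()

def handler(event, context):
    return place_order(event.get("order", {}), _repo)
```

Swapping `InMemoryOrderRepository` for a DynamoDB-backed adapter, or moving the handler to a container-based compute service, leaves `place_order` unchanged, which is the loose coupling the post highlights.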
Using Amazon MQ for RabbitMQ as an event source for Lambda
Amazon MQ for RabbitMQ is an AWS managed version of RabbitMQ. The service manages the provisioning, setup, and maintenance of RabbitMQ, reducing operational overhead for companies. Now, with Amazon MQ for RabbitMQ as an event source for AWS Lambda, you can process messages from the service. This allows you to integrate Amazon MQ for RabbitMQ […]
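A short sketch of a consuming Lambda function is shown below. It assumes the event source mapping delivers batches under `rmqMessagesByQueue` with base64-encoded message bodies; verify the shape against your own test events, and treat the JSON handling as illustrative.

```python
# Sketch of a Lambda handler processing an Amazon MQ for RabbitMQ event batch.
import base64
import json


def handler(event, context):
    # Messages are grouped by queue in the event payload.
    for queue, messages in event.get("rmqMessagesByQueue", {}).items():
        for message in messages:
            body = base64.b64decode(message["data"]).decode("utf-8")
            try:
                payload = json.loads(body)
            except json.JSONDecodeError:
                payload = {"raw": body}
            # Application-specific handling of `payload` goes here.
            print(f"Queue {queue}: {payload}")
    return {"batchProcessed": True}
```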
Monitoring and troubleshooting serverless data analytics applications
In this post, I show how the existing settings in the Alleycat application are not sufficient for handling the expected amount of traffic. I walk through the metrics visualizations for Kinesis Data Streams, Lambda, and DynamoDB to find which quotas should be increased.
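As a taste of the metric analysis involved, the sketch below queries one commonly watched Kinesis Data Streams metric, GetRecords.IteratorAgeMilliseconds, with boto3. The stream name and time window are placeholders.

```python
# Illustrative sketch: pulling iterator age datapoints for a Kinesis stream
# from CloudWatch to spot a consumer falling behind.
from datetime import datetime, timedelta
import boto3

cloudwatch = boto3.client("cloudwatch")

response = cloudwatch.get_metric_statistics(
    Namespace="AWS/Kinesis",
    MetricName="GetRecords.IteratorAgeMilliseconds",
    Dimensions=[{"Name": "StreamName", "Value": "alleycat-stream"}],  # placeholder
    StartTime=datetime.utcnow() - timedelta(hours=1),
    EndTime=datetime.utcnow(),
    Period=300,
    Statistics=["Maximum"],
)
for point in sorted(response["Datapoints"], key=lambda p: p["Timestamp"]):
    print(point["Timestamp"], point["Maximum"])
```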
Building leaderboard functionality with serverless data analytics
In this post, I explain the all-time leaderboard logic in the Alleycat application. This is an asynchronous, eventually consistent process that checks batches of incoming records for new personal records. It uses Kinesis Data Firehose to provide a zero-administration way to deliver and process large batches of records continuously.
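Producing into that pipeline can be as simple as a single `PutRecord` call per result. The sketch below shows the idea with boto3; the delivery stream name and record fields are assumptions for illustration, not the application's actual schema.

```python
# Small sketch: sending one result record to a Kinesis Data Firehose
# delivery stream, newline-delimited so batches concatenate cleanly.
import json
import boto3

firehose = boto3.client("firehose")

record = {"userId": "racer-123", "distance": 1520, "timestamp": 1630000000}

response = firehose.put_record(
    DeliveryStreamName="alleycat-results",  # placeholder delivery stream name
    Record={"Data": (json.dumps(record) + "\n").encode("utf-8")},
)
print(response["RecordId"])
```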