AWS for M&E Blog

re:Invent releases and launches for M&E workloads

re:Invent 2020 brought more than 30 launches and announcements of new services and major features. The following is a tailored list of those most relevant to media workloads. Watch the full Andy Jassy keynote here for even more announcements and information.

Making media workloads faster and smarter

AWS Proton

AWS Proton is the first fully managed application deployment service for container and serverless applications. Platform teams can use Proton to connect and coordinate all the different tools needed for infrastructure provisioning, code deployments, monitoring, and updates. Proton enables platform teams to give developers an easy way to deploy their code using containers and serverless technologies, while maintaining the management tools, governance, and visibility needed to ensure consistent standards and best practices. Learn more

AWS Local Zones

AWS Local Zones previews are now available in Boston, Houston, and Miami, with plans to launch 12 additional AWS Local Zones throughout 2021 in key metro areas in the United States including Atlanta, Chicago, and New York. Using these new AWS Local Zones, customers will now be able to deliver ultra-low latency applications to end-users in cities across the continental United States. Customers can use Local Zones and Wavelength Zones for content production in major metro areas, and AWS Outposts for on-set/on-venue hardware as needed. Learn more
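
For example, here is a minimal boto3 sketch of opting an account in to a Local Zone group and creating a subnet there so instances and volumes can be placed close to end users. The zone names, VPC ID, and CIDR block are illustrative placeholders; availability varies by account and Region.

```python
import boto3

ec2 = boto3.client("ec2", region_name="us-east-1")

# Opt the account in to a Local Zone group (zone names are illustrative;
# use describe_availability_zones to see what is available to your account).
ec2.modify_availability_zone_group(
    GroupName="us-east-1-bos-1",
    OptInStatus="opted-in",
)

# Create a subnet in the Local Zone so EC2 instances and EBS volumes
# can be launched close to end users in that metro area.
subnet = ec2.create_subnet(
    VpcId="vpc-0123456789abcdef0",   # placeholder VPC ID
    CidrBlock="10.0.128.0/20",       # placeholder CIDR block
    AvailabilityZone="us-east-1-bos-1a",
)
print(subnet["Subnet"]["SubnetId"])
```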

AWS Wavelength Zone 

An AWS Wavelength Zone is now available in Las Vegas, in addition to the seven previously announced cities: Boston, the San Francisco Bay Area, New York City, Washington DC, Atlanta, Dallas, and Miami. AWS Wavelength brings AWS services to the edge of the 5G network, minimizing the latency to connect to an application from 5G-connected devices. Application traffic can reach application servers running in Wavelength Zones (AWS infrastructure deployments that embed AWS compute and storage services within communications service providers' datacenters at the edge of 5G networks) without leaving the telco provider's network. This avoids the extra network hops to the internet that can add tens of milliseconds of latency and prevent customers from taking full advantage of the bandwidth and latency advancements of 5G. Learn more

AWS Outposts 1U and 2U Servers

AWS Outposts 1U and 2U form factors are rack-mountable servers that provide local compute and networking services to edge locations that have limited space or smaller capacity requirements. Outposts servers are ideal for customers with low-latency or local data processing needs for on-premises locations, like retail stores, branch offices, healthcare provider locations, or factory floors. AWS will deliver Outposts servers directly to you, and you can either have your onsite personnel install them or have them installed by a preferred third-party contractor. After the Outposts servers are connected to your network, AWS will remotely provision compute and storage resources so you can start launching applications. AWS Outposts 1U and 2U form factors will be available in 2021. Learn more

Amazon SageMaker Pipelines

Amazon SageMaker Pipelines is the world's first machine learning (ML) CI/CD service accessible to every developer and data scientist. ML workflows are hard to build and have typically been out of reach for all but the largest enterprises. SageMaker Pipelines brings CI/CD practices to ML, reducing the months of coding required to manually stitch together different code packages to just a few hours. It takes care of the heavy lifting involved in managing the dependencies between each step of the workflow and orchestrates them, so you can scale to thousands of models in production and expand your use of machine learning across more lines of business. Learn more
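
As an illustration, the following is a minimal sketch of a one-step pipeline built with the SageMaker Python SDK. The execution role ARN, S3 paths, and pipeline name are placeholder assumptions; a real CI/CD workflow would add processing, evaluation, and model registration steps.

```python
import sagemaker
from sagemaker.estimator import Estimator
from sagemaker.inputs import TrainingInput
from sagemaker.workflow.pipeline import Pipeline
from sagemaker.workflow.steps import TrainingStep

session = sagemaker.Session()
region = session.boto_region_name
role = "arn:aws:iam::123456789012:role/SageMakerExecutionRole"  # placeholder role

# A built-in XGBoost training job as the single step of the pipeline.
image_uri = sagemaker.image_uris.retrieve("xgboost", region, version="1.2-1")
estimator = Estimator(
    image_uri=image_uri,
    role=role,
    instance_count=1,
    instance_type="ml.m5.xlarge",
    output_path="s3://my-bucket/output",  # placeholder bucket
    sagemaker_session=session,
)
estimator.set_hyperparameters(objective="reg:squarederror", num_round=50)

train_step = TrainingStep(
    name="TrainModel",
    estimator=estimator,
    inputs={"train": TrainingInput("s3://my-bucket/train.csv", content_type="text/csv")},
)

pipeline = Pipeline(name="MediaMLPipeline", steps=[train_step], sagemaker_session=session)
pipeline.upsert(role_arn=role)  # create or update the pipeline definition
execution = pipeline.start()    # kick off a run
```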

Amazon SageMaker Data Wrangler

SageMaker Data Wrangler takes the tedium out of preparing training data by allowing data scientists and ML engineers to analyze and prepare data for machine learning applications from a single interface. Instead of requiring complex queries to collect data from different sources, SageMaker Data Wrangler connects to data sources with just a few clicks. Its ready-to-use visualization templates and built-in data transforms streamline the process of cleaning, verifying, and exploring data so you can produce accurate ML models without writing a single line of code. Once your training data is prepared, you can automate data preparation and, through integration with SageMaker Pipelines, add it as a step into your ML workflow. Learn more

Amazon SageMaker Feature Store

Amazon SageMaker Feature Store is a purpose-built repository for machine learning (ML) features that serves them both in real time and in batch. Using SageMaker Feature Store, you can store, discover, and share features so you don't need to recreate the same features for different ML applications, saving months of development effort. Your ML models use inputs called "features" to make predictions. Features need to be available in large batches for training and also in real time to make fast predictions. The quality of your predictions depends on keeping features consistent, but doing so across training and inference environments typically requires months of coding and deep expertise. Amazon SageMaker Feature Store provides a consistent set of features so you get exactly the same features for training and inference, and you can easily share features across your organization, which improves collaboration and eliminates rework. Learn more
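
A minimal sketch of creating and populating a feature group with the SageMaker Python SDK might look like the following; the feature group name, role ARN, S3 location, and example features are illustrative assumptions.

```python
import time
import pandas as pd
import sagemaker
from sagemaker.feature_store.feature_group import FeatureGroup

session = sagemaker.Session()
role = "arn:aws:iam::123456789012:role/SageMakerExecutionRole"  # placeholder role

# Example feature records, e.g. per-title engagement features for a catalog.
df = pd.DataFrame(
    {
        "title_id": ["tt001", "tt002"],
        "avg_watch_minutes": [42.5, 17.0],
        "event_time": [time.time(), time.time()],
    }
)
df["title_id"] = df["title_id"].astype("string")  # string dtype so the type can be inferred

feature_group = FeatureGroup(name="title-engagement-features", sagemaker_session=session)
feature_group.load_feature_definitions(data_frame=df)  # infer feature types from the DataFrame

# Create both an online store (low-latency reads for inference) and an
# offline store in S3 (large batches for training).
feature_group.create(
    s3_uri="s3://my-bucket/feature-store",  # placeholder offline store location
    record_identifier_name="title_id",
    event_time_feature_name="event_time",
    role_arn=role,
    enable_online_store=True,
)
while feature_group.describe()["FeatureGroupStatus"] == "Creating":
    time.sleep(5)  # wait for the feature group to become active

feature_group.ingest(data_frame=df, max_workers=1, wait=True)
```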

Enhancements to AWS core services

Amazon CloudFront Origin Shield

Amazon CloudFront Origin Shield is a centralized caching layer that helps increase your cache hit ratio and reduce the load on your origin. Origin Shield also decreases your origin operating costs by collapsing requests across Regions so that as few as one request per object goes to your origin. You can also use Lambda@Edge with Origin Shield to enable advanced serverless logic, such as dynamic origin load balancing. Customers using Origin Shield for live streaming, image handling, or multi-CDN workloads have reported up to a 57% reduction in their origin's load.
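
Origin Shield is enabled per origin in the distribution configuration. The boto3 sketch below shows the general shape of the change; the distribution ID is a placeholder, and you would pick the Origin Shield Region closest to your origin.

```python
import boto3

cloudfront = boto3.client("cloudfront")
dist_id = "E1EXAMPLE12345"  # placeholder distribution ID

# update_distribution requires the full current config plus its ETag.
resp = cloudfront.get_distribution_config(Id=dist_id)
config, etag = resp["DistributionConfig"], resp["ETag"]

# Enable Origin Shield on the first origin, choosing a Region close to the origin.
config["Origins"]["Items"][0]["OriginShield"] = {
    "Enabled": True,
    "OriginShieldRegion": "us-east-1",
}

cloudfront.update_distribution(DistributionConfig=config, Id=dist_id, IfMatch=etag)
```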

Amazon Elastic Container Service (ECS) Anywhere

Amazon Elastic Container Service (ECS) Anywhere is a capability in Amazon ECS that enables you to easily run and manage container-based applications on premises, including on virtual machines (VMs), bare metal servers, and other customer-managed infrastructure. With ECS Anywhere, you will be able to use ECS on any compute infrastructure, whether in AWS Regions, AWS Local Zones, AWS Wavelength, on AWS Outposts, or in any on-premises environment, without installing or operating container orchestration software. Sign up here to receive updates about this upcoming feature.

AWS Lambda Container Image Support 

AWS Lambda now supports packaging and deploying functions as container images, making it easy for you to build Lambda-based applications using familiar container image tooling, workflows, and dependencies. Customers can create their container deployment images starting from either the AWS Lambda provided base images or one of their preferred community or private enterprise images. Learn more
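
As a sketch, a container image already pushed to Amazon ECR can be registered as a function with a single API call; the function name, image URI, and role ARN below are placeholders.

```python
import boto3

lambda_client = boto3.client("lambda")

# Register a Lambda function from a container image stored in Amazon ECR.
lambda_client.create_function(
    FunctionName="media-transcode-trigger",                                   # placeholder name
    PackageType="Image",
    Code={"ImageUri": "123456789012.dkr.ecr.us-east-1.amazonaws.com/transcode:latest"},
    Role="arn:aws:iam::123456789012:role/LambdaExecutionRole",                # placeholder role
    Timeout=60,
    MemorySize=1024,
)
```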

Amazon EC2 Mac Instances

Mac instances enable customers to run on-demand macOS workloads in the cloud for the first time, extending the flexibility, scalability, and cost benefits of AWS to all Apple developers. Customers who rely on the Xcode IDE to create iPhone, iPad, Mac, Apple Watch, Apple TV, and Safari apps can now provision and access macOS environments within minutes with simple mouse clicks or API calls, dynamically scale capacity as needed, and benefit from AWS's pay-as-you-go pricing. Amazon EC2 Mac instances are built on Mac mini computers and offer customers a choice of the macOS Mojave (10.14) and macOS Catalina (10.15) versions. Learn more
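
Because EC2 Mac instances run on dedicated Mac mini hosts, provisioning via the API is a two-step process: allocate a Dedicated Host, then launch the instance onto it. The boto3 sketch below assumes placeholder values for the Availability Zone and macOS AMI ID.

```python
import boto3

ec2 = boto3.client("ec2", region_name="us-east-1")

# EC2 Mac instances run on dedicated Mac mini hosts, so allocate a host first.
host = ec2.allocate_hosts(
    InstanceType="mac1.metal",
    AvailabilityZone="us-east-1a",  # placeholder AZ
    Quantity=1,
)
host_id = host["HostIds"][0]

# Launch a macOS instance on that host; the AMI ID is a placeholder for a
# macOS Mojave or Catalina AMI in your Region.
ec2.run_instances(
    ImageId="ami-0abcdef1234567890",
    InstanceType="mac1.metal",
    MinCount=1,
    MaxCount=1,
    Placement={"Tenancy": "host", "HostId": host_id},
)
```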

Amazon EC2 D3 and D3en Instances

Amazon EC2 D3 and D3en instances provide cost-effective, high-capacity local storage per vCPU for massively scaled storage workloads. D3en instances, the enhanced storage and high-speed networking variants, provide 7.5x higher networking speed, 100% higher disk throughput, 7x more storage capacity (up to 336 TB), and 80% lower cost per TB of storage compared to D2 instances. D3 instances are a great fit for dense storage workloads including big data and analytics, data warehousing, and high-scale file systems. D3en instances are a great fit for dense and distributed workloads including high-capacity data lakes, clustered file systems, and other multi-node storage systems with significant inter-node I/O. With D3 and D3en instances, you can easily migrate from previous-generation D2 instances or on-premises infrastructure to a platform optimized for dense HDD storage workloads. Learn more

Amazon EC2 Instances Powered by AWS Graviton2 Processors

The new general purpose (M6g), general purpose burstable (T4g), compute optimized (C6g), and memory optimized (R6g) Amazon EC2 instances deliver up to 40% better price performance over comparable x86-based instances for a broad spectrum of workloads including application servers, open source databases, in-memory caches, microservices, gaming servers, electronic design automation, high-performance computing, and video encoding. These instances are powered by the new AWS Graviton2 processors, which deliver up to 7x the performance, 4x the number of compute cores, 2x larger private caches per core, and 5x faster memory compared to first-generation AWS Graviton processors. AWS Graviton2 processors also provide 2x faster floating-point performance per core for scientific and high-performance computing workloads, custom hardware acceleration for compression workloads, fully encrypted DRAM memory, and optimized instructions for faster CPU-based machine learning inference. Learn more

Amazon EC2 G4ad instances

G4ad instances are powered by AMD Radeon Pro V520 GPUs, providing the best price performance for graphics-intensive applications in the cloud. These instances offer up to 45% better price performance compared to G4dn instances, which were already the lowest cost instances in the cloud, for graphics applications such as remote graphics workstations, game streaming, and rendering that leverage industry-standard APIs such as OpenGL, DirectX, and Vulkan. They provide up to 4 AMD Radeon Pro V520 GPUs, 64 vCPUs, 25 Gbps networking, and 2.4 TB of local NVMe-based SSD storage. Learn more

Amazon EC2 instances powered by Habana Accelerators

Amazon EC2 instances powered by Habana accelerators are a new type of EC2 instance specifically optimized for deep learning training workloads, delivering the lowest cost to train machine learning models in the cloud. Habana-based instances are ideal for deep learning training workloads in applications such as natural language processing, object detection and classification, recommendation engines, and autonomous vehicle perception. Customers will be able to launch the new EC2 instances using AWS Deep Learning AMIs, or via Amazon EKS and ECS for containerized applications, and will also be able to use these instances via Amazon SageMaker. Learn more

Amazon EC2 M5 Instances

Amazon EC2 M5 Instances are the next generation of the Amazon EC2 General Purpose compute instances. M5 instances offer a balance of compute, memory, and networking resources for a broad range of workloads, including web and application servers, small and mid-sized databases, cluster computing, gaming servers, caching fleets, and app development environments. Additionally, M5d, M5dn, and M5ad instances have local storage, offering up to 3.6 TB of NVMe-based SSD storage. Learn more

Amazon EC2 R5 Instances

Amazon EC2 R5 instances are the next generation of memory optimized instances for the Amazon Elastic Compute Cloud. R5 instances are well suited for memory intensive applications such as high-performance databases, distributed web scale in-memory caches, mid-size in-memory databases, real time big data analytics, and other enterprise applications. Additionally, you can choose from a selection of instances that have options for local NVMe storage, EBS optimized storage (up to 60 Gbps), and networking (up to 100 Gbps). Learn more

Amazon S3 Strong Consistency

Amazon S3 now delivers strong read-after-write consistency automatically for all applications. Unlike other cloud providers, Amazon S3 delivers strong read-after-write consistency for any storage request, without changes to performance or availability, without sacrificing regional isolation for applications, and at no additional cost. After a successful write of a new object or an overwrite of an existing object, any subsequent read request immediately receives the latest version of the object. S3 also provides strong consistency for list operations, so after a write, you can immediately perform a listing of the objects in a bucket with all changes reflected. Learn more
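
In practice this means a read or list issued immediately after a successful write reflects that write, with no retry or read-after-write workaround needed. A small boto3 sketch (the bucket and key names are placeholders):

```python
import boto3

s3 = boto3.client("s3")
bucket, key = "my-media-bucket", "renders/frame-0001.exr"  # placeholders

# Write (or overwrite) an object...
s3.put_object(Bucket=bucket, Key=key, Body=b"new frame data")

# ...and immediately read it back: the GET returns the bytes just written,
# and the new key appears in list results right away.
latest = s3.get_object(Bucket=bucket, Key=key)["Body"].read()
listing = s3.list_objects_v2(Bucket=bucket, Prefix="renders/")
assert latest == b"new frame data"
assert any(obj["Key"] == key for obj in listing.get("Contents", []))
```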

Amazon EBS Provisioned IOPS Volume

Provisioned IOPS volumes, backed by solid-state drives (SSDs), are the highest performance Elastic Block Store (EBS) storage volumes, designed for your critical, IOPS-intensive and throughput-intensive workloads that require low latency. Learn more

io2 Block Express

For customers who have even higher performance requirements than are currently supported by a single io2 volume, we are previewing io2 volumes that run on EBS Block Express, the next generation of our block storage architecture. io2 Block Express volumes can be provisioned to deliver peak IOPS of 256,000. For these volumes, any IOPS provisioned over 64,000 are charged at a rate a further 30% lower than the second tier ($0.032 per provisioned IOPS-month for IOPS over 64,000). This lowers the effective rate to about $0.038 per provisioned IOPS-month on a volume provisioned with 256,000 IOPS. You can request access to io2 Block Express volumes here.
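
The effective rate quoted above follows from the tiered io2 pricing. The worked example below assumes the standard published rates for the first two tiers ($0.065 and $0.046 per provisioned IOPS-month), which are not stated in this post; the rate for IOPS above 64,000 comes from the paragraph above.

```python
# Effective per-IOPS rate for a 256,000 IOPS io2 Block Express volume.
tiers = [
    (32_000, 0.065),   # first 32,000 IOPS (assumed standard io2 rate)
    (32_000, 0.046),   # 32,001 - 64,000 IOPS (assumed standard io2 rate)
    (192_000, 0.032),  # IOPS above 64,000, up to 256,000 (rate from the text above)
]

provisioned = sum(iops for iops, _ in tiers)             # 256,000 IOPS
monthly_cost = sum(iops * rate for iops, rate in tiers)  # $9,696 per month
effective_rate = monthly_cost / provisioned              # ~$0.0379 per IOPS-month

print(f"{provisioned:,} IOPS -> ${monthly_cost:,.0f}/mo, ${effective_rate:.3f}/IOPS-mo")
# 256,000 IOPS -> $9,696/mo, $0.038/IOPS-mo
```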

Amazon EBS gp3 Volume

Amazon EBS gp3 volumes are the latest generation of general-purpose SSD-based EBS volumes that enable customers to provision performance independent of storage capacity, while providing up to 20% lower price per GB than existing gp2 volumes. With gp3 volumes, customers can scale IOPS (input/output operations per second) and throughput without needing to provision additional block storage capacity. This means customers only pay for the storage they need. Learn more
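
For example, a gp3 volume can be created with IOPS and throughput provisioned above the 3,000 IOPS / 125 MiB/s baseline without increasing its size; the boto3 sketch below uses placeholder values.

```python
import boto3

ec2 = boto3.client("ec2", region_name="us-east-1")

# A 500 GiB gp3 volume with IOPS and throughput raised above the baseline
# without adding capacity.
ec2.create_volume(
    AvailabilityZone="us-east-1a",  # placeholder AZ
    VolumeType="gp3",
    Size=500,          # GiB
    Iops=6000,         # provisioned independently of size
    Throughput=500,    # MiB/s, also independent of size
    TagSpecifications=[{
        "ResourceType": "volume",
        "Tags": [{"Key": "Name", "Value": "media-cache"}],
    }],
)
```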

Stay tuned for even more announcements in weeks 2 and 3, and check out our curated guide to re:Invent for M&E attendees.