AWS Outposts rack delivers fully managed AWS infrastructure, native AWS services, APIs, and tools to virtually any on-premises facility. AWS Outposts rack enables applications that need to run on premises due to low-latency, local data processing, or local data storage needs, while removing the undifferentiated heavy lifting required to procure, manage, and upgrade on-premises infrastructure.
Compute and storage
You can choose from a range of pre-validated Outposts rack configurations offering a mix of Amazon Elastic Compute Cloud (EC2), Amazon Elastic Block Store (EBS), and Amazon Simple Storage Service (S3) capacity designed to meet a variety of application and data residency needs. You can also contact AWS to create a customized configuration designed for your unique application needs.
The AWS Outposts rack catalog includes options supporting the latest generation of Intel-powered EC2 instance types, with or without local instance storage.
General purpose (M5/M5d) instances provide a balance of compute, memory, and network resources and can be used for general-purpose workloads, web and application servers, backend servers for enterprise applications, gaming servers, and caching fleets.
Compute optimized (C5/C5d) instances are optimized for compute-intensive workloads and deliver cost-effective high performance at a low price-per-compute ratio. They are suited for compute-intensive applications such as batch processing, media transcoding, high performance web servers, high performance computing (HPC), scientific modeling, dedicated gaming servers, ad serving engines, and machine learning (ML) inference.
Memory optimized (R5/R5d) instances are designed to deliver fast performance for workloads that process large data sets in memory. They are well suited for memory intensive applications such as high-performance databases, distributed web scale in-memory caches, mid-size in-memory databases, and real-time big data analytics.
Graphics optimized (G4dn) instances are designed to help accelerate machine learning inference and graphics-intensive workloads. They can be used for machine learning inference for applications like adding metadata to an image, object detection, recommender systems, automated speech recognition, and language translation. They also provide a very cost-effective platform for building and running graphics-intensive applications, such as remote graphics workstations, video transcoding, photo-realistic design, and game streaming in the cloud.
I/O optimized (I3en) instances provide dense Non-Volatile Memory Express (NVMe) SSD instance storage optimized for low latency, high random I/O performance, and high sequential disk throughput, and offer the lowest price per GB of SSD instance storage on Amazon EC2. They are well suited for NoSQL databases (Cassandra, MongoDB, Redis), in-memory databases (Aerospike), scale-out transactional databases, distributed file systems, data warehousing, Elasticsearch, and analytics workloads.
Support for EC2 VT1 instances and EC2 instances powered by Graviton processors such as C6g, M6g, and R6g is coming soon.
Amazon EBS: AWS Outposts rack offers local instance storage and Elastic Block Store (EBS) gp2 volumes for persistent block storage. Just as in AWS Regions, you can use EBS gp2 volumes for boot or data volumes, and attach or detach EBS volumes to EC2 instances on your Outpost. It provides snapshot and restore capabilities and lets you increase volume size without any performance impact. All EBS volumes and snapshots on Outposts rack are fully encrypted by default. EBS is offered in tiers of 11 TB, 33 TB, and 55 TB.
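As a sketch of the workflow above, a gp2 volume can be created on an Outpost and attached to an instance with the AWS CLI; the Outpost ARN, volume ID, and instance ID below are placeholders for your own environment:

```shell
# Create a 100 GiB gp2 volume on the Outpost (encrypted by default on Outposts rack).
aws ec2 create-volume \
    --availability-zone us-west-2a \
    --size 100 \
    --volume-type gp2 \
    --outpost-arn arn:aws:outposts:us-west-2:123456789012:outpost/op-0abcd1234example \
    --tag-specifications 'ResourceType=volume,Tags=[{Key=Name,Value=outpost-data-vol}]'

# Attach the new volume to an EC2 instance running on the same Outpost.
aws ec2 attach-volume \
    --volume-id vol-0123456789abcdef0 \
    --instance-id i-0123456789abcdef0 \
    --device /dev/sdf
```

The availability zone must be the one the Outpost is anchored to; omitting `--outpost-arn` would create the volume in the Region instead.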
Amazon S3 on Outposts: S3 on Outposts delivers object storage to your on-premises AWS Outposts rack environment. Using the S3 APIs and features available in AWS Regions today, S3 on Outposts makes it easier to store and retrieve data on your Outpost, as well as secure the data, control access, tag, and report on it. S3 on Outposts enables you to store data on your Outpost, helping you meet local data residency requirements or satisfy low-latency needs by keeping data close to on-premises applications. S3 on Outposts provides a new Amazon S3 storage class, named ‘S3 Outposts’, which uses the S3 APIs and is designed to durably and redundantly store data across multiple devices and servers on your Outpost. You can add 26 TB, 48 TB, 96 TB, 240 TB, or 380 TB of S3 storage capacity to your Outpost (the 26 TB S3 option is only supported on Outposts rack with 11 TB EBS configured). You can create up to 100 buckets per AWS account on each Outpost. To get started using S3 on Outposts, visit the AWS Outposts Management Console to order an Outposts rack configuration that includes S3 storage; to add S3 storage to an existing Outpost, work with your account team.
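As a minimal sketch, an S3 on Outposts bucket is created through the S3 Control API, and objects are then accessed through an access point tied to your VPC; the Outpost ID, account ID, and names below are placeholders:

```shell
# Create a bucket on the Outpost (S3 on Outposts buckets live behind the s3control API).
aws s3control create-bucket \
    --bucket my-outposts-bucket \
    --outpost-id op-0abcd1234example

# Create an access point so applications in the VPC can reach the bucket.
aws s3control create-access-point \
    --account-id 123456789012 \
    --name my-outposts-ap \
    --bucket arn:aws:s3-outposts:us-west-2:123456789012:outpost/op-0abcd1234example/bucket/my-outposts-bucket \
    --vpc-configuration VpcId=vpc-0123456789abcdef0
```

Unlike Regional S3, all object access to S3 on Outposts goes through an access point, which scopes access to the specified VPC.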
Amazon EBS Snapshots: EBS Snapshots are a point-in-time copy of your EBS volumes. By default, snapshots of EBS volumes on your Outpost are stored on Amazon S3 in the Region. You can also use Amazon EBS Local Snapshots on Outposts to store snapshots of EBS volumes locally on your Outpost using Amazon S3 on Outposts. EBS Local Snapshots on Outposts require your Outpost to be provisioned with S3 on Outposts. They provide a simple and secure on premises data protection solution for EBS volumes on your Outpost. You can effectively meet your data residency requirements for EBS storage using resource-level IAM policies. You can also use EBS Local Snapshots on Outposts for disaster recovery and backup.
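The choice between Regional and local snapshot storage described above comes down to one parameter; as a hedged sketch with placeholder IDs, and assuming the Outpost is provisioned with S3 on Outposts:

```shell
# Store the snapshot locally on the Outpost using S3 on Outposts.
# Omitting --outpost-arn would store the snapshot in the parent AWS Region instead.
aws ec2 create-snapshot \
    --volume-id vol-0123456789abcdef0 \
    --outpost-arn arn:aws:outposts:us-west-2:123456789012:outpost/op-0abcd1234example \
    --description "Local EBS snapshot on Outposts"
```

Resource-level IAM policies can then require that snapshots of Outpost volumes always specify the Outpost ARN, enforcing data residency.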
CloudEndure Migration: CloudEndure Migration, offered by AWS, allows customers to migrate workloads onto AWS Outposts rack from physical, virtual, or cloud-based sources. It simplifies and expedites migrating workloads from on-premises locations, AWS Regions, and other clouds to Outposts. Additionally, using EBS Local Snapshots on Outposts, you can migrate workloads from any source directly onto Outposts rack, or from one Outpost to another, without requiring the EBS snapshot data to go through the Region, resulting in lower latencies, greater performance, and reduced costs.
CloudEndure Disaster Recovery: CloudEndure Disaster Recovery, offered by AWS, provides scalable, cost-effective business continuity for physical, virtual, and cloud-based workloads on AWS Outposts rack. Using CloudEndure Disaster Recovery you can replicate and recover from on-premises to Outposts rack, from AWS Regions onto Outposts rack, from Outposts rack into AWS Regions, and between two Outposts. Additionally, with EBS Local Snapshots on Outposts, you can replicate and recover workloads from any source directly onto Outposts rack without requiring the EBS snapshot data to go through an AWS Region, leading to lower latencies, greater performance, and reduced costs. CloudEndure Disaster Recovery improves resilience, enabling recovery point objectives (RPOs) of seconds and recovery time objectives (RTOs) of minutes.
You can seamlessly extend your existing Amazon VPC to your Outpost in your on-premises location. After installation, you can create a subnet in your regional VPC and associate it with an Outpost just as you associate subnets with an Availability Zone in an AWS Region. Instances in Outpost subnets communicate with other instances in the AWS Region using private IP addresses, all within the same VPC.
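Extending the VPC as described above means creating a subnet that targets the Outpost rather than an Availability Zone alone; as a sketch with placeholder IDs (the availability zone must be the one your Outpost is anchored to):

```shell
# Create a subnet in the Regional VPC and associate it with the Outpost.
aws ec2 create-subnet \
    --vpc-id vpc-0123456789abcdef0 \
    --cidr-block 10.0.3.0/24 \
    --availability-zone us-west-2a \
    --outpost-arn arn:aws:outposts:us-west-2:123456789012:outpost/op-0abcd1234example
```

Instances launched into this subnet run on the Outpost, while keeping private-IP connectivity to the rest of the VPC in the Region.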
Each Outpost provides a local gateway (LGW) that allows you to connect your Outpost resources with your on-premises networks. The LGW enables low-latency connectivity between the Outpost and any local data sources, end users, local machinery and equipment, or local databases.
You can provision an Application Load Balancer (ALB) to automatically distribute incoming HTTP(S) traffic across multiple targets on your Outposts rack, such as Amazon EC2 instances, containers, and IP addresses. ALB on Outposts is fully managed, operates in a single subnet, and scales automatically up to the capacity available on the Outposts rack to meet varying levels of application load without manual intervention.
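As a sketch, an ALB on Outposts is provisioned with the standard Elastic Load Balancing API by pointing it at a single Outpost subnet (placeholder IDs below); if the Outpost uses Customer-owned IP mode, the optional `--customer-owned-ipv4-pool` flag assigns addresses from your CoIP pool:

```shell
# Provision an internal Application Load Balancer in the Outpost subnet.
# ALB on Outposts operates in a single subnet and scales within the rack's capacity.
aws elbv2 create-load-balancer \
    --name outposts-alb \
    --type application \
    --scheme internal \
    --subnets subnet-0123456789abcdef0
```

Targets (EC2 instances, containers, or IP addresses on the Outpost) are then registered through target groups, the same as in the Region.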
With AWS Outposts Private Connectivity, you can establish a service link VPN connection from your Outpost to the AWS Region over AWS Direct Connect. Private Connectivity minimizes public internet exposure and removes the need for special firewall configurations.
Direct VPC routing vs. Customer-owned IP
Direct VPC routing for AWS Outposts allows your on-premises environment to communicate directly with the Outpost using the private subnets configured in the VPC. In this mode, the LGW automatically advertises all the VPC subnets to your on-premises network through BGP. Alternatively, you can use Customer-owned IP (CoIP) routing mode, where the Outpost uses a separate IP address pool that you provide from your on-premises network. If you choose CoIP, the IP address pool is assigned to the local gateway and advertised back to your network through BGP. In this mode, the local gateway performs NAT, translating instance addresses to CoIP addresses when communicating with the on-premises environment.
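In CoIP mode, making an instance reachable from the on-premises network means allocating an address from the customer-owned pool and associating it with the instance; a sketch with placeholder IDs:

```shell
# Allocate an address from the customer-owned IP (CoIP) pool on the Outpost.
aws ec2 allocate-address \
    --customer-owned-ipv4-pool ipv4pool-coip-0abcd1234example

# Associate the allocated CoIP address with an instance on the Outpost;
# the local gateway NATs the instance's private address to this CoIP address.
aws ec2 associate-address \
    --allocation-id eipalloc-0123456789abcdef0 \
    --instance-id i-0123456789abcdef0
```

With direct VPC routing, neither step is needed: the instance's private VPC address is advertised to the on-premises network as-is.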
Intra-VPC communication across multiple Outposts
You can add routes in your Outposts rack subnet route table to forward traffic between subnets within the same VPC spanning across multiple Outposts using LGW. This enables intra-VPC instance-to-instance communication across Outposts through your on-premises network, via direct VPC routing. With intra-VPC communication across multiple Outposts, you can build Multi-AZ like architectures for your on-premises applications running on Outposts racks that are anchored to two different Availability Zones (AZs).
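The route-table change described above can be sketched as follows, assuming two Outposts whose subnets are 10.0.3.0/24 and 10.0.4.0/24 (placeholders), with the mirror-image route added on the other Outpost:

```shell
# On Outpost A, send traffic destined for Outpost B's subnet through the
# local gateway, so it traverses the on-premises network via direct VPC routing.
aws ec2 create-route \
    --route-table-id rtb-0123456789abcdef0 \
    --destination-cidr-block 10.0.4.0/24 \
    --local-gateway-id lgw-0123456789abcdef0
```

With these routes in place, instances in the same VPC on different Outposts communicate directly without hairpinning through the parent Region.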
Amazon Route 53 Resolver on Outposts
Route 53 Resolver on Outposts allows you to resolve Domain Name System (DNS) queries locally on an Outpost to enhance the availability and performance of on-premises applications. When you enable Route 53 Resolver on Outposts, Route 53 automatically stores DNS responses locally on the Outpost. Optionally, you can connect the Resolver on the Outpost with DNS servers in your on-premises data centers through Route 53 Resolver endpoints. Route 53 Resolver on Outposts provides continued DNS resolution for applications running on the Outpost during unexpected network disconnects from the parent AWS Region. It also enables low-latency DNS resolution by serving DNS responses locally.
AWS services on Outposts rack
You can run a variety of AWS services locally to build and run your applications on premises.
Amazon ECS: Amazon ECS is a highly scalable, high-performance container orchestration service that supports Docker containers and allows you to easily run and scale containerized applications on Outposts rack. With ECS on Outposts you can run containerized applications that require low latencies to on-premises systems. Amazon ECS running on Outposts rack eliminates the need for you to install and operate your own container orchestration software, manage and scale a cluster of virtual machines, or schedule containers on those virtual machines in your on-premises environments. With simple API calls, you can launch and stop Docker-enabled applications and query the complete state of your application with the same ease as you manage containers in the AWS Regions today.
Amazon EKS: Amazon EKS is a managed service that makes it easy for you to run Kubernetes on AWS without needing to install and operate your own Kubernetes control plane. You can use EKS on Outposts to run containerized applications that require particularly low latencies to on premises systems. With EKS on Outposts, you can manage containers on premises with the same ease as you manage your containers in AWS Regions.
Amazon RDS on Outposts: RDS on Outposts supports Microsoft SQL Server, MySQL, and PostgreSQL database engines, with support for additional database engines coming soon. Amazon Relational Database Service (RDS) makes it easy to set up, operate, and scale a relational database in the cloud. Amazon RDS provides cost-efficient and resizable capacity while automating time-consuming administration tasks including infrastructure provisioning, database setup, patching, and backups, freeing you to focus on your applications. Amazon RDS on Outposts brings these same benefits to your on-premises Outposts rack deployments. You can run fully managed databases on premises for low latency workloads that need to be run in close proximity to on-premises data and applications. You can manage RDS databases both in the AWS Regions and on premises using the same AWS Management Console, APIs, and CLI. It also enables low-cost, high-availability hybrid deployments, with disaster recovery back to the AWS Region, read replica bursting to Amazon RDS in the AWS Region, and long-term archival in Amazon Simple Storage Service (Amazon S3) in the AWS Region.
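As a hedged sketch, an RDS on Outposts instance is created with the standard RDS API against a DB subnet group containing only Outpost subnets; all identifiers and the password below are placeholders, and `--backup-target outposts` (which keeps automated backups on the Outpost rather than in the Region) assumes the Outpost is provisioned with S3 on Outposts:

```shell
# Create a DB subnet group containing only the Outpost subnet.
aws rds create-db-subnet-group \
    --db-subnet-group-name outposts-subnet-group \
    --db-subnet-group-description "Outpost subnets for RDS" \
    --subnet-ids subnet-0123456789abcdef0

# Launch a managed MySQL instance on the Outpost.
aws rds create-db-instance \
    --db-instance-identifier outposts-mysql \
    --db-instance-class db.m5.large \
    --engine mysql \
    --allocated-storage 100 \
    --master-username admin \
    --master-user-password '<placeholder-password>' \
    --db-subnet-group-name outposts-subnet-group \
    --storage-encrypted \
    --backup-target outposts
```

The same console, APIs, and CLI manage this instance as manage any Regional RDS instance.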
Amazon ElastiCache on Outposts: ElastiCache is a fully managed in-memory data store, compatible with Redis or Memcached, optimized for real-time applications with sub-millisecond latency. Amazon ElastiCache on Outposts allows you to seamlessly set up, run, and scale popular open-source compatible in-memory data stores on AWS Outposts rack capacity, just as in the AWS Regions. You can build data-intensive apps or boost the performance of your existing databases by retrieving data from high-throughput, low-latency in-memory data stores. Amazon ElastiCache on Outposts enables real-time use cases like caching, session stores, gaming, geospatial services, real-time analytics, and queuing, when deployed for local data processing and low-latency applications.
Amazon EMR: Amazon EMR clusters running on AWS Outposts rack in your data center, co-location space, or on-premises facility provide a truly consistent and seamless hybrid cloud analytics experience. You can deploy secure and managed EMR clusters in your data center in minutes. This gives business users the latest versions of Apache Spark, Apache Hive, and Presto to access critical on-premises data sources and systems for big data analytics. When launching an EMR cluster into an Outpost, you can use the EMR console, SDK, or CLI to specify the subnet associated with your Outpost. Your EMR clusters run on the on-premises Outposts rack and appear in the EMR console like any other cluster.
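As the paragraph above notes, targeting the Outpost is just a matter of the subnet; a sketch with placeholder IDs and an illustrative release label:

```shell
# Launch a Spark cluster into the Outpost by specifying the Outpost subnet.
aws emr create-cluster \
    --name outposts-spark \
    --release-label emr-5.33.0 \
    --applications Name=Spark \
    --instance-type m5.xlarge \
    --instance-count 3 \
    --use-default-roles \
    --ec2-attributes SubnetId=subnet-0123456789abcdef0
```

Because the subnet is associated with the Outpost, all cluster nodes run on the rack, close to on-premises data sources.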
Upgrading services running on Outposts rack
As new versions of AWS services become available in the AWS Region, AWS services running locally on Outposts rack will be upgraded automatically to the latest version, just as in the AWS Region today. Services such as Amazon RDS on Outposts patch both the OS and database engines within scheduled maintenance windows with minimal downtime.
Access regional services
AWS Outposts rack is an extension of the AWS Region. You can seamlessly extend your Amazon Virtual Private Cloud on premises and connect to a broad range of services available in the AWS Region. You can access all regional AWS services in your private VPC environment — for example, through Interface Endpoints, Gateway Endpoints, or their regional public endpoints.
You can access AWS tools running in the AWS Region such as AWS CloudFormation, Amazon CloudWatch, AWS CloudTrail, AWS Elastic Beanstalk, AWS Cloud9, and others to run and manage applications on Outposts rack the same way as you do in the AWS Region today.
Security and compliance
Enhanced security with AWS Nitro
AWS Outposts rack builds on AWS Nitro System technologies that enable AWS to provide enhanced security, continuously monitoring, protecting, and verifying your Outpost’s instance hardware and firmware. With AWS Nitro, virtualization resources are offloaded to dedicated hardware and software, minimizing the attack surface. Finally, the Nitro System’s security model is locked down and prohibits administrative access, eliminating the possibility of human error and tampering.
AWS Outposts rack has an updated shared responsibility model for security. AWS is responsible for protecting Outposts rack's infrastructure, similar to how it secures infrastructure in the AWS Regions today. Customers are responsible for securing their applications running on Outposts rack, as they do in the Region today. With Outposts rack, customers are also responsible for the physical security of their Outposts racks, and for ensuring consistent networking to the Outpost.
Data-at-rest: Data is encrypted at rest by default on EBS volumes and S3 objects on Outposts rack.
Data-in-transit: Data is encrypted in transit between Outposts rack and the AWS Region, through the service link.
Deleting data: All data is deleted when instances are terminated in the same way as in the AWS Region.
Outposts rack is designed for high availability with redundant top-of-rack networking switches, power elements, and built-in, always active, additional capacity (if provisioned) to enable reliable auto recovery workflows the same way as in AWS Regions. As with AWS Auto Scaling in the AWS Regions today, we recommend following best practices for high-availability deployments and auto recovery workflows so that failover is straightforward in case of any underlying host issue. Customers can also deploy multiple Outposts at a site, each tied to a different Availability Zone, for even higher availability. In addition, customers can use EC2 placement groups on AWS Outposts rack to ensure instances within a group are placed on distinct Outposts racks to reduce the impact of hardware failures.
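The rack-level spread placement described above can be sketched with the EC2 placement group API; the group name, AMI, and subnet below are placeholders:

```shell
# Create a spread placement group that spreads instances across distinct racks.
aws ec2 create-placement-group \
    --group-name outposts-spread \
    --strategy spread \
    --spread-level rack

# Launch instances into the group; EC2 places them on distinct Outposts racks.
aws ec2 run-instances \
    --image-id ami-0123456789abcdef0 \
    --instance-type m5.large \
    --count 2 \
    --subnet-id subnet-0123456789abcdef0 \
    --placement GroupName=outposts-spread
```

If a rack-level hardware failure occurs, only the instances on the affected rack are impacted, not the whole group.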
AWS Resource Access Manager
AWS Outposts rack support for AWS Resource Access Manager (RAM) lets customers share access to Outposts rack resources – EC2 instances, EBS volumes, S3 capacity, subnets, and local gateways (LGWs) – across multiple accounts under the same AWS organization. This capability allows distributed teams and business units in customer organizations to configure VPCs, launch and run instances, and create EBS volumes on the shared Outpost.
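As a sketch, sharing an Outpost with another account in the same AWS Organization is a single RAM call; the Outpost ARN and account ID below are placeholders:

```shell
# Share the Outpost with account 210987654321 so its teams can create
# subnets, launch instances, and create EBS volumes on the shared capacity.
aws ram create-resource-share \
    --name outposts-share \
    --resource-arns arn:aws:outposts:us-west-2:123456789012:outpost/op-0abcd1234example \
    --principals 210987654321
```

Sharing within an Organization with sharing enabled requires no acceptance step; the resource appears in the consuming account automatically.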
VMware Cloud on AWS Outposts
VMware Cloud on AWS Outposts is a jointly-engineered solution that brings the VMware Cloud on AWS experience to virtually any datacenter, co-location space, or on-premises facility. It runs VMware’s enterprise-class Software-Defined Data Center (SDDC) on dedicated AWS Nitro System-based EC2 bare metal Outposts rack instances.
VMware Cloud on AWS Outposts is optimized for VMware workloads that need to remain on premises to meet low latency, local data processing, or data residency requirements. It simplifies IT operations by delivering a fully managed service on premises and allows you to innovate faster with direct access to 200+ native AWS services over Elastic Network Interface (ENI) or VMware Transit Connect.