AWS Storage Blog
Tag: AWS Identity and Access Management (IAM)
Migrate to Amazon S3 account regional namespaces
Since its launch in 2006, Amazon S3 has used a global namespace where bucket names must be unique across all AWS accounts and AWS Regions. This design has served customers well at scale, but organizations managing multiple accounts and environments often encounter naming collisions. When a bucket is deleted, its name returns to the global […]
Building automated AWS Regional availability checks with Amazon S3
Every day, organizations expand into new markets, migrate critical workloads across geographies, and build systems that need to operate reliably in multiple locations. At the root of these efforts is a simple question: “What can I deploy, and where?” The answer shapes important architecture decisions, from which AWS Regions to expand into, to how you […]
Automatically decompress files in Amazon S3 using AWS Step Functions
Every day, AWS customers process millions of compressed files in Amazon S3, from small ZIP archives to multi-gigabyte datasets. While decompressing a single file is straightforward, processing thousands of files efficiently requires complex orchestration, error handling, and infrastructure management. Consider this scenario: Your organization receives over 10,000 compressed files daily from partners, ranging from 5 […]
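The extraction step at the heart of such a pipeline can be sketched locally. In a real deployment a Lambda task invoked by Step Functions would stream each archive from S3 with `get_object`; here the archive is held in memory so the `zipfile` logic itself is easy to follow (file names below are illustrative):

```python
import io
import zipfile

def decompress_archive(archive_bytes: bytes) -> dict[str, bytes]:
    """Extract every member of a ZIP archive held in memory.

    In a Step Functions pipeline this would run inside a Lambda task,
    with archive_bytes streamed from S3; keeping it in memory here
    makes the core logic self-contained.
    """
    extracted = {}
    with zipfile.ZipFile(io.BytesIO(archive_bytes)) as zf:
        for name in zf.namelist():
            extracted[name] = zf.read(name)
    return extracted

# Build a small in-memory archive to demonstrate the round trip.
buf = io.BytesIO()
with zipfile.ZipFile(buf, "w", zipfile.ZIP_DEFLATED) as zf:
    zf.writestr("data/report.csv", "id,value\n1,42\n")

files = decompress_archive(buf.getvalue())
print(sorted(files))  # → ['data/report.csv']
```

Step Functions adds the orchestration around this core: a Map state fans out over the object keys, and retry/catch blocks handle corrupt or oversized archives.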
Applying Amazon S3 Object Lock at scale for petabytes of existing data
Organizations with petabytes of data in the cloud need a way to apply immutable storage protections to data that’s already been stored—whether for regulatory compliance or cyber resilience. Although you can enable write-once-read-many (WORM) controls for newly created storage, applying these protections to existing enterprise data at scale requires a systematic approach. Regulated industries have […]
Advance notice: Amazon S3 to disable the use of SSE-C encryption by default for all new buckets and select existing buckets in April 2026
Starting on April 6, 2026, we will be changing how server-side encryption with customer-provided keys (SSE-C) is enabled for Amazon S3 buckets. With this change, SSE-C will be disabled by default on all new S3 general purpose buckets. Furthermore, SSE-C will also be disabled for all existing buckets in Amazon Web Services (AWS) Accounts that […]
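Buckets that should not accept SSE-C uploads today can already deny them with a bucket policy. A hedged sketch, using the documented `s3:x-amz-server-side-encryption-customer-algorithm` condition key (the bucket name is a placeholder):

```json
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Sid": "DenySSECUploads",
      "Effect": "Deny",
      "Principal": "*",
      "Action": "s3:PutObject",
      "Resource": "arn:aws:s3:::amzn-s3-demo-bucket/*",
      "Condition": {
        "Null": {
          "s3:x-amz-server-side-encryption-customer-algorithm": "false"
        }
      }
    }
  ]
}
```

The `Null: false` condition matches any `PutObject` request that includes the SSE-C algorithm header, so those uploads are denied while all other encryption modes continue to work.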
How to use Amazon S3 Multi-Region Access Points to streamline and reduce the cost of writing across AWS Regions
Large global organizations often struggle to efficiently manage data copies across different geographic regions when using distributed object storage services. Although several approaches exist for cross-region data writing, common solutions such as data replication or streaming can be costly and introduce latency issues. Many customers have core services deployed globally across multiple Amazon Web Services […]
Build intelligent ETL pipelines using AWS Model Context Protocol and Amazon Q
Data scientists and engineers spend hours writing complex data pipelines to extract, transform, and load (ETL) data from various sources into their data lakes for data integration and creating unified data models to build business insights. The process involves understanding the source and target systems, discovering schemas, mapping source and target, writing and testing ETL […]
Derive intelligent storage insights using S3 Metadata and Model Context Protocol (MCP)
Organizations face mounting challenges in managing and operationalizing their ever-growing data assets for machine learning and analytics workflows. When dealing with billions and trillions of objects, teams struggle to know what data they have and how to efficiently locate specific datasets. Without proper data discovery and metadata management, teams spend valuable time searching for relevant […]
Accelerating Amazon S3 Batch Operations at scale with on-demand manifest generation
Modern enterprises routinely manage billions of objects across their cloud storage environments and need efficient bulk operations for disaster recovery, compliance management, data transfer, and cost optimization. Performing these operations manually or through custom scripts becomes impractical at scale, often creating operational bottlenecks when time-sensitive actions are necessary. Organizations frequently need to identify and process specific […]
Efficiently verify Amazon S3 data at scale with compute checksum operation
Organizations across industries must regularly verify the integrity of their stored datasets to protect valuable information, satisfy compliance requirements, and preserve trust. Media and entertainment customers validate assets to make sure that content remains intact, financial institutions run integrity checks to meet regulatory obligations, and research institutions confirm the reproducibility of scientific results. These verifications […]
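S3's additional checksums report each value as a base64-encoded digest (for example, the `ChecksumSHA256` field returned by `GetObjectAttributes`), so verifying a full object amounts to computing the same encoding locally and comparing. A minimal sketch with the object bytes held locally rather than fetched from S3 (note that multipart uploads may instead carry a checksum-of-checksums):

```python
import base64
import hashlib

def s3_style_sha256(data: bytes) -> str:
    """Return a SHA-256 digest, base64-encoded — the format S3 uses
    for the ChecksumSHA256 value of a full object."""
    return base64.b64encode(hashlib.sha256(data).digest()).decode("ascii")

# Compare a freshly computed checksum with one previously recorded
# (e.g. from GetObjectAttributes); a mismatch signals corruption.
payload = b"important dataset contents"
recorded = s3_style_sha256(payload)   # value S3 would have stored
assert s3_style_sha256(payload) == recorded
```

At scale, the same comparison can be fanned out across a bucket inventory so that integrity checks cover every object rather than a sample.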