Cloud storage is a simple and scalable way to store, access, and share data over the Internet. Cloud storage providers such as Amazon Web Services own and maintain the network-connected hardware and software, while you provision and use what you need via a web application. Using cloud storage eliminates the acquisition and management costs of buying and maintaining your own storage infrastructure, increases agility, provides global scale, and delivers "anywhere, anytime" access to data.
Storing data in the cloud lets IT departments transform three areas:
1. Total Cost of Ownership. With cloud storage, there is no hardware to purchase, storage to provision, or capital being used for "someday" scenarios. You can add or remove capacity on demand, quickly change performance and retention characteristics, and only pay for storage that you actually use. Less frequently accessed data can even be automatically moved to lower cost tiers in accordance with auditable rules, driving economies of scale.
2. Time to Deployment. When development teams are ready to execute, infrastructure should never slow them down. Cloud storage allows IT to quickly deliver the exact amount of storage needed, right when it's needed. This allows IT to focus on solving complex application problems instead of having to manage storage systems.
3. Information Management. Centralizing storage in the cloud creates a tremendous leverage point for new use cases. By using cloud storage lifecycle management policies, you can perform powerful information management tasks including automated tiering or locking down data in support of compliance requirements.
Ensuring your company's critical data is safe, secure, and available when needed is essential. There are several fundamental requirements when considering storing data in the cloud.
Durability. Data should be redundantly stored, ideally across multiple facilities and multiple devices in each facility. Natural disasters, human error, or mechanical faults should not result in data loss.
Availability. All data should be available when needed, but there is a difference between production data and archives. The ideal cloud storage will deliver the right balance of retrieval times and cost.
There are three types of cloud data storage, and each offers its own advantages and use cases:
1. Object Storage - Applications developed in the cloud often take advantage of object storage's vast scalability and metadata characteristics. Object storage solutions like Amazon Simple Storage Service (S3) are ideal for building modern applications from scratch that require scale and flexibility, and can also be used to import existing data stores for analytics, backup, or archive.
2. File Storage - Some applications need to access shared files and require a file system. This type of storage is often supported with a Network Attached Storage (NAS) server. File storage solutions like Amazon Elastic File System (EFS) are ideal for use cases like large content repositories, development environments, media stores, or user home directories.
3. Block Storage - Other enterprise applications like databases or ERP systems often require dedicated, low latency storage for each host. This is analogous to direct-attached storage (DAS) or a Storage Area Network (SAN). Block-based cloud storage solutions like Amazon Elastic Block Store (EBS) are provisioned with each virtual server and offer the ultra low latency required for high performance workloads.
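To make the object storage model above concrete, here is a minimal sketch of storing a single object with user-defined metadata via the AWS SDK for Python (boto3). The bucket name, key, and metadata values are hypothetical placeholders, not part of the original text:

```python
# Illustrative names: the bucket, key, and metadata below are placeholders.
BUCKET = "example-app-bucket"
KEY = "invoices/2023/inv-0042.pdf"

# Object storage attaches user-defined metadata directly to each object,
# which is what enables later analytics, tiering, and lifecycle rules.
METADATA = {"department": "finance", "retention": "7y"}

def upload(body: bytes) -> None:
    """Store one object with its metadata (requires AWS credentials)."""
    import boto3  # deferred import so the sketch loads without boto3 installed
    boto3.client("s3").put_object(
        Bucket=BUCKET, Key=KEY, Body=body, Metadata=METADATA
    )
```

Note the flat namespace: "invoices/2023/" is not a directory, just a key prefix, which is part of what lets object stores scale without a file-system hierarchy.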
Backup and recovery is a critical part of ensuring data is protected and accessible, but keeping up with increasing capacity requirements can be a constant challenge. Cloud storage brings low cost, high durability, and extreme scale to backup and recovery solutions. Embedded data management policies like Amazon S3 Object Lifecycle Management can automatically migrate data to lower-cost tiers based on frequency or timing settings, and archival vaults can be created to help comply with legal or regulatory requirements. These benefits allow for tremendous scale possibilities within industries such as financial services, healthcare, and media that produce high volumes of data with long-term retention needs.
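The tiering behavior described above is configured declaratively. As one hedged sketch of what an S3 Object Lifecycle Management rule can look like with boto3, the following transitions objects under an illustrative "logs/" prefix to colder storage classes over time and expires them after a year (the bucket name, prefix, and day counts are assumptions for the example):

```python
# Lifecycle rule: move "logs/" objects to Standard-IA after 30 days,
# to Glacier after 90 days, and delete them after 365 days.
lifecycle_config = {
    "Rules": [
        {
            "ID": "archive-old-logs",
            "Filter": {"Prefix": "logs/"},
            "Status": "Enabled",
            "Transitions": [
                {"Days": 30, "StorageClass": "STANDARD_IA"},
                {"Days": 90, "StorageClass": "GLACIER"},
            ],
            "Expiration": {"Days": 365},
        }
    ]
}

def apply_lifecycle(bucket_name: str) -> None:
    """Attach the rule to a bucket (requires AWS credentials)."""
    import boto3  # deferred so the sketch loads without boto3/credentials
    boto3.client("s3").put_bucket_lifecycle_configuration(
        Bucket=bucket_name, LifecycleConfiguration=lifecycle_config
    )
```

Once applied, S3 evaluates the rule automatically; no application code has to move or delete the data itself.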
Software test and development environments often require separate, independent, and duplicate storage environments to be built out, managed, and decommissioned. In addition to the time required, the up-front capital costs can be extensive.
Some of the largest and most valuable companies in the world have created applications in record time by leveraging the flexibility, performance, and low cost of cloud storage. Even the simplest static websites can be improved for an amazingly low cost. Developers all over the world are turning to pay-as-you go storage options that remove management and scale headaches.
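As one example of how little setup a simple static website needs, this sketch enables S3 static website hosting on a bucket via boto3. The bucket name and document keys are illustrative assumptions:

```python
# Website hosting configuration: serve index.html at the root and
# error.html for missing keys. Both document names are placeholders.
website_config = {
    "IndexDocument": {"Suffix": "index.html"},
    "ErrorDocument": {"Key": "error.html"},
}

def enable_website_hosting(bucket_name: str) -> None:
    """Turn on static website hosting (requires AWS credentials)."""
    import boto3  # deferred so the sketch loads without boto3/credentials
    boto3.client("s3").put_bucket_website(
        Bucket=bucket_name, WebsiteConfiguration=website_config
    )
```

With this in place, objects in the bucket are served directly over HTTP, with no web servers to provision or patch.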
The availability, durability, and cost benefits of cloud storage can be very compelling to business owners, but traditional IT functional owners like storage, backup, networking, security, and compliance administrators may have concerns around the realities of transferring large amounts of data to the cloud. Cloud data migration services such as AWS Import/Export Snowball can simplify migrating storage into the cloud by addressing high network costs, long transfer times, and security concerns.
Storing data in the cloud can raise concerns about regulation and compliance, especially if this data is already stored in compliant storage systems. Cloud data compliance controls like Amazon Glacier Vault Lock are designed to ensure that you can easily deploy and enforce compliance controls on individual data vaults via a lockable policy. You can specify controls such as Write Once Read Many (WORM) to lock the data from future edits. Using audit log products like AWS CloudTrail can help you ensure compliance and governance objectives for your cloud-based storage and archival systems are being met.
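A WORM control of the kind described above is expressed as a vault lock policy. The following is a hedged sketch, using boto3's Glacier client, of a policy that denies archive deletion until archives are at least 365 days old; the account ID, region, vault name, and retention period are all placeholder assumptions:

```python
import json

# Vault Lock policy: deny deletes on archives younger than 365 days.
# The ARN below uses a placeholder account ID and vault name.
vault_lock_policy = {
    "Version": "2012-10-17",
    "Statement": [
        {
            "Sid": "deny-early-deletes",
            "Effect": "Deny",
            "Principal": "*",
            "Action": "glacier:DeleteArchive",
            "Resource": "arn:aws:glacier:us-east-1:123456789012:vaults/compliance-vault",
            "Condition": {
                "NumericLessThan": {"glacier:ArchiveAgeInDays": "365"}
            },
        }
    ],
}

def initiate_lock(vault_name: str) -> str:
    """Start the vault lock; returns the lock ID needed to complete it."""
    import boto3  # deferred so the sketch loads without boto3/credentials
    resp = boto3.client("glacier").initiate_vault_lock(
        accountId="-",  # "-" means the credentials' own account
        vaultName=vault_name,
        policy={"Policy": json.dumps(vault_lock_policy)},
    )
    return resp["lockId"]
```

Initiating the lock puts it in an in-progress state; it must then be completed with the returned lock ID, after which the policy becomes immutable.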
Traditional on-premises storage solutions can be inconsistent in their cost, performance, and scalability — especially over time. Big data projects demand large-scale, affordable, highly available, and secure storage pools that are commonly referred to as data lakes.
Data lakes built on object storage keep information in its native form, and include rich metadata that allows selective extraction and use for analysis. Cloud-based data lakes can sit at the center of all kinds of data warehousing, processing, big data, and analytics engines, such as Amazon Redshift, Amazon RDS, Amazon EMR, and Amazon DynamoDB, to help you accomplish your next project in less time and with more relevance.