AWS Official Blog

Now Available: Improved Training Course for AWS Developers

by Jeff Barr | in Training and Certification

My colleague Mike Stroh is part of our sales training team. He wrote the guest post below to introduce you to our newest AWS training courses.

Jeff;

We routinely tweak our 3-day AWS technical training courses to keep pace with AWS platform updates, incorporate learner feedback, and reflect the latest best practices.

Today I want to tell you about some exciting enhancements to Developing on AWS. Whether you’re moving applications to AWS or developing specifically for the cloud, this course can show you how to use the AWS SDK to create secure, scalable cloud applications that tap the full power of the platform.

What’s New
We’ve made a number of updates to the course—most stem directly from the experiences and suggestions of developers who took previous versions of the course. Here are some highlights of what’s new:

  • Additional Programming Language Support – The course’s 8 practice labs now support Java, .NET, Python, and JavaScript (for Node.js and the browser), plus the Windows and Linux operating systems.
  • Balance of Concepts and Code – The updated course expands coverage of key concepts, best practices, and troubleshooting tips for AWS services to help students build a mental model before diving into code. Students then use an AWS SDK to develop apps that apply these concepts in hands-on labs.
  • AWS SDK Labs – Practice labs are designed to emphasize the AWS SDK, reflecting how developers actually work and create solutions. Lab environments now include EC2 instances preloaded with all required programming language SDKs, developer tools, and IDEs. Students can simply log in and start learning!
  • Relevant to More Developers – The additional programming language support helps make the course more useful to both startup and enterprise developers.
  • Expanded Coverage of Developer-Oriented AWS Services – The updated course puts more focus on the AWS services relevant to application development. So there’s expanded coverage of Amazon DynamoDB, plus new content on AWS Lambda, Amazon Cognito, Amazon Kinesis Streams, Amazon ElastiCache, AWS CloudFormation, and others.

Here’s a map that will help you to understand how the course flows from topic to topic:

How to Enroll
For full course details, look over the Developing on AWS syllabus, then find a class near you. To see more AWS technical courses, visit AWS Training & Certification.

Mike Stroh, Content & Community Manager

Amazon WorkSpaces Update – Support for Audio-In, High DPI Devices, and Saved Registrations

by Jeff Barr | in Amazon WorkSpaces

Regular readers of this blog will know that I am a huge fan of Amazon WorkSpaces. In fact, after checking my calendar, I verified that every blog post I have written in the last 10 months has been done from within my WorkSpace. Regardless of my location—office, home, or hotel room—performance, availability, and functionality have all been excellent. Until you have experienced a persistent, cloud-based desktop for yourself you won’t know what you are missing!

Today, I am pleased to be able to tell you about three new features for WorkSpaces, each designed to make the service even more useful:

  • Audio-In – You can now make and receive calls from your WorkSpace using popular communication tools such as Lync, Skype, and WebEx.
  • High DPI Device Support – You can now take advantage of High DPI displays found on devices like the Surface Pro 4 tablet and the Lenovo Yoga laptop.
  • Saved Registration Codes – You can now save multiple registration codes in the same client application.

Audio-In
Being able to make and receive calls from your desktop can boost your productivity. Using the newest WorkSpaces clients for Windows and Mac, you can make and receive calls using popular communication tools like Lync, Skype, and WebEx. Simply connect an analog or USB audio headset to your local client device and start making calls! This functionality is enabled for all newly launched WorkSpaces; existing WorkSpaces may need a restart. With the launch of this feature, voice communication with headsets is available to you at no additional charge in all regions where WorkSpaces are available today.

When a WorkSpace is created using a custom image, the audio-in updates are applied during the provisioning process and will take some time. To avoid this, you (or your WorkSpaces administrator) can create a new custom image after the updates have been applied to an existing WorkSpace.

High DPI Devices
To support the increasing popularity of high DPI (Full HD, Ultra HD, and QHD+) displays, we added the ability to automatically scale the in-session experience of WorkSpaces to match your local DPI settings. This means that fonts and icon sizes will match your preferred settings on high DPI devices, making the WorkSpaces experience more natural. Simply use the newest WorkSpaces clients for Windows and Mac and enjoy this enhancement immediately.

Saved Registration Codes
Many customers access multiple WorkSpaces spread across several directories and/or regions and would prefer not to have to copy and paste registration codes to make the switch. You can now save up to 10 registration codes within the client application, and switch between them with a couple of clicks. You can control all of this through the new Manage Registrations screen:

To learn more about Amazon WorkSpaces, visit the Amazon WorkSpaces page.

Jeff;

New AWS Enterprise Accelerator – Standardized Architecture for NIST 800-53 on the AWS Cloud

by Jeff Barr | in Quick Start, Security

In the early days of AWS, customers were happy to simply learn about the cloud and its benefits. As they started to learn more, the conversation shifted. It went from “what is the cloud” to “what kinds of security does the cloud offer” to “how can I use the cloud” over the course of just 6 or 7 years. As the industry begins to mature, enterprise and government customers are now interested in putting the cloud to use in a form that complies with applicable standards and recommendations.

For example, National Institute of Standards and Technology (NIST) Special Publication 800-53 (Security and Privacy Controls for Federal Information Systems and Organizations) defines a set of information and security controls that are designed to make systems more resilient to many different types of threats. This document is accompanied by a set of certifications, accreditations, and compliance processes.

New Compliance Offerings
In order to simplify the task of building a system that is in accord with compliance standards of this type, we will be publishing a series of AWS Enterprise Accelerator – Compliance Quick Starts. These documents and CloudFormation templates are designed to help Managed Service Organizations, cloud provisioning teams, developers, integrators, and information system security officers.

The new AWS Enterprise Accelerator – Compliance: Standardized Architecture for NIST 800-53 on the AWS Cloud is our first offering in this series!

The accelerator contains a set of nested CloudFormation templates. Deploying the top-level template takes about 30 minutes and creates all of the necessary AWS resources. The resources include three Virtual Private Clouds (VPCs)—Management, Development, and Production—suitable for running a multi-tier Linux-based application.

The template also creates the necessary IAM roles and custom policies, VPC security groups, and the like. It launches EC2 instances and sets up an encrypted, Multi-AZ MySQL database (using Amazon Relational Database Service (RDS)) in the Development and Production VPCs.
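
To give a feel for how the nesting works, here is a minimal sketch (in Python, emitting CloudFormation JSON) of a top-level template that launches one child stack per VPC. The template URLs and stack names are made up for illustration; the actual Quick Start templates are far more detailed.

```python
import json

# Illustrative sketch of a top-level CloudFormation template that nests one
# child stack per VPC tier, similar in spirit to the Quick Start's layout.
# The S3 template URLs below are hypothetical placeholders.
def build_top_level_template(child_template_urls):
    resources = {}
    for name, url in child_template_urls.items():
        resources[f"{name}Stack"] = {
            "Type": "AWS::CloudFormation::Stack",  # nested stack resource
            "Properties": {
                "TemplateURL": url,
                "TimeoutInMinutes": 30,
            },
        }
    return {
        "AWSTemplateFormatVersion": "2010-09-09",
        "Description": "Top-level stack that nests one child stack per VPC",
        "Resources": resources,
    }

template = build_top_level_template({
    "Management": "https://example-bucket.s3.amazonaws.com/management-vpc.template",
    "Development": "https://example-bucket.s3.amazonaws.com/development-vpc.template",
    "Production": "https://example-bucket.s3.amazonaws.com/production-vpc.template",
})
print(json.dumps(template, indent=2))
```

Deploying the top-level template then creates the child stacks (and their resources) as a unit, which is what makes the 30-minute single-deployment experience possible.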

The architecture defined by this template makes use of AWS best practices for security and availability, including the use of a Multi-AZ architecture, isolation of instances between public and private subnets, monitoring & logging, database backup, and encryption.

You also have direct access to the templates. You can download them, customize them, and extract interesting elements for use in other projects.

You can also add the templates for this Quick Start to the AWS Service Catalog as portfolios or as products. This will allow you to institute a centrally managed model, and will help you to support consistent governance, security, and compliance.

Jeff;

 

AWS Podcast – Bob Rogers (Intel Big Data)

by Jeff Barr | in AWS Podcast

For Episode 134 of the AWS podcast, I spoke with Bob Rogers, PhD, Chief Data Scientist for Big Data Solutions at Intel Corporation. We talked about how Bob entered the field of data science, how to get value from data science projects, and some misconceptions around big data. You can listen to the podcast to learn what skills are needed to have a career as a data scientist, and you can also hear Bob’s tips for those looking to become one. Hear what Intel is doing in the big data and analytics space – from the silicon chip to the cloud, and what big data holds for the future.

That’s the last of my 2015 recordings. We’ll be back with more episodes soon. Thanks for listening!

Jeff;

 

PS – Intel asked that we add the following disclaimer:

  • Intel technologies’ features and benefits depend on system configuration and may require enabled hardware, software or service activation. Learn more at intel.com, or from the OEM or retailer.
  • No computer system can be absolutely secure.
  • Statements in this document that refer to Intel’s plans and expectations for the quarter, the year, and the future, are forward-looking statements that involve a number of risks and uncertainties. A detailed discussion of the factors that could affect Intel’s results and plans is included in Intel’s SEC filings, including the annual report on Form 10-K.
  • Intel and the Intel logo are trademarks of Intel Corporation in the United States and/or other countries.

EMR 4.3.0 – New & Updated Applications + Command Line Export

by Jeff Barr | in Amazon EMR

My colleague Jon Fritz wrote the blog post below to introduce you to some new features of Amazon EMR.

— Jeff;


Today we are announcing Amazon EMR release 4.3.0, which adds support for Apache Hadoop 2.7.1, Apache Spark 1.6.0, Ganglia 3.7.2, and a new sandbox release for Presto (0.130). We have also enhanced our maximizeResourceAllocation setting for Spark and added an AWS CLI Export feature to generate a create-cluster command from the Cluster Details page in the AWS Management Console.

New Applications in Release 4.3.0
Amazon EMR provides an easy way to install and configure distributed big data applications in the Hadoop and Spark ecosystems on managed clusters of Amazon EC2 instances. You can create Amazon EMR clusters from the Amazon EMR Create Cluster Page in the AWS Management Console, the AWS Command Line Interface (CLI), or using an SDK with the EMR API. In the latest release, we added support for several new versions of the following applications:

  • Spark 1.6.0 – Spark 1.6.0 was released on January 4th by the Apache Foundation, and we’re excited to include it in Amazon EMR within four weeks of open source GA. This release includes several new features like compile-time type safety using the Dataset API (SPARK-9999), machine learning pipeline persistence using the Spark ML Pipeline API (SPARK-6725), a variety of new machine learning algorithms in Spark ML, and automatic memory management between execution and cache memory in executors (SPARK-10000). View the release notes or learn more about Spark on Amazon EMR.
  • Presto 0.130 – Presto is an open-source, distributed SQL query engine designed for low-latency queries on large datasets in Amazon S3 and HDFS. This is a minor version release, with optimizations to SQL operations and support for S3 server-side and client-side encryption in the PrestoS3Filesystem. View the release notes or learn more about Presto on Amazon EMR.
  • Hadoop 2.7.1 – This release includes improvements to and bug fixes in YARN, HDFS, and MapReduce. Highlights include enhancements to FileOutputCommitter to increase performance of MapReduce jobs with many output files (MAPREDUCE-4814) and adding support in HDFS for truncate (HDFS-3107) and files with variable-length blocks (HDFS-3689). View the release notes or learn more about Amazon EMR.
  • Ganglia 3.7.2 – This release includes new features such as building custom dashboards using Ganglia Views, setting events, and creating new aggregate graphs of metrics. Learn more about Ganglia on Amazon EMR.

Enhancements to the maximizeResourceAllocation Setting for Spark
Currently, Spark on your Amazon EMR cluster uses the Apache defaults for Spark executor settings, which are 2 executors with 1 core and 1GB of RAM each. Amazon EMR provides two easy ways to instruct Spark to utilize more resources across your cluster. First, you can enable dynamic allocation of executors, which allows YARN to programmatically scale the number of executors used by each Spark application, and adjust the number of cores and RAM per executor in your Spark configuration. Second, you can specify maximizeResourceAllocation, which automatically sets the executor size to consume all of the resources YARN allocates on a node and the number of executors to the number of nodes in your cluster (at creation time). These settings create a way for a single Spark application to consume all of the available resources on a cluster. In release 4.3.0, we have enhanced this setting by automatically increasing the Apache defaults for driver program memory based on the number of nodes and node types in your cluster (more information about configuring Spark).
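The arithmetic behind this setting can be sketched as follows. This is a simplified model, not EMR’s actual allocation logic: it assumes one executor per core node, sized to the memory and vcores YARN offers on that node, with a fraction held back for executor memory overhead. All numbers are illustrative.

```python
# Simplified sketch of how a maximizeResourceAllocation-style setting could
# size Spark executors: one executor per node, each consuming all of the
# memory and cores YARN allocates on that node. This is a rough model for
# illustration, not EMR's real implementation.
def max_resource_allocation(node_count, yarn_memory_mb_per_node,
                            vcores_per_node, memory_overhead_fraction=0.10):
    # Reserve a fraction of YARN memory for executor memory overhead.
    executor_memory_mb = int(yarn_memory_mb_per_node * (1 - memory_overhead_fraction))
    return {
        "spark.executor.instances": node_count,        # one executor per node
        "spark.executor.cores": vcores_per_node,       # all cores on the node
        "spark.executor.memory": f"{executor_memory_mb}M",
    }

# Example: a 4-node cluster with 11,520 MB of YARN memory and 4 vcores per node.
conf = max_resource_allocation(4, 11520, 4)
```

Contrast this with the Apache defaults mentioned above (2 executors, 1 core, 1 GB each), which leave most of a cluster idle.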

AWS CLI Export in the EMR Console
You can now generate an EMR create-cluster command representative of an existing cluster with a 4.x release using the AWS CLI Export option on the Cluster Details page in the AWS Management Console. This allows you to quickly create a cluster using the Create Cluster experience in the console, and easily generate the AWS CLI script to recreate that cluster from the AWS CLI.
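To give a feel for what the export produces, here is a sketch that assembles a create-cluster command from a few cluster attributes. The flag names are real AWS CLI options, but the cluster values are made up, and the console’s actual export includes many more options (instance groups, configurations, and so on).

```python
# Sketch of assembling an "aws emr create-cluster" command like the one the
# console's AWS CLI Export option generates. Flag names are real CLI options;
# the cluster name, applications, and sizes are illustrative.
def build_create_cluster_command(name, release_label, applications,
                                 instance_type, instance_count):
    app_args = " ".join(f"Name={app}" for app in applications)
    return (
        "aws emr create-cluster "
        f'--name "{name}" '
        f"--release-label {release_label} "
        f"--applications {app_args} "
        f"--instance-type {instance_type} "
        f"--instance-count {instance_count} "
        "--use-default-roles"
    )

command = build_create_cluster_command(
    "My Spark Cluster", "emr-4.3.0", ["Hadoop", "Spark", "Ganglia"],
    "m3.xlarge", 3)
print(command)
```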

 

Launch an Amazon EMR Cluster with Release 4.3.0 Today
To create an Amazon EMR cluster with 4.3.0, select release 4.3.0 on the Create Cluster page in the AWS Management Console, or use the release label emr-4.3.0 when creating your cluster from the AWS CLI or using an SDK with the EMR API.

Jon Fritz, Senior Product Manager, Amazon EMR

 

AWS Marketplace – Support for the Asia Pacific (Seoul) Region

by Jeff Barr | in AWS Marketplace

Early in my career I worked for several companies that developed and shipped (on actual tapes) packaged software. Back in those pre-Internet days, marketing, sales, and distribution were all done on a country-by-country basis. This often involved setting up a field office and hiring local staff, both of which were expensive, time-consuming, and somewhat speculative. Providing prospective customers with time-limited access to trial copies was also difficult for many reasons including hardware and software compatibility, procurement & licensing challenges, and all of the issues that would inevitably arise during installation and configuration.

Today, the situation is a lot different. Marketing, sales, and distribution are all a lot simpler and more efficient, thanks to the Internet. For example, AWS Marketplace has streamlined the procurement process. With ready access to a very wide variety of commercial and open source software products from ISVs, customers can find what they want, buy it, and deploy it to AWS in minutes, with just a few clicks. Because many of the products in AWS Marketplace include a free trial and/or an hourly pricing option, potential large-scale users can take the products for a spin and make sure that they will satisfy their needs.

Support for the Asia Pacific (Seoul) Region
Now that the new Asia Pacific (Seoul) Region is up and running, customers located in Korea, as well as global companies serving Korean end users, can take advantage of the AWS Marketplace. There are now more than 600 products available for 1-click deploy in categories such as Network Infrastructure, Security, Storage, and Business Intelligence.

These products are available under several different pricing plans including free, hourly, monthly, and annual. For companies that already own applicable licenses for the desired products, a BYOL (Bring Your Own License) option is also available.

As I write this, more than 150 products are available for free trials in the Asia Pacific (Seoul) Region!

Several Korean ISVs have already listed their products on AWS Marketplace. Here’s a sampling:

ISV Opportunities
If you are a software vendor or developer and would like to list your products in AWS Marketplace, please take a look at the Sell on AWS Marketplace information. Customers will be able to launch your products in minutes and pay for them as part of the regular AWS billing system. As a vendor of products that are available in AWS Marketplace, you will be able to discover new customers and benefit from a shorter sales cycle.

Jeff;

 

AWS Week in Review – January 18, 2016

by Jeff Barr | in Week in Review

Let’s take a quick look at what happened in AWS-land last week:

Monday, January 18

Tuesday, January 19

Wednesday, January 20

Thursday, January 21

Friday, January 22

Saturday, January 23

New & Notable Open Source

  • lambda-ec2-switch-timer can automatically stop and start Amazon EC2 instances using AWS Lambda.
  • kinesis-deaggregation is a set of AWS Lambda modules for working with the Kinesis Producer Library.
  • slackBot is a Slack bot built with AWS Lambda.
  • aws-lambda-scala-example-product is an AWS Lambda function in Scala reading events from Amazon Kinesis and writing event counts to DynamoDB.
  • popeye will generate an authorized_keys file from users stored in AWS IAM.
  • AwsProxy is a proxy for the AWS SDK for PHP.
  • aws-s3-mount can mount an s3 folder into a container and export it as a volume.
  • backbeam-lambda is a set of development tools for creating web applications based on AWS Lambda.
  • rom-dynamo is an AWS DynamoDB adapter for Ruby Object Mapper.
  • lambda-refarch-iotbackend is an AWS Lambda reference architecture for creating an IoT backend.

New SlideShare Presentations

New Customer Success Stories

  • FINRA – By migrating to AWS, FINRA—the Financial Industry Regulatory Authority—has created a flexible platform that can adapt to changing market dynamics while providing its analysts with the tools to interactively query multi-petabyte data sets.
  • Redfin – By using AWS, Redfin can innovate quickly and cost effectively with a small IT staff while managing billions of property records.
  • Robinhood – Robinhood’s lean staff, including just two DevOps people, used AWS to create a massively scalable securities trading app with strong built-in security and compliance features that supported hundreds of thousands of users at launch.
  • Zynga – By returning to AWS, Zynga is gaining greater agility, lower costs, and the freedom to experiment with new solutions to deliver world-class game experiences.

New YouTube Videos

Upcoming Events

Upcoming Events at the AWS Loft (San Francisco)

Upcoming Events at the AWS Loft (New York)

Help Wanted

Stay tuned for next week! In the meantime, follow me on Twitter and subscribe to the RSS feed.

Jeff;

AWS – Ready to Weather the Storm

by Jeff Barr | in Architecture, Customer Success

As people across the Northeastern United States are stocking their pantries and preparing their disaster supply kits, AWS is also preparing for winter snow storms and the subsequent hurricane season. After fielding several customer requests for information about our preparation regime, my colleagues Brian Beach and Ilya Epshteyn wrote the following guest post in order to share some additional information.

Jeff;

AWS takes extensive precautions to help ensure that we will remain fully operational, with no loss of service for our hosted applications even during a major weather event or natural disaster.  How reliable is an application hosted by AWS? In 2014, Nucleus Research surveyed 198 AWS customers that reported moving existing workloads from on-premises to AWS and found that they were able to reduce unplanned downtime by 32% (see Availability and Reliability in the Cloud: Amazon Web Services for more info).

AWS replicates critical system components across multiple Availability Zones to ensure high availability both under normal circumstances and during disasters such as fires, tornadoes, or floods.  Our services are available to customers from 12 regions in the United States, Brazil, Europe, Japan, Singapore, Australia, Korea, and China with 32 Availability Zones.  Each Availability Zone runs on its own independent infrastructure, engineered to be highly reliable so that even extreme disasters or weather events should only affect a single Availability Zone. The datacenters’ electrical power systems are designed to be fully redundant and maintainable without impact to operations. Common points of failure, such as generators, UPS units, and air conditioning, are not shared across Availability Zones.

At AWS, we plan for failure by maintaining contingency plans and regularly rehearsing our responses. In the words of Werner Vogels, Amazon’s CTO: “Everything fails, all the time.”  We regularly perform preventative maintenance on our generators and UPS units to ensure that the equipment is ready when needed.  We also maintain a series of incident response plans covering both common and uncommon events and update them regularly to incorporate lessons learned and prepare for emerging threats.  In the days leading up to a known event such as a hurricane, we make preparations such as increasing fuel supplies, updating staffing plans, and adding provisions like food and water to ensure the safety of the support teams.  Once it is clear that a storm will impact a specific region, the response plan is executed and we post updates to the Service Health Dashboard throughout the event.

During Hurricane Sandy—the most destructive hurricane of the 2012 Atlantic hurricane season, and the second-costliest hurricane in United States history— AWS remained online throughout the entire storm. An extensive Hurricane Sandy Response Plan, including 24/7 staffing by all service teams, escalation plans and continuous status updates, assured normal operations and service quality for our customers.

In fact, AWS’s highly reliable platform also played a key role in enabling a more effective storm response.  A&T Systems (ATS.com), an AWS Advanced Consulting partner, used AWS in support of a statewide emergency management agency as Hurricane Sandy struck.  Another AWS customer, MapBox, provided maps for several storm-related services to help predict and track Sandy’s progression, communicate evacuation plans, and track surges.

In the aftermath of the storm, some companies established operations in the AWS Cloud to replace datacenters lost to flooding and power outages. One such example is NYU’s Langone Medical Center. As noted in the article Still Recovering from Sandy, “…NYU researchers [were] able to push forward with their sequencing experiments. They were able to salvage 200 terabytes of backup sequencing data, and have set up temporary data storage in a New Jersey facility, using computing power from the NYU Center for Genomics and Systems Biology and the Amazon cloud.”

What’s even more interesting is that AWS provided a unique capability for our customers to prepare for worst case scenarios by copying and replicating their data to other AWS regions proactively.  Although ultimately this was not necessary, since US East (Northern Virginia) stayed up without any issues, our customers had peace of mind that they would be able to continue their business as usual even if it did fail.  One example is the Obama 2012 Campaign: in a nine-hour period, they proactively replicated their entire environment from the US East (Northern Virginia) to the US West (Northern California) region, providing cross-continent fault tolerance on demand.  The Obama campaign was able to copy over 27 terabytes of data from East to West in less than four hours (watch the re:Invent video, Continuous Integration and Deployment Best Practices on AWS, to learn more).  Leo Zhadanovsky, a DevOps engineer for the Obama Campaign & Democratic National Committee, who now works for AWS, commented that “AWS’s scalable, on-demand capacity allowed Obama for America to quickly spin up a disaster-recovery copy of their infrastructure in another region in a matter of hours — something that would normally take weeks, or months, in an on-premises environment.”

While AWS goes to great lengths to provide availability of the cloud, our customers share responsibility for ensuring availability within the cloud.  These customers and others like them have succeeded because they designed for failure and have adopted best practices for high availability, such as taking advantage of multiple Availability Zones and configuring Auto Scaling groups to replace unhealthy instances. The Building Fault-Tolerant Applications on AWS whitepaper is a great introduction to achieving high availability in the cloud. In addition, the AWS Well-Architected Framework codifies the experiences of thousands of customers, helping customers assess and improve their cloud-based architectures and mitigate disruptions.

As winter storms threaten the East Coast, AWS customers can rest assured that our Services and Availability Zones provide the most solid foundation upon which to build a reliable application. Together, we can build a highly available and resilient application in the cloud, ready to weather the storm.

Brian Beach (Cloud Architect) and Ilya Epshteyn (Solutions Architect)

 

New – AWS Certificate Manager – Deploy SSL/TLS-Based Apps on AWS

by Jeff Barr | in AWS Certificate Manager, Security

I am fascinated by things that are simple on the surface and complex underneath! For example, consider the popular padlock icon that is used to signify that traffic to and from a web site is encrypted:

How does the browser know that it should display the green padlock? Well, that’s quite the story! It all starts with a digital file known as an SSL/TLS certificate.  This is an electronic document that is used to establish identity and trust between two parties. In this case, the two parties are the web site and the web browser.

SSL/TLS is a must-have whenever sensitive data is moved back and forth. For example, sites that need to meet compliance requirements such as PCI-DSS, FedRAMP, and HIPAA make extensive use of SSL/TLS.

Certificates are issued to specific domains by Certificate Authorities, also known as CAs. When you want to obtain a certificate for your site, the CA will confirm that you are responsible for the domain. Then it will issue a certificate that is valid for a specific amount of time, and only for the given domain (subdomains are allowed). Traditionally, you were also responsible for installing the certificate on your system, tracking expiration dates, and getting fresh certificates from time to time (typically, certificates are valid for a period of 12 months).

Each certificate is digitally signed; this allows the browser to verify that it was issued by a legitimate CA. To be a bit more specific, browsers start out with a small, predefined list of root certificates and use them to verify that the other certificates can be traced back to the root. You can access this information from your browser:
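The chain-of-trust idea can be sketched with a toy model. Here each “certificate” is just a subject/issuer pair, and verification walks issuer links until it reaches a trusted root; real browsers additionally check cryptographic signatures, validity periods, and revocation. All names below are made up.

```python
# Toy model of walking a certificate chain back to a trusted root.
# Real verification also checks signatures, expiration, and revocation;
# here we only follow subject -> issuer links for illustration.
def chains_to_root(leaf_subject, certs, trusted_roots, max_depth=5):
    by_subject = {c["subject"]: c for c in certs}
    subject = leaf_subject
    for _ in range(max_depth):
        cert = by_subject.get(subject)
        if cert is None:
            return False          # broken chain: issuer cert not present
        issuer = cert["issuer"]
        if issuer in trusted_roots:
            return True           # reached a root the browser already trusts
        subject = issuer          # climb one link up the chain
    return False                  # chain too long; give up

certs = [
    {"subject": "www.example.com", "issuer": "Example Intermediate CA"},
    {"subject": "Example Intermediate CA", "issuer": "Example Root CA"},
]
ok = chains_to_root("www.example.com", certs, {"Example Root CA"})
```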

As you can probably see from what I have outlined above (even though I have hand-waved past a lot of interesting details), provisioning and managing SSL/TLS certificates can entail a lot of work, far too much of it manual and not easily automated. In many cases you also need to pay an annual fee for each certificate.

Time to change that!

New AWS Certificate Manager
The new AWS Certificate Manager (ACM) is designed to simplify and automate many of the tasks traditionally associated with management of SSL/TLS certificates. ACM takes care of the complexity surrounding the provisioning, deployment, and renewal of digital certificates! Certificates provided by ACM are verified by Amazon’s certificate authority (CA), Amazon Trust Services (ATS).

Even better, you can do all of this at no extra cost. SSL/TLS certificates provisioned through AWS Certificate Manager are free!

ACM will allow you to start using SSL in a matter of minutes. After you request a certificate, you can deploy it to your Elastic Load Balancers and your Amazon CloudFront distributions with a couple of clicks. After that, ACM can take care of the periodic renewals without any action on your part.
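For those who prefer the API route, ACM exposes a RequestCertificate operation. Here is a sketch of the parameters you would build to secure a naked domain plus all of its first-level subdomains (via a wildcard); the domain name is illustrative, and in practice you would pass these parameters to an AWS SDK client.

```python
# Sketch of the parameters for ACM's RequestCertificate API operation,
# covering a naked domain and all first-level subdomains via a wildcard
# subject alternative name. The domain below is illustrative.
def certificate_request(domain):
    return {
        "DomainName": domain,                        # the "naked" domain
        "SubjectAlternativeNames": [f"*.{domain}"],  # first-level subdomains
    }

params = certificate_request("example.com")
```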

Provisioning and Deploying a Certificate
Let’s step through the process of provisioning and deploying a digital certificate using the console (APIs are also available). I’ll use one of my own domains (jeff-barr.com) for this exercise. I start by opening the AWS Certificate Manager Console and clicking on Get started.

Then I enter the domain name of the site that I want to secure. In this case I want to secure the “naked” domain and all of the first-level sub-domains within it:

Then I review my request and confirm my intent:

I flip over to my inbox, find the email or emails (one per domain) from Amazon (certificates.amazon.com), and click on Amazon Certificate Approvals:

I visit the site and click on I Approve:

And that’s all it takes! The certificate is now visible in the console:

Deploying the Certificate
After the certificate is issued, I can deploy it to my Elastic Load Balancers and/or CloudFront distributions.

Because ELB supports SSL offload, deploying a certificate to a load balancer (rather than to the EC2 instances behind it) will reduce the amount of encryption and decryption work that the instances need to handle.

And for a CloudFront distribution:

Available Now
AWS Certificate Manager (ACM) is available now in the US East (Northern Virginia) region, with additional regions in the works. You can provision, deploy, and renew certificates at no charge.

We plan to add support for other AWS services and for other types of domain validation. As always, your suggestions and feedback are more than welcome and will help us to prioritize our work.

If you are using AWS Elastic Beanstalk, take a look at Enabling SSL/TLS (for free) via AWS Certificate Manager.

Jeff;

120 Uses for Your Empty Data Center

by Jeff Barr | in Fun

It is big. It is cold. It is secure. And now it is empty, because you have gone all-in to the AWS Cloud. So, what do you do with your data center? Once the pride and joy of your IT staff, it is now a stark, expensive reminder that the world has changed.

Many AWS customers are migrating from their existing on-premises data centers to AWS. Here are just a few of their stories (links go to case studies, blog posts, and videos from re:Invent):

  • Delaware North – This peanut and popcorn vendor reduced their server footprint by more than 90% and expects to save over $3.5 million in IT acquisition and maintenance costs over five years by using AWS.
  • Seaco – This global sea-container leasing company implemented the SAP Business Suite on AWS and reduced latency by more than 90%.
  • Kaplan – This education and test prep company moved a set of development, test, staging, and production environments that once spanned 12 separate data centers to AWS, eliminating 8 of the 12 in the process.
  • Talen Energy – As part of a divestiture, this nuclear power company decided to move to AWS and found that they were able to focus more of their energy on their core business.
  • Condé Nast – This well-known publisher migrated over 500 servers and 1 petabyte of storage to AWS and went all-in.
  • Hearst Corporation – This diversified communications company migrated 10 of their 29 data centers to AWS.
  • University of Notre Dame – This university has already migrated its web site to AWS and plans to move 80% of the remaining workloads in the next three years.
  • Capital One – This finance company has made AWS a central part of its technology strategy.
  • General Electric – This diversified company is migrating more than 9,000 workloads to AWS, while closing 30 out of 34 data centers.

If you are curious, here’s what happens when a data center closes down: [video]

Now What?
You may be wondering what you are supposed to do with all of that cold, empty space after your migration!

With generous contributions from my colleagues (they did 85% of the work), I have compiled a list of 120 possible uses for your data center. For your reading pleasure, I have arranged them by category. As you can see, my colleagues are incredibly imaginative! There is some overlap here, but I didn’t want to play favorites. So, here you go…

Sports and Recreation

  1. Ice hockey rink.
  2. Whirlyball arena.
  3. Go-kart track.
  4. Snowshoe practice area.
  5. Laser tag.
  6. Paintball arena.
  7. Sweat lodge.
  8. Hot yoga studio.
  9. Immersive VR gaming arena.
  10. Largest paper football game. Ever.
  11. Portal arena.
  12. Paint walls black. Turn off lights. Dress in black. Water balloon fight with fluorescent paint.
  13. Extreme weather survival training.
  14. Giant domino rally.
  15. Venue for world’s longest paper airplane flight.
  16. twitch.tv live video game championship arenas.
  17. Bubble football arena.
  18. World’s largest ball pit.
  19. Ultimate LAN party room.
  20. Indoor lazy river.
  21. Indoor ski resort.
  22. Shooting range.
  23. All-weather theme park – “Datacenter Land.”
  24. Segway racing track.
  25. Indoor surf park.
  26. Massive spinning studio.
  27. Create the World Trampoline Wallyball League (WTWL).
  28. Ultimate Boda Borg quest challenge.
  29. Indoor hang gliding wind tunnel.
  30. 48 state-of-the-art gyms.
  31. Indoor dog park.
  32. Zamboni driver training facility.

Food and Beverages

  1. A large, cold area is perfect for growing and preserving food.
  2. Meat locker / meat packing facility.
  3. Popsicle factory.
  4. Wine cellar.
  5. Penguin sanctuary.
  6. Mushroom farm.
  7. Cheese grotto.
  8. Practice area for growing potatoes on Mars.
  9. World’s largest Easy Bake Oven.

Eminently Practical

  1. Classroom.
  2. Electric car charging station.
  3. Storm shelter.
  4. Drone zone.
  5. Giant pencil case.
  6. Bomb shelter.
  7. Car wash.
  8. Community theater.
  9. Art gallery.
  10. Secure storage for all the money saved by using AWS.
  11. Cloud University.
  12. Solar power generation plant.
  13. 48 really large Airbnb opportunities.
  14. Maker’s den.

Space, the Final Frontier

  1. Blimp hangar with “Moffett Field, You Got Nothing on Me!” painted on the side.
  2. UFO storage.
  3. Fill with water and use as a NASA zero-gravity training facility.
  4. Time portal for the Restaurant at the end of the Internet.
  5. Blue Origin space terminal with interplanetary duty free zone.
  6. Use racks as studio apartments in San Francisco. Rent at $5000/month.

Dead or Alive

  1. Homeless shelter.
  2. Orphanage / charity home.
  3. Morgue.
  4. Cryogenic human storage.
  5. Zombie apocalypse refuge.
  6. Cold therapy spa.
  7. Rehabilitation center.
  8. Snowman preservation facility.

Just Plain Weird

  1. Unicorn farm.
  2. Mattress testing facility.
  3. Biohazard isolation area.
  4. Stress-relief shattering emporium.
  5. Grow operation (where permitted by law).
  6. Military parade ground.
  7. Venue for all 2016 US presidential debates.
  8. Super ball testing facility.
  9. Corporate meditation center.
  10. Duck echo testing facility.
  11. Automated paper making factory for paper towels and toilet paper.
  12. Tour facility for the world’s largest ball of Ethernet or fiber-optic cable.
  13. Robot cat toy factory for Ethernet / fiber cable yarn balls.
  14. Storage for recently unearthed E.T. The Extra-Terrestrial game cartridges.
  15. Giant sensory deprivation tank.
  16. Biosphere 3.
  17. “Can you hear me now?” test facility.

TV and Movies

  1. Derek Zoolander Center for Kids Who Can’t Read Good.
  2. Battle of Hoth reenactment.
  3. Trash compactor where we can put people who reveal Star Wars spoilers.
  4. American Gladiator or American Ninja Warrior arena.
  5. Hangar for the Death Star.
  6. Fill it with water. Re-enact Finding Nemo.
  7. Take old printers and re-enact scene from Office Space. Call it a Silent Meditation Retreat.
  8. Erect transparent aluminum walls, fill with water, store whales for when aliens come to contact them.
  9. Raiders of the Lost Ark warehouse.
  10. Training center for the Knights of Ren.
  11. Mythbusters science lab.
  12. Top Gear secret race track.
  13. Mad Max: Server Room Rampage.
  14. High security storage facility for broken down Daleks.
  15. TARDIS repair center for all things wibbly wobbly or timey wimey.

Uniquely Amazonian

  1. Venue for next re:Play party.
  2. Amazon fulfillment center.
  3. Alternate venue for re:Invent 2017.
  4. AWS Import/Export Snowball processing facility.
  5. Amazon Fresh greenhouse.
  6. Brewery for the Amazon Simple Beer Service.
  7. Amazon Locker site.
  8. Actual cloud storage (leave the A/C on and pipe in some steam).

It’s Dead, Jim

  1. Electronics shredding center.
  2. Warehouse for refurbishing decommissioned corporate computers and servers for deployment to underprivileged schools worldwide.
  3. Venue to host auction for empty data centers.
  4. Server to host the auction website for empty data centers.
  5. Museum of data center history.
  6. “Co-location Data Center” – National Trust for Historic Preservation.
  7. Outreach center for all those IT admins who claimed they would never go all-in on the cloud.
  8. Retro-style storage for paper files.
  9. Data center resort with gardens grown on servers.
  10. Buggy whip factory.
  11. Museum of technology history.

Jeff;


PS – Please feel free to leave suggestions for additional uses in the comments.