AWS Blog

Amazon ElastiCache Update – Export Redis Snapshots to Amazon S3

by Jeff Barr | in Amazon ElastiCache, Amazon S3

Amazon ElastiCache supports the popular Memcached and Redis in-memory caching engines. While Memcached is generally used to cache results from a slower, disk-based database, Redis is used as a fast, persistent key-value store. It uses replicas and failover to support high availability, and natively supports the use of structured values.

Today I am going to focus on a helpful new feature that will be of interest to Redis users. You already have the ability to create snapshots of a running Cache Cluster. These snapshots serve as a persistent backup, and can be used to create a new Cache Cluster that is already loaded with data and ready to go. As a reminder, here’s how you create a snapshot of a Cache Cluster:
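
If you prefer to script this step rather than use the console, a minimal boto3 sketch would look roughly like the following; the cluster ID and snapshot name are hypothetical:

import boto3

# Snapshot an existing cache cluster ("my-redis-cluster" and
# "my-redis-backup" are hypothetical names).
elasticache = boto3.client("elasticache", region_name="us-east-1")

response = elasticache.create_snapshot(
    CacheClusterId="my-redis-cluster",
    SnapshotName="my-redis-backup",
)
print(response["Snapshot"]["SnapshotStatus"])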

You can now export your Redis snapshots to an S3 bucket. The bucket must be in the same Region as the snapshot and you need to grant ElastiCache the proper permissions (List, Upload/Delete, and View Permissions) on it. We envision several uses for this feature:

Disaster Recovery – You can copy the snapshot to another environment for safekeeping.

Analysis – You can dissect and analyze the snapshot in order to understand usage patterns.

Seeding – You can use the snapshot to seed a fresh Redis Cache Cluster in another Region.

Exporting a Snapshot
To export a snapshot, simply locate it, select it, and click on Copy Snapshot:

Verify the permissions on the bucket (read Exporting Your Snapshot to learn more):

Then enter a name and select the desired bucket:

ElastiCache will export the snapshot and it will appear in the bucket:

The file is a standard Redis RDB file, and can be used as such.

You can also exercise this same functionality from your own code or via the command line. Your code can call CopySnapshot while specifying the target S3 bucket. Your scripts can use the copy-snapshot command.
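
For example, here is a minimal boto3 sketch of the CopySnapshot call; the snapshot names and bucket are hypothetical, and the bucket must already have the permissions described above:

import boto3

elasticache = boto3.client("elasticache", region_name="us-east-1")

# Export an existing snapshot to an S3 bucket in the same Region.
# "my-redis-backup" and "my-export-bucket" are hypothetical names.
response = elasticache.copy_snapshot(
    SourceSnapshotName="my-redis-backup",
    TargetSnapshotName="my-redis-backup-export",
    TargetBucket="my-export-bucket",
)
print(response["Snapshot"]["SnapshotName"])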

This feature is available now and you can start using it today! There’s no charge for the export; you’ll pay the usual S3 storage charges.

Jeff;

 

Amazon Elastic Transcoder Update – Support for MPEG-DASH

by Jeff Barr | in Amazon Elastic Transcoder

Amazon Elastic Transcoder converts media files (audio and video) from one format to another. The service is robust, scalable, cost-effective, and easy to use. You simply create a processing pipeline (pointing to a pair of S3 buckets for input and output in the process), and then create transcoding jobs. Each job reads a specific file from the input bucket, transcodes it to the desired format(s) as specified in the job, and then writes the output to the output bucket. You pay for only what you transcode, with price points for Standard Definition (SD) video, High Definition (HD) video, and audio. We launched the service with support for an initial set of transcoding presets (combinations of output formats and relevant settings). Over time, in response to customer demand and changes in encoding technologies, we have added additional presets and formats. For example, we added support for the VP9 Codec earlier this year.

Support for MPEG-DASH
Today we are adding support for transcoding to the MPEG-DASH format. This International Standard format supports high-quality audio and video streaming from HTTP servers, and has the ability to adapt to changes in available network throughput using a technique known as adaptive streaming. It was designed to work well across multiple platforms and at multiple bitrates, simplifying the transcoding process and sidestepping the need to create output in multiple formats.

During the MPEG-DASH transcoding process, the content is transcoded into segmented outputs at the different bitrates, and a playlist is created that references these outputs. The client (most often a video player) downloads the playlist to initiate playback, then monitors the effective network bandwidth and latency, requesting video segments as needed. If network conditions change during playback, the player will upshift or downshift accordingly.
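
As a rough illustration of the adaptive logic (this is not the algorithm used by any particular player), a client might pick the highest rendition whose bitrate fits within a safety margin of the measured throughput:

# Toy bitrate selection; the rendition bitrates are illustrative.
RENDITIONS_KBPS = [600, 1200, 2400, 4800]

def pick_rendition(measured_throughput_kbps, safety_factor=0.8):
    budget = measured_throughput_kbps * safety_factor
    candidates = [r for r in RENDITIONS_KBPS if r <= budget]
    return max(candidates) if candidates else min(RENDITIONS_KBPS)

print(pick_rendition(3000))  # selects the 2400 kbps rendition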

You can serve up the transcoded content directly from S3 or you can use Amazon CloudFront to get the content even closer to your users. Either way, you need to create a CORS policy that looks like this:

<?xml version="1.0" encoding="UTF-8"?>
<CORSConfiguration xmlns="http://s3.amazonaws.com/doc/2006-03-01/">
    <CORSRule>
        <AllowedOrigin>*</AllowedOrigin>
        <AllowedMethod>GET</AllowedMethod>
        <MaxAgeSeconds>3000</MaxAgeSeconds>
        <AllowedHeader>*</AllowedHeader>
    </CORSRule>
</CORSConfiguration>
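
If you manage bucket configuration from code, the same policy can be applied with a call along these lines; the bucket name is hypothetical, and boto3 expresses the rules as JSON rather than XML:

import boto3

s3 = boto3.client("s3")

# Apply the CORS rules shown above to a (hypothetical) bucket.
s3.put_bucket_cors(
    Bucket="my-dash-output-bucket",
    CORSConfiguration={
        "CORSRules": [
            {
                "AllowedOrigins": ["*"],
                "AllowedMethods": ["GET"],
                "MaxAgeSeconds": 3000,
                "AllowedHeaders": ["*"],
            }
        ]
    },
)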

If you are using CloudFront, you need to enable the OPTIONS method, and allow it to be cached:

You also need to add three headers to the whitelist for the distribution:

Transcoding With MPEG-DASH
To make use of the adaptive bitrate feature of MPEG-DASH, you create a single transcoding job and specify multiple outputs, each with a different preset. Here are your choices (4 for video and 1 for audio):

When you use this format, you also need to choose a suitable segment duration (in seconds). A shorter duration produces a larger number of smaller segments and allows the client to adapt to changes more quickly.

You can create a single playlist that contains all of the bitrates, or you can choose the bitrates that are most appropriate for your customers and your content. You can also create your own presets, using an existing one as a starting point:
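
As a sketch of what such a job might look like in code, here is a hedged boto3 example; the pipeline ID, output keys, and preset IDs are placeholders (the real MPEG-DASH system preset IDs are listed in the Elastic Transcoder console and documentation):

import boto3

transcoder = boto3.client("elastictranscoder", region_name="us-east-1")

# One job, multiple MPEG-DASH outputs at different bitrates, plus a playlist
# that references them. The pipeline ID and preset IDs are placeholders.
response = transcoder.create_job(
    PipelineId="1111111111111-abcde1",
    Input={"Key": "inputs/my-video.mp4"},
    OutputKeyPrefix="dash/my-video/",
    Outputs=[
        {"Key": "video-2400k", "PresetId": "PLACEHOLDER-DASH-2400K", "SegmentDuration": "10"},
        {"Key": "video-1200k", "PresetId": "PLACEHOLDER-DASH-1200K", "SegmentDuration": "10"},
        {"Key": "audio-128k", "PresetId": "PLACEHOLDER-DASH-AUDIO", "SegmentDuration": "10"},
    ],
    Playlists=[
        {
            "Name": "index",
            "Format": "MPEG-DASH",
            "OutputKeys": ["video-2400k", "video-1200k", "audio-128k"],
        }
    ],
)
print(response["Job"]["Id"])

The SegmentDuration value in each output corresponds to the segment duration choice described above.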

Available Now
MPEG-DASH support is available now in all Regions where Amazon Elastic Transcoder is available. There is no extra charge for the use of this format (see Elastic Transcoder Pricing to learn more).

Jeff;

 

Amazon Redshift – Up to 2X Throughput and 10X Vacuuming Performance Improvements

by Jeff Barr | in Amazon Redshift

My colleague Maor Kleider wrote today’s guest post!

Jeff;

Amazon Redshift, AWS’s fully managed data warehouse service, makes petabyte-scale data analysis fast, cheap, and simple. Since launch, it has been one of AWS’s fastest growing services, with many thousands of customers across many industries. Enterprises such as NTT DOCOMO, NASDAQ, FINRA, Johnson & Johnson, Hearst, Amgen, and web-scale companies such as Yelp, Foursquare and Yahoo! have made Amazon Redshift a key component of their analytics infrastructure.

In this blog post, we look at performance improvements we’ve made over the last several months to Amazon Redshift, improving throughput by more than 2X and vacuuming performance by 10X.

Column Store
Large scale data warehousing is largely an I/O problem, and Amazon Redshift uses a distributed columnar architecture to minimize and parallelize I/O. In a column-store, each column of a table is stored in its own data block. This reduces data size, since we can choose compression algorithms optimized for each type of column. It also reduces I/O time during queries, because only the columns in the table that are being selected need to be retrieved.

However, while a column-store is very efficient at reading data, it is less efficient than a row-store at loading and committing data, particularly for small data sets. In patch 1.0.1012 (December 17, 2015), we released a significant improvement to our I/O and commit logic. This helped with small data loads and queries using temporary tables. While the improvements are workload-dependent, we estimate the typical customer saw a 35% improvement in overall throughput.

Regarding this feature, Naeem Ali, Director of Software Development, Data Science at Cablevision, told us:

Following the release of the I/O and commit logic enhancement, we saw a 2X performance improvement on a wide variety of workloads. The more complex the queries, the higher the performance improvement.

Improved Query Processing
In addition to enhancing the I/O and commit logic for Amazon Redshift, we released an improvement to the memory allocation for query processing in patch 1.0.1056 (May 17, 2016), increasing overall throughput by up to 60% (as measured on the standard TPC-DS benchmark at 3 TB), depending on the workload and the number of queries that spill from memory to disk. The query throughput improvement increases with the number of concurrent queries, as less data is spilled from memory to disk, reducing required I/O.

Taken together, these two improvements should double performance for customer workloads where a portion of the workload contains complex queries that spill to disk or cause temporary tables to be created.

Better Vacuuming
Amazon Redshift uses multi-version concurrency control to reduce contention between readers and writers to a table. Like PostgreSQL, it does this by marking old versions of data as deleted and new versions as inserted, using the transaction ID as a marker. This allows readers to build a snapshot of the data they are allowed to see and traverse the table without locking. One issue with this approach is that the system becomes slower over time, requiring a vacuum command to reclaim the space. This command reclaims the space from deleted rows and ensures that new data added to the table is placed in the right sorted order.

We are releasing a significant performance improvement to vacuum in patch 1.0.1056, available starting May 17, 2016. Customers previewing the feature have seen dramatic improvements both in vacuum performance and in overall system throughput, as vacuuming requires fewer resources.

Ari Miller, a Principal Software Engineer at TripAdvisor, told me:

We estimate that the vacuum operation on a 15TB table went about 10X faster with the recent patch, ultimately improving overall query performance.

You can query the VERSION function to verify that you are running at the desired patch level.

Available Now
Unlike on-premises data warehousing solutions, there are no license or maintenance fees for these improvements, and no work is required on your part to obtain them. They simply show up as part of the automated patching process during your maintenance window.

Maor Kleider, Senior Product Manager, Amazon Redshift

 

EC2 Instance Console Screenshot

by Jeff Barr | in Amazon EC2

When our users move existing machine images to the cloud for use on Amazon EC2, they occasionally encounter issues with drivers, boot parameters, system configuration settings, and in-progress software updates. These issues can cause the instance to become unreachable via RDP (for Windows) or SSH (for Linux) and can be difficult to diagnose. On a traditional system, the physical console often contains log messages or other clues that can be used to identify and understand what’s going on.

In order to provide you with additional visibility into the state of your instances, we now offer the ability to generate and capture screenshots of the instance console. You can generate screenshots while the instance is running or after it has crashed.

Here’s how you generate a screenshot from the console (the instance must be using HVM virtualization):

And here’s the result:

It can also be used for Windows instances:

You can also create screenshots using the CLI (aws ec2 get-console-screenshot) or the EC2 API (GetConsoleScreenshot).
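
Here's a minimal boto3 sketch that saves the screenshot to a local file; the instance ID is hypothetical:

import base64
import boto3

ec2 = boto3.client("ec2", region_name="us-east-1")

# Request a console screenshot for a (hypothetical) instance and save it
# as a JPG; the API returns the image as base64-encoded data.
response = ec2.get_console_screenshot(InstanceId="i-0123456789abcdef0", WakeUp=True)

with open("console-screenshot.jpg", "wb") as f:
    f.write(base64.b64decode(response["ImageData"]))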

Available Now
This feature is available today in the US East (Northern Virginia), US West (Oregon), US West (Northern California), Europe (Ireland), Europe (Frankfurt), Asia Pacific (Tokyo), Asia Pacific (Seoul), Asia Pacific (Singapore), Asia Pacific (Sydney), and South America (Brazil) Regions. There are no costs associated with it.

Jeff;

 

New AWS Quick Start Reference Deployment – Standardized Architecture for PCI DSS

by Jeff Barr | in Quick Start

If you build an application that processes credit card data, you need to conform to PCI DSS (Payment Card Industry Data Security Standard). Adherence to the standard means that you need to meet control objectives for your network, protect cardholder data, implement strong access controls, and more.

In order to help AWS customers build systems that conform to PCI DSS, we are releasing a new Quick Start Reference Deployment. The new Standardized Architecture for PCI DSS on the AWS Cloud (PDF or HTML) includes an AWS CloudFormation template that deploys a standardized environment that falls in scope for PCI DSS compliance (version 3.1).

The template describes a stack that deploys a multi-tiered Linux-based web application in about 30 minutes. It makes use of child templates, and can be customized as desired. It launches a pair of Virtual Private Clouds (Management and Production) and can accommodate a third VPC for development:

The template sets up the IAM items (policies, groups, roles, and instance profiles), S3 buckets (encrypted web content, logging, and backup), a Bastion host for troubleshooting and administration, an encrypted RDS database instance running in multiple Availability Zones, and a logging / monitoring / alerting package that makes use of AWS CloudTrail, Amazon CloudWatch, and AWS Config Rules. The architecture supports a wide variety of AWS best practices (all of which are detailed in the document) including use of multiple Availability Zones, isolation using public and private subnets, load balancing, auto scaling, and more.

You can use the template to set up an environment that you can use for learning, as a prototype, or as the basis for your own template.
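
If you would rather launch the Quick Start from code than from the console, the general shape of the call looks like this; the stack name, template URL, and parameter below are placeholders, since the real values come from the Quick Start documentation:

import boto3

cloudformation = boto3.client("cloudformation", region_name="us-east-1")

# Launch the Quick Start stack. The template URL and parameters are
# placeholders; use the values published with the Quick Start.
cloudformation.create_stack(
    StackName="pci-dss-quickstart",
    TemplateURL="https://example-bucket.s3.amazonaws.com/pci-dss/main.template",
    Parameters=[
        {"ParameterKey": "KeyPairName", "ParameterValue": "my-key-pair"},
    ],
    Capabilities=["CAPABILITY_IAM"],
)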

The Quick Start also includes a Security Controls Reference. This document maps the security controls called out by PCI DSS to the relevant architecture decisions, features, and configurations.

Jeff;

PS – Check out our other AWS Enterprise Accelerator Quick Starts!

 

 

Arduino Web Editor and Cloud Platform – Powered by AWS

by Jeff Barr | in Announcements, AWS Lambda, Internet of Things

Last night I spoke with Luca Cipriani from Arduino to learn more about the new AWS-powered Arduino Web Editor and Arduino Cloud Platform offerings. Luca was en-route to the Bay Area Maker Faire and we had just a few minutes to speak, but that was enough time for me to learn a bit about what they have built.

If you have ever used an Arduino, you know that there are several steps involved. First you need to connect the board to your PC’s serial port using a special cable (you can also use Wi-Fi if you have the appropriate add-on “shield”), ensure that the port is properly configured, and establish basic communication. Then you need to install, configure, and launch your development environment, make sure that it can talk to your Arduino, tell it which make and model of Arduino that you are using, and select the libraries that you want to call from your code. With all of that taken care of, you are ready to write code, compile it, and then download it to the board for debugging and testing.

Arduino Code Editor
Luca told me that the Arduino Code Editor was designed to simplify and streamline the setup and development process. The editor runs within your browser and is hosted on AWS (although we did not have time to get in to the details, I understand that they made good use of AWS Lambda and several other AWS services).

You can write and modify your code, save it to the cloud and optionally share it with your colleagues and/or friends. The editor can also detect your board (using a small native plugin) and configure itself accordingly; it even makes sure that you can only write code using libraries that are compatible with your board. All of your code is compiled in the cloud and then downloaded to your board for execution.

Here’s what the editor looks like (see Sneak Peek on the New, Web-Based Arduino Create for more):

Arduino Cloud Platform
Because Arduinos are small, easy to program, and consume very little power, they work well in IoT (Internet of Things) applications. Even better, it is easy to connect them to all sorts of sensors, displays, and actuators so that they can collect data and effect changes.

The new Arduino Cloud Platform is designed to simplify the task of building IoT applications that make use of Arduino technology. Connected devices will be able to connect to the Internet, upload information derived from sensors, and effect changes upon command from the cloud. Building upon the functionality provided by AWS IoT, this new platform will allow devices to communicate with the Internet and with each other. While the final details are still under wraps, I believe that this will pave the way for sensors to activate Lambda functions and for Lambda functions to take control of displays and actuators.

I look forward to learning more about this platform as the details become available!

Jeff;

 

AWS Accelerator for Citrix – Migrate or Deploy XenApp & XenDesktop to the Cloud

by Jeff Barr

If you are running Citrix XenApp, XenDesktop and/or NetScaler on-premises and are interested in moving to the AWS Cloud, I have a really interesting offer for you!

In cooperation with our friends at Citrix (an Advanced APN Technology Partner), we have assembled an AWS Accelerator to help you to plan and execute a successful trial migration while using your existing licenses. The migration process makes use of the new Citrix Lifecycle Management (CLM) tool. CLM includes a set of proven migration blueprints that will help you to move your existing deployment to AWS. You can also deploy the XenApp and XenDesktop Service using Citrix Cloud, and tap CLM to manage your AWS-based resources.

Here’s the Deal
The AWS Accelerator lets you conduct a 25-user trial migration / proof of concept over a 60 day period. During that time you can use CLM to deploy XenApp, XenDesktop, and NetScaler on AWS per the reference architecture and a set of best practices. We will provide you with AWS Credit ($5000) and Citrix will provide you with access to CLM. A select group of joint AWS and Citrix launch partners will deliver the trials with the backing and support of technical and services teams from both companies.

Getting Started
Here’s what you need to do to get started:

  1. Contact your AWS (email us) or Citrix account team and ask to join the AWS Accelerator.
  2. Submit your request in order to be considered for Amazon EC2 credits and a trial of Citrix CLM.
  3. Create an AWS account if you don’t already have one.

After you do this, follow the steps in the Citrix blueprint (Deploy the XenApp and XenDesktop Proof of Concept blueprint with NetScaler to AWS) to build your proof-of-concept environment.

Multiple AWS Partners are ready, willing, and able to help you to work through the blueprint and to help you to tailor it to the needs of your organization. The AWS Accelerator Launch Services Partners include Accenture, Booz Allen Hamilton, CloudNation, Cloudreach, Connectria, Equinix (EPS Cloud), REAN Cloud, and SSI-Net. Our Launch Direct Connect partner is Level 3.

Learn More at Synergy
AWS will be sponsoring Citrix Synergy next week in Las Vegas and will be at booth #770. Citrix will also be teaching a hands-on lab (SYN618) based on the AWS Accelerator program on Monday, May 23rd, at 8 AM. If you are interested in learning more, please sign up for the hands-on lab or stop by the booth and say hello to my colleagues!

Jeff;

 

New – Cross-Account Snapshot Sharing for Amazon Aurora

by Jeff Barr | in Amazon Aurora

Amazon Aurora is a high-performance, MySQL-compatible database engine. Aurora combines the speed and availability of high-end commercial databases with the simplicity and cost-effectiveness of open source databases (see my post, Amazon Aurora – New Cost-Effective MySQL-Compatible Database Engine for Amazon RDS, to learn more). Aurora shares some important attributes with the other database engines that are available for Amazon RDS, including easy administration, push-button scalability, speed, security, and cost-effectiveness.

You can create a snapshot backup of an Aurora cluster with just a couple of clicks. After you have created a snapshot, you can use it to restore your database, once again with a couple of clicks.

Share Snapshots
Today we are giving you the ability to share your Aurora snapshots. You can share them with other AWS accounts and you can also make them public. These snapshots can be used to restore the database to an Aurora instance running in a separate AWS account in the same Region as the snapshot.

There are several primary use cases for snapshot sharing:

Separation of Environments – Many AWS customers use separate AWS accounts for their development, test, staging, and production environments. You can share snapshots between these accounts as needed. For example, you can generate the initial database in your staging environment, snapshot it, share the snapshot with your production account, and then use it to create your production database. Or, should you encounter an issue with your production code or queries, you can create a snapshot of your production database and then share it with your test account for debugging and remediation.

Partnering – You can share database snapshots with selected partners on an as-needed basis.

Data Dissemination – If you are running a research project, you can generate snapshots and then share them publicly. Interested parties can then create their own Aurora databases using the snapshots, using your work and your data as a starting point.

To share a snapshot, simply select it in the RDS Console and click on Share Snapshot. Then enter the target AWS account (or click on Public to share the snapshot publicly) and click on Add:

You can share manually generated, unencrypted snapshots with other AWS accounts or publicly. You cannot share automatic snapshots or encrypted snapshots.
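
The console flow has an API equivalent; here is a minimal boto3 sketch, with a hypothetical snapshot identifier and account ID:

import boto3

rds = boto3.client("rds", region_name="us-east-1")

# Share a manual, unencrypted Aurora cluster snapshot with another account.
# Use ValuesToAdd=["all"] to make the snapshot public instead.
rds.modify_db_cluster_snapshot_attribute(
    DBClusterSnapshotIdentifier="my-aurora-snapshot",
    AttributeName="restore",
    ValuesToAdd=["123456789012"],
)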

The shared snapshot becomes visible in the other account right away:

Public snapshots are also visible (select All Public Snapshots as the Filter):

Available Now
This feature is available now and you can start using it today.

Jeff;

X1 Instances for EC2 – Ready for Your Memory-Intensive Workloads

by Jeff Barr | in Amazon EC2

Many AWS customers are running memory-intensive big data, caching, and analytics workloads and have been asking us for EC2 instances with ever-increasing amounts of memory.

Last fall, I first told you about our plans for the new X1 instance type. Today, we are announcing availability of this instance type with the launch of the x1.32xlarge instance size. This instance has the following specifications:

  • Processor: 4 x Intel™ Xeon E7 8880 v3 (Haswell) running at 2.3 GHz – 64 cores / 128 vCPUs.
  • Memory: 1,952 GiB with Single Device Data Correction (SDDC+1).
  • Instance Storage: 2 x 1,920 GB SSD.
  • Network Bandwidth: 10 Gbps.
  • Dedicated EBS Bandwidth: 10 Gbps (EBS Optimized by default at no additional cost).

The Xeon E7 processor supports Turbo Boost 2.0 (up to 3.1 GHz), AVX 2.0, AES-NI, and the very interesting (to me, anyway) TSX-NI instructions. AVX 2.0 (Advanced Vector Extensions) can improve performance on HPC, database, and video processing workloads; AES-NI improves the speed of applications that make use of AES encryption. The new TSX-NI instructions support something cool called transactional memory. The instructions allow highly concurrent, multithreaded applications to make very efficient use of shared memory by reducing the amount of low-level locking and unlocking that would otherwise be needed around each memory access.

If you are ready to start using the X1 instances in the US East (Northern Virginia), US West (Oregon), Europe (Ireland), Europe (Frankfurt), Asia Pacific (Tokyo), Asia Pacific (Singapore), or Asia Pacific (Sydney) Regions, please request access and we’ll get you going as soon as possible. We have plans to make the X1 instances available in other Regions and in other sizes before too long.

3-year Partial Upfront Reserved Instance Pricing starts at $3.970 per hour in the US East (Northern Virginia) Region; see the EC2 Pricing page for more information. You can purchase Reserved Instances and Dedicated Host Reservations today; Spot bidding is on the near-term roadmap.

Here are some screen shots of an x1.32xlarge in action. lscpu shows that there are 128 vCPUs spread across 4 sockets:

On bootup, the kernel reports on the total accessible memory:

The top command shows a huge number of running processes and lots of memory:

Ready for Enterprise-Scale SAP Workloads
The X1 instances have been certified by SAP for production workloads. They meet the performance bar for SAP OLAP and OLTP workloads backed by SAP HANA.

You can migrate your on-premises deployments to AWS and you can also start fresh. Either way, you can run S/4HANA, SAP’s next-generation Business Suite, as well as earlier versions.

Many AWS customers are currently running HANA in scale-out fashion across multiple R3 instances. Many of these workloads can now be run on a single X1 instance. This configuration will be simpler to set up and less expensive to run. As I mention below, our updated SAP HANA Quick Start will provide you with more information on your configuration options.

Here’s what SAP HANA Studio looks like when run on an X1 instance:

You have several interesting options when it comes to disaster recovery (DR) and high availability (HA) when you run your SAP HANA workloads on an X1 instance. For example:

  • Auto Recovery – Depending on your RPO (Recovery Point Objective) and RTO (Recovery Time Objective), you may be able to use a single instance in concert with EC2 Auto Recovery.
  • Hot Standby – You can run X1 instances in 2 Availability Zones and use HANA System Replication to keep the spare instance in sync.
  • Warm Standby / Manual Failover – You can run a primary X1 instance and a smaller secondary instance configured to persist only to permanent storage. In the event that a failover is necessary, you stop the secondary instance, modify the instance type to X1, and reboot (a rough sketch of this sequence appears after this list). This unique, AWS-powered option will give you quick recovery while keeping costs low.
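
Here is a rough sketch of that warm standby failover sequence; the instance ID is hypothetical, and a real runbook would include HANA-level steps as well:

import boto3

ec2 = boto3.client("ec2", region_name="us-east-1")
instance_id = "i-0123456789abcdef0"  # hypothetical secondary instance

# Stop the secondary, resize it to X1, and start it again.
ec2.stop_instances(InstanceIds=[instance_id])
ec2.get_waiter("instance_stopped").wait(InstanceIds=[instance_id])

ec2.modify_instance_attribute(
    InstanceId=instance_id,
    InstanceType={"Value": "x1.32xlarge"},
)

ec2.start_instances(InstanceIds=[instance_id])
ec2.get_waiter("instance_running").wait(InstanceIds=[instance_id])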

We have updated our HANA Quick Start as part of today’s launch. You can get SAP HANA running in a new or existing VPC within an hour using a well-tested configuration:

The Quick Start will help you to configure the instance and the associated storage, install the requisite operating system packages, and to install SAP HANA.

We have also released a SAP HANA Migration Guide. It will help you to migrate your existing on-premises or AWS-based SAP HANA workloads to AWS.

Jeff;

I Love My Amazon WorkSpace!

by Jeff Barr | in Amazon WorkDocs, Amazon WorkSpaces

Early last year my colleague Steve Mueller stopped by my office to tell me about an internal pilot program that he thought would be of interest to me. He explained that they were getting ready to run Amazon WorkSpaces on the Amazon network and offered to get me on the waiting list. Of course, being someone that likes to live on the bleeding edge, I accepted his offer.

Getting Started
Shortly thereafter I started to run the WorkSpaces client on my office desktop, a fairly well-equipped PC with two screens and plenty of memory. At that time I used the desktop during the working day and a separate laptop when I was traveling or working from home. Even though I used Amazon WorkDocs to share my files between the two environments, switching between them caused some friction. I had distinct sets of browser tabs, bookmarks, and the like. No matter how much I tried, I could never manage to keep the configurations of my productivity apps in sync across the environments.

After using the WorkSpace at the office for a couple of weeks, I realized that it was just as fast and responsive as my desktop. Over that time, I made the WorkSpace into my principal working environment and slowly severed my ties to my once trusty desktop.

I work from home two or three days per week. My home desktop has two large screens, lots of memory, a top-notch mechanical keyboard, and runs Ubuntu Linux. I run VirtualBox and Windows 7 on top of Linux. In other words, I have a fast, pixel-rich environment.

Once I was comfortable with my office WorkSpace, I installed the client at home and started using it there. This was a giant leap forward and a great light bulb moment for me. I was now able to use my fast, pixel-rich home environment to access my working environment.

At this point you are probably thinking that the combination of client virtualization and server virtualization must be slow, laggy, or less responsive than a local device. That’s just not true! I am an incredibly demanding user. I pound on the keyboard at a rapid-fire clip, keep tons of windows open, alt-tab between them like a ferret, and am absolutely intolerant of systems that get in my way. My WorkSpace is fast and responsive and makes me even more productive.

Move to Zero Client
A few months into my WorkSpaces journey, Steve IM’ed me to talk about his plan to make some Zero Client devices available to members of the pilot program. I liked what he told me and I agreed to participate. He and his sidekick Michael Garza set me up with a Dell Zero Client and two shiny new monitors that had been taking up space under Steve’s desk. At this point my office desktop had no further value to me. I unplugged it, saluted it for its meritorious service, and carried it over to the hardware return shelf in our copy room. I was now all-in, and totally dependent on, my WorkSpace and my Zero Client.

The Zero Client is a small, quiet device. It has no fans and no internal storage. It simply connects to the local peripherals (displays, keyboard, mouse, speakers, and audio headset) and to the network. It produces little heat and draws far less power than a full desktop.

During this time I was also doing quite a bit of domestic and international travel. I began to log in to my WorkSpace from the road. Once I did this, I realized that I now had something really cool—a single, unified working environment that spanned my office, my home, and my laptop. I had one set of files and one set of apps and I could get to them from any of my devices. I now have a portable desktop that I can get to from just about anywhere.

The fact that I was using a remote WorkSpace instead of local compute power faded into the background pretty quickly. One morning I sent the team an email with the provocative title “My WorkSpace has Disappeared!” They read it in a panic, only to realize that I had punked them, and that I was simply letting them know that I was able to focus on my work, and not on my WorkSpace. I did report a few bugs to them, none of which were serious, and all of which were addressed really quickly.

Dead Laptop
The reality of my transition became apparent late last year when the hard drive in my laptop failed one morning. I took it in to our IT helpdesk and they replaced the drive. Then I went back up to my office, reinstalled the WorkSpaces client, and kept on going. I installed no other apps and didn’t copy any files. At this point the only personal items on my laptop are the registration code for the WorkSpace and my stickers! I do still run PowerPoint locally, since you can never know what kind of connectivity will be available at a conference or a corporate presentation.

I also began to notice something else that made WorkSpaces different and better. Because laptops are portable and fragile, we all tend to think of the information stored on them as transient. In the dark recesses of our minds we know that one day something bad will happen and we will lose the laptop and its contents. Moving to WorkSpaces takes this worry away. I know that my files are stored in the cloud and that losing my laptop would be essentially inconsequential.

It Just Works
To borrow a phrase from my colleague James Hamilton, WorkSpaces just works. It looks, feels, and behaves just like a local desktop would.

As I said before, I am a demanding user. I have two big monitors, run lots of productivity apps, and keep far too many browser windows and tabs open. I also do things that have not been a great fit for virtual desktops up until now. For example:

Image Editing – I capture and edit all of the screen shots for this blog (thank you, Snagit).

Audio Editing – I use Audacity to edit the AWS Podcasts. This year I plan to use the new audio-in support to record podcasts on my WorkSpace.

Music – I installed the Amazon Music player and listen to my favorite tunes while blogging.

Video – I watch internal and external videos.

Printing – I always have access to the printers on our corporate network. When I am at home, I also have access to the laser and ink jet printers on my home network.

Because the WorkSpace is running on Amazon’s network, I can download large files without regard to local speed limitations or bandwidth caps. Here’s a representative speed test (via Bandwidth Place):

Sense of Permanence
We transitioned from our pilot WorkSpaces to our production environment late last year and are now provisioning WorkSpaces for many members of the AWS team. My WorkSpace is now my portable desktop.

After having used WorkSpaces for well over a year, I have to report that the biggest difference between it and a local environment isn’t technical. Instead, it simply feels different (and better).  There’s a strong sense of permanence—my WorkSpace is my environment, regardless of where I happen to be. When I log in, my environment is always as I left it. I don’t have to wait for email to sync or patches to install, as I did when I would open up my laptop after it had been off for a week or two.

Now With Tagging
As enterprises continue to evaluate, adopt, and deploy WorkSpaces in large numbers, they have asked us for the ability to track usage for cost allocation purposes. In many cases they would like to see which WorkSpaces are being used by each department and/or project. Today we are launching support for tagging of WorkSpaces. The WorkSpaces administrator can now assign up to 10 tags (key/value pairs) to each WorkSpace using the AWS Management Console, AWS Command Line Interface (CLI), or the WorkSpaces API. Once tagged, the costs are visible in the AWS Cost Allocation Report where they can be sliced and diced as needed for reporting purposes.
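
From code, the same tagging can be done with a call like this; the WorkSpace ID and tag values are hypothetical:

import boto3

workspaces = boto3.client("workspaces", region_name="us-east-1")

# Tag a (hypothetical) WorkSpace for cost allocation reporting.
workspaces.create_tags(
    ResourceId="ws-0123456789",
    Tags=[
        {"Key": "Department", "Value": "Marketing"},
        {"Key": "Project", "Value": "Website-Redesign"},
    ],
)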

Here’s how the WorkSpaces administrator can use the Console to manage the tags for a WorkSpace:

Tags are available today in all Regions where WorkSpaces is available: US East (Northern Virginia), US West (Oregon), Europe (Ireland), Asia Pacific (Singapore), Asia Pacific (Tokyo), and Asia Pacific (Sydney).

Learning More
If you have found my journey compelling and would like to learn more, here are some resources to get you started:

Request a Demo
If you and your organization could benefit from Amazon WorkSpaces and would like to learn more, please get in touch with our team at workspaces-feedback@amazon.com.

Jeff;