Category: Compute


AWS Expansion in Oregon – Amazon Redshift and Amazon EC2 High Storage Instances

You can now launch Amazon Redshift clusters and EC2 High Storage instances in the US West (Oregon) Region.

Amazon Redshift
Amazon Redshift is a fully managed data warehouse service that lets you create, run, and manage a petabyte-scale data warehouse with ease. You can scale up by adding nodes (giving you more storage and more processing power), and you pay a low hourly fee for each node. Reserved Nodes are also available; they bring Amazon Redshift’s price to under $1,000 per terabyte per year, less than 1/10th the cost of most traditional data warehouse systems.
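
If you would like to script the launch, here is a minimal sketch using the boto Python library (the cluster identifier, user name, and password are placeholders, and boto's Redshift support is relatively new, so verify the call against your boto release):

    # Sketch: launch a single-node Amazon Redshift cluster in US West (Oregon).
    # Assumes a boto release with Redshift support and AWS credentials configured.
    import boto.redshift

    conn = boto.redshift.connect_to_region('us-west-2')
    response = conn.create_cluster(
        cluster_identifier='my-first-cluster',   # placeholder name
        node_type='dw.hs1.xlarge',               # entry-level Redshift node type
        master_username='admin',                 # placeholder
        master_user_password='ChangeMe123',      # placeholder; use a real secret
        cluster_type='single-node',
    )
    print(response)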

Seattle-based Redfin is ready to use Amazon Redshift in US West (Oregon). Data Scientist Justin Yan told us that:

We took Amazon Redshift for a test run the moment it was released.  It’s fast. It’s easy. Did I mention it’s ridiculously fast? We’ve been waiting for a suitable data warehouse at big data scale, and ladies and gentlemen, it’s here.  We’ll be using it immediately to provide our analysts an alternative to Hadoop. I doubt any of them will want to go back.

Here’s a video that will introduce you to Redshift:

If you’re interested in helping to build and grow Amazon Redshift, we’re hiring in Seattle and Palo Alto, so drop us a line! Here are some of our open positions:

High Storage Instances
We are also launching the High Storage Eight Extra Large (hs1.8xlarge) instances in the Region. Each of these instances includes 117 GiB of RAM, 16 virtual cores (providing 35 ECU of compute performance), and 48 TB of instance storage across 24 hard disk drives. You can get up to 2.4 GB per second of I/O performance from these drives, making them ideal for data-intensive applications that require high storage density and high sequential I/O.
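
If you want to take one for a spin, here is a quick boto sketch (the AMI ID and key pair name are placeholders):

    # Sketch: launch an hs1.8xlarge High Storage instance in US West (Oregon).
    # Assumes boto and AWS credentials; the AMI ID and key name are placeholders.
    import boto.ec2

    conn = boto.ec2.connect_to_region('us-west-2')
    reservation = conn.run_instances(
        'ami-12345678',              # placeholder AMI ID
        instance_type='hs1.8xlarge',
        key_name='my-key-pair',      # placeholder key pair
    )
    print(reservation.instances[0].id)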

Localytics of Boston has moved their primary analytics database to the hs1.8xlarge instance type, replacing an array of a dozen RAID 0 volumes. The large storage capacity coupled with the increased performance (especially for sequential reads and writes) makes this instance type an ideal host for their application. According to Mohit Dilawari of Localytics, “We are extremely happy with these instances. Our site is substantially faster after the recent migration, yet our instance cost has decreased.”

If you are interested in helping to design and deliver instance types similar to the hs1.8xlarge to enable our customers to run high performance applications in the cloud, the EC2 team would like to talk to you about this Senior Product Manager position.

— Jeff;

PC2 – The New Punched Card Cloud

At least once per decade (I’ve been at this evangelism thing for a long time), a potential customer will ask me [1] if we have ever thought of building a cloud of mainframe computers. They recognize the benefits of cloud computing, but are reluctant to give up their Job Control Language, their decks of punched cards, their flowchart templates, and the incredible sense of job security that comes along with being the world’s oldest COBOL programmer.

In order to address this customer segment, we are launching our new Punched Card Cloud, or PC2 for short.

The Hardware
The new mf (mainframe) instance family contains one member, the mf.medium. Packing a whopping 8 Megabytes of RAM and running at a breathtaking 1 MHz, you’ll happily wait for hours (if not days) for your compute-bound jobs to complete. These instances support the complete IBM 370 instruction set, including perennial favorites like the irreplaceable BALR (Branch and Link Register) and the utterly fabulous M (Multiply).

We are launching with four instances (not types, just four actual instances) in the US East (Northern Virginia) Region. If anyone uses them, we’ll go back to the junk dealer and get more.  

The Software
The PC2 is compatible with all of your favorite mainframe operating systems from the 1960s, ’70s, and ’80s, but you’ll need to supply your own license. We have MMIs (Mainframe Machine Images) for OS/360, MFT, and VM/370 (my personal favorite).

You can upload, compile, and run the FORTRAN, COBOL, and PL/I programs of your choice on PC2. You can also use the AWS SDKs to call the AWS APIs:

Extended AWS Import/Export
We have extended the popular AWS Import/Export feature to handle the devices and media types common to the mainframe era. You can send card decks (be sure to use at least two rubber bands), 9-track tapes (no write enable ring needed), and the classic RAMAC 305 drives.

We’ll meet the drive at the nearest airport and escort it to the nearest Import/Export facility with white-gloved care.

Cloud Control Language
Your well-honed Job Control Language skills and your dog-eared, coffee-stained yellow card are still of value in the cloud.

The AWS Cloud Control Language gives you the power to create, manage, and destroy cloud resources using line-oriented statements specified on punch cards. Think of it as AWS CloudFormation without the elegance, lower case letters, punctuation, or JSON. 

You can launch a single EC2 instance or create an S3 bucket using less than 100 error-prone lines of CCL. If you make a trivial syntax error, your CCL deck will be rejected (after a delay of several hours), along with an inscrutable error message in your choice of Esperanto or Quechua.

Here’s the start of an example to give you an idea of what can be done with CCL:

And There You Go
The Punched Card Cloud is ready to go today and you can start using it now. Give it a shot and let me know what you think!

— Jeff;

[1] – I was once asked for a “cloud of mainframes.” There were witnesses and they are willing to back me up on this.

Amazon Linux AMI 2013.03 Now Available

Max Spevack runs the team that produces the Amazon Linux AMI. Today he’s here to tell you about the newest version of this popular AMI.

— Jeff;


Following our usual six month release cycle, the Amazon Linux AMI 2013.03 is now available.

As always, our goal with the Amazon Linux AMI is to ensure that EC2 customers have a stable, secure, and simple Linux-based AMI that integrates well with other AWS offerings.

Here are some of the highlights of this release of the Amazon Linux AMI:

  • Kernel 3.4.37: We have upgraded the kernel to version 3.4.37, which is part of the long-term stable 3.4 kernel series. This is a change from previous releases of the Amazon Linux AMI, which were on the 3.2 kernel series.
  • OpenSSH 6: In response to customer requests, we have moved to OpenSSH 6 for this release of the Amazon Linux AMI. This enables the configuration option of AuthenticationMethods for requiring multi-factor authentication.
  • OpenSSL 1.0.1: Also based on customer requests, we have added OpenSSL 1.0.1 to this release while retaining compatibility with OpenSSL 1.0.0.
  • New AWS command line tools: We are excited to include the Developer Preview of the new AWS Command Line Interface. The aws-cli tool is written in Python and provides a one-stop shop for controlling multiple AWS services through the command line.
  • New and updated packages: We have included a number of new packages based on customer requests as well as updates to existing packages. Please see our Amazon Linux AMI 2013.03 release notes for more information.

The Amazon Linux AMI 2013.03 is available for launch in all regions. Users of 2012.09, 2012.03, and 2011.09 versions of the Amazon Linux AMI can easily upgrade using yum.
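
If you locate AMIs programmatically, here is one way to list the 2013.03 images with boto (the name pattern is an assumption about the AMI naming scheme; check the release notes for the exact names in your region):

    # Sketch: find Amazon Linux AMI 2013.03 images in a region using boto.
    # The name pattern is an assumption; see the release notes for exact names.
    import boto.ec2

    conn = boto.ec2.connect_to_region('us-east-1')
    images = conn.get_all_images(
        owners=['amazon'],
        filters={'name': 'amzn-ami-pv-2013.03*', 'architecture': 'x86_64'},
    )
    for image in sorted(images, key=lambda i: i.name):
        print(image.id + '  ' + image.name)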

The Amazon Linux AMI is a rolling release, configured to deliver a continuous flow of updates that allow you to roll from one version of the Amazon Linux AMI to the next. In other words, Amazon Linux AMIs are treated as snapshots in time, with a repository and update structure that gives you the latest packages that we have built and pushed into the repository. If you prefer to lock your Amazon Linux AMI instances to a particular version, please see the Amazon Linux AMI FAQ for instructions.

As always, if you need any help with the Amazon Linux AMI, don’t hesitate to post on the EC2 forum, and someone from the team will be happy to assist you.

— Max

PS – Help us to build the Amazon Linux AMI! We are actively hiring for Linux Systems Engineer, Linux Software Development Engineer, and Linux Kernel Engineer positions:

Additional EBS-Optimized Instance Types for Amazon EC2

We are adding EBS-optimized support to four additional EC2 instance types. You can now request dedicated throughput between EC2 instances and your EBS (Elastic Block Store) volumes at launch time:

    Instance Type       Dedicated Throughput
    m1.large            500 Mbps
    m1.xlarge           1000 Mbps
    m2.2xlarge (new)    500 Mbps
    m2.4xlarge          1000 Mbps
    m3.xlarge (new)     500 Mbps
    m3.2xlarge (new)    1000 Mbps
    c1.xlarge (new)     1000 Mbps

The dedicated network throughput that you get when you request EBS-optimized support makes volume performance more predictable and more consistent. You can use EBS-optimized instances with both Standard and Provisioned IOPS EBS volumes, as your needs dictate.
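
EBS-optimized support is a launch-time option. Here is a minimal boto sketch (the AMI ID and Availability Zone are placeholders, and the ebs_optimized flag requires a reasonably recent boto):

    # Sketch: launch an EBS-optimized m1.large and a Provisioned IOPS volume.
    # Assumes boto and AWS credentials; IDs and the AZ are placeholders.
    import boto.ec2

    conn = boto.ec2.connect_to_region('us-east-1')

    reservation = conn.run_instances(
        'ami-12345678',            # placeholder AMI ID
        instance_type='m1.large',  # 500 Mbps of dedicated EBS throughput
        ebs_optimized=True,
    )

    # A 100 GB volume provisioned for 1000 IOPS ('io1' is the PIOPS volume type).
    volume = conn.create_volume(100, 'us-east-1a', volume_type='io1', iops=1000)
    print(reservation.instances[0].id + '  ' + volume.id)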

With this change you now have additional choices to make when you design and build your AWS-powered applications. You could, for example, use Standard volumes for reference data and low-volume log files, stepping up to Provisioned IOPS volumes for more demanding, I/O intensive loads such as databases. I would also advise you to spend a few minutes studying the implications of the EBS pricing models for the two volume types. When you use Standard volumes you pay a modest charge for each I/O request ($0.10 for 1 million requests in the US East Region, for example). However, when you use Provisioned IOPS, there is no charge for individual requests. Instead, you pay a monthly fee based on the number of IOPS provisioned for the volume (e.g. $0.10 per provisioned IOPS per month, again in US East). The AWS Calculator has been updated and you can use it to explore your options and the related costs.
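
To make the trade-off concrete, here is a back-of-the-envelope comparison using the US East prices quoted above (storage charges and other fees are intentionally omitted, so treat this as a rough sketch rather than a full cost model):

    # Rough comparison of the two EBS I/O pricing models (US East prices above).
    # Example workload: a steady 100 IOPS, around the clock, for a 30-day month.
    iops = 100
    requests_per_month = iops * 86400 * 30          # about 259.2 million requests

    standard_cost = (requests_per_month / 1e6) * 0.10  # $0.10 per million requests
    piops_cost = iops * 0.10                           # $0.10 per IOPS-month

    print('Standard volume I/O charge:  $%.2f' % standard_cost)   # ~$25.92
    print('Provisioned IOPS fee:        $%.2f' % piops_cost)      # $10.00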

EBS-optimized support is now available in the US East (Northern Virginia), US West (Northern California), US West (Oregon), EU West (Ireland), Asia Pacific (Singapore), Asia Pacific (Sydney), Asia Pacific (Tokyo), and South America (São Paulo) Regions.

— Jeff;

 

Cross Region EC2 AMI Copy

We know that you want to build applications that span AWS Regions and we’re working to provide you with the services and features needed to do so. We started out by launching the EBS Snapshot Copy feature late last year. This feature gave you the ability to copy a snapshot from Region to Region with just a couple of clicks. In addition, last month we made a significant reduction (26% to 83%) in the cost of transferring data between AWS Regions, making it less expensive to operate in more than one AWS region.

Today we are introducing a new feature: Amazon Machine Image (AMI) Copy. AMI Copy enables you to easily copy your Amazon Machine Images between AWS Regions. AMI Copy helps enable several key scenarios including:

  • Simple and Consistent Multi-Region Deployment – You can copy an AMI from one region to another, enabling you to easily launch consistent instances based on the same AMI into different regions.
  • Scalability – You can more easily design and build world-scale applications that meet the needs of your users, regardless of their location.
  • Performance – You can increase performance by distributing your application and locating critical components of your application in closer proximity to your users. You can also take advantage of region-specific features such as instance types or other AWS services.
  • Even Higher Availability – You can design and deploy applications across AWS Regions to increase availability.

Console Tour
You can initiate copies from the AWS Management Console, the command line tools, the EC2 API, or the AWS SDKs. Let’s walk through the process of copying an AMI using the Console.

From the AMIs view of the AWS Management Console, select the AMI and click on Copy:

Choose the Copy AMI operation and the Console will ask you where you would like to copy the AMI:

After you have made your selections and started the copy you will be provided with the ID of the new AMI in the destination region:

Once the new AMI is in the Available state, the copy is complete.

A Few Important Notes
Here are a few things to keep in mind about this new feature:

  • You can copy any AMI that you own, EBS or instance store (S3) backed, and with any operating system.
  • The copying process doesn’t copy permissions or tags so you’ll need to make other arrangements to bring them over to the destination region.
  • The copy process will result in a separate and new AMI in the destination region which will have a unique AMI ID.
  • The copy process will update the Amazon Ramdisk Image (ARI) and Amazon Kernel Image (AKI) references to point to equivalent images in the destination region.
  • You can copy the same AMI to multiple regions simultaneously.
  • The console-based interface is push-based; you log in to the source region and select where you’d like the AMI to end up. The API and the command line tools are, by contrast, pull-based: you run them against the destination region and specify the source region, as in the sketch below.
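
Here is what the pull-based call looks like from Python with boto (the AMI ID is a placeholder, and copy_image is only present in recent boto releases, so verify it against your version):

    # Sketch: copy an AMI from US East (N. Virginia) to US West (Oregon).
    # Pull-based: connect to the DESTINATION region, name the SOURCE region.
    import boto.ec2

    conn = boto.ec2.connect_to_region('us-west-2')    # destination
    copy = conn.copy_image(
        source_region='us-east-1',
        source_image_id='ami-12345678',               # placeholder AMI ID
        name='my-copied-ami',
        description='Copied from us-east-1',
    )
    print(copy.image_id)   # the new, unique AMI ID in the destination region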

— Jeff;

AWS Elastic Beanstalk for Node.js

I’m happy to be able to tell you that AWS Elastic Beanstalk now supports Node.js applications. You can now build event-driven Node.js applications and then use Elastic Beanstalk to deploy and manage them on AWS.

Elastic Beanstalk automatically configures the environment and resources using sensible defaults in order to run your Node.js application. You can focus on writing your application and let Elastic Beanstalk run it and scale it automatically.

Elastic Beanstalk for Node.js includes a bunch of features that are specific to the Node.js environment. Here are some of my favorites:

  • Choose Nginx or Apache as the reverse proxy to your Node.js application. You can even choose not to use any proxy if your application requires that the client establish a direct connection.
  • Configure HTTP and TCP load balancing depending on what your application needs. If your application uses WebSockets, then TCP load balancing might be more appropriate for your workload.
  • Configure the Node.js stack by using the specific version of Node.js that your application needs or by providing the command that is used to launch your Node.js application. You can also manage dependencies using npm.
  • Help improve performance by configuring gzip compression and static files when using Nginx or Apache. With gzip compression, you can reduce the size of your response to the client to help create faster transfer speeds. With static files, you can let Nginx or Apache quickly serve your static assets (such as images or CSS) without having these requests take time away from the data-intensive processing that your Node.js application might be performing.
  • Seamlessly integrate your app with Amazon RDS to store and retrieve data from a relational data store.
  • Customize your EC2 instances or connect your app to AWS resources using Elastic Beanstalk configuration files (visit the AWS Elastic Beanstalk Developer Guide to learn more about configuration files).
  • Run your Node.js application inside an Amazon Virtual Private Cloud for additional networking control.
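
If you would rather script the setup than click through the console, here is a minimal sketch using boto's Elastic Beanstalk support (the application and environment names are placeholders, and the solution stack name is an assumption; call list_available_solution_stacks() to see the current values):

    # Sketch: create a Node.js Elastic Beanstalk application and environment.
    # Assumes a boto release with Elastic Beanstalk support; names below are
    # placeholders.
    import boto.beanstalk

    conn = boto.beanstalk.connect_to_region('us-east-1')
    conn.create_application(application_name='my-node-app')
    conn.create_environment(
        application_name='my-node-app',
        environment_name='my-node-env',
        # Assumed stack name; list_available_solution_stacks() shows valid values.
        solution_stack_name='64bit Amazon Linux running Node.js',
    )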

To get started, simply create a new Elastic Beanstalk application and select the Node platform:

You can configure all of the options for your Node.js environment from within Elastic Beanstalk:

To learn more about Elastic Beanstalk for Node.js, visit the AWS Elastic Beanstalk Developer Guide. The documentation also includes step-by-step guides for using the Express and Geddy frameworks with Elastic Beanstalk.

— Jeff;

 

Amazon EC2 Update – Virtual Private Clouds for Everyone!

If you use or plan to use Amazon EC2, you need to read this post!

History Lesson
Way back in the beginning (OK, 2006) we launched Amazon EC2. When you launched an instance we gave it a public IP address and a DNS hostname. You had the ability to create a Security Group for ingress filtering and attach it to the instance at launch time. Each instance could have just one public IP address, which could be an Elastic IP if desired. Later, we added private IP addresses and an internal DNS hostname to each instance. Let’s call this platform “EC2-Classic” (that will be on the quiz, so remember it).

In 2009 we introduced the Amazon Virtual Private Cloud, better known as the VPC. The VPC lets you create a virtual network of logically isolated EC2 instances and an optional VPN connection to your own datacenter.

In 2011 we announced a big upgrade to EC2’s networking features, with enhanced Security Groups (ingress and egress filtering and the ability to change membership on running instances), direct Internet connectivity, routing tables, and network ACLs to control the flow of traffic between subnets.

We also added lots of other features to VPC in the past couple of years including multiple IP addresses, multiple network interfaces, dedicated instances, and statically routed VPN connections.

The Power of VPC, the Simplicity of EC2
We want every EC2 user to be able to benefit from the advanced networking and other features of Amazon VPC that I outlined above. To enable this, starting soon, instances for new AWS customers (and existing customers launching in new Regions) will be launched into the “EC2-VPC” platform. We are currently in the process of enabling this feature, one Region at a time, starting with the Asia Pacific (Sydney) and South America (São Paulo) Regions. We expect these roll-outs to occur over the next several weeks. We will update this post in the EC2 forum each time we enable a Region.

You don’t need to create a VPC beforehand – simply launch EC2 instances or provision Elastic Load Balancers, RDS databases, or ElastiCache clusters as you would in EC2-Classic, and we’ll create a VPC for you at no extra charge. We’ll launch your resources into that VPC and assign each EC2 instance a public IP address. You can then start taking advantage of the features I mentioned earlier: assigning multiple IP addresses to an instance, changing security group membership on the fly, and adding egress filters to your security groups. These VPC features will be ready for you to use, but you need not do anything new or different until you decide to do so.

We refer to the automatically provisioned VPCs as default VPCs. They are designed to be compatible with your existing shell scripts, CloudFormation templates, AWS Elastic Beanstalk applications, and Auto Scaling configurations. You shouldn’t need to modify your code just because you’re launching into a default VPC.

Default VPCs for (Almost) Everyone
The default VPC features are available to new AWS customers and existing customers launching instances in a Region for the first time. If you’ve previously launched an EC2 instance in a Region or provisioned ELB, RDS, or ElastiCache resources in a Region, we won’t create a default VPC for you in that Region.

If you are an existing AWS customer and you want to start gaining experience with this new behavior, you have two options. You can create a new AWS account or you can pick a Region that you haven’t used (as defined above). You can see the set of available platforms in the AWS Management Console (this information is also available through the EC2 APIs and from the command line). Be sure to check the Supported Platforms and Default VPC values for your account to see how your account is configured in a specific Region.

You can determine if your account is configured for default VPC within a particular Region by glancing at the top right corner of the EC2 Dashboard in the AWS Management Console. Look for the Supported Platforms item.  EC2-VPC means your instances will be launched into Amazon VPC.

Here is what you will see if your AWS account is configured for EC2 Classic and EC2-VPC (without a default VPC):

You can also see the supported platforms and the default VPC values using the EC2 API and the Command Line tools.
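
From Python, a quick check might look like this with boto (describe_account_attributes only appears in newer boto releases, so treat the exact call and attribute fields as assumptions to verify):

    # Sketch: check whether your account/region supports EC2-Classic, EC2-VPC,
    # or both. Assumes a boto release with describe_account_attributes.
    import boto.ec2

    conn = boto.ec2.connect_to_region('ap-southeast-2')
    attrs = conn.describe_account_attributes(
        attribute_names=['supported-platforms', 'default-vpc'])
    for attr in attrs:
        print(attr.attribute_name + ': ' + str(attr.attribute_values))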

All Set Up
As I noted earlier in this post, we’ll create a default VPC for you when you perform certain actions in a Region. It will have the following components:

  • One default subnet per Availability Zone.
  • A default route table, preconfigured to send traffic from the default subnets to the Internet.
  • An Internet Gateway to allow traffic to flow to and from the Internet.

Each VPC will have its own private IP range (172.31.0.0/16 to be precise); each subnet will be a “/20” (4096 IP addresses, minus a few that are reserved for the VPC).
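
As a quick sanity check on that math, Python's standard ipaddress module (Python 3.3+) shows how the /16 divides into /20 subnets:

    # The default VPC's 172.31.0.0/16 split into /20 subnets (Python 3 stdlib).
    import ipaddress

    vpc = ipaddress.ip_network('172.31.0.0/16')
    subnets = list(vpc.subnets(new_prefix=20))
    print(len(subnets))                  # 16 possible /20 subnets
    print(subnets[0].num_addresses)      # 4096 addresses per /20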

EC2 instances created in the default VPC will also receive a public IP address (this turns out to be a very sensible default given the preconfigured default route table and Internet Gateway). This is a change from the existing VPC behavior, and is specified by a new PublicIP attribute on each subnet. We made this change so that we can support the EC2-Classic behavior in EC2-VPC. The PublicIP attribute can’t currently be set for existing subnets but we’ll consider allowing this in the future (let us know if you think that you would find this to be of use).

You can modify your default VPC as you see fit (e.g., creating or deleting subnets, creating or modifying route tables, adding VPN connections, etc.)  You can also create additional, nondefault VPCs just like the VPCs you can create today.

Once you are up and running in a VPC within an AWS Region, you’ll have access to all of the AWS services and instance types that are available in that Region (see the List of AWS Offerings by Region page for more information). This includes new and important services such as Amazon Redshift, AWS OpsWorks, and AWS Elastic Beanstalk.

New VPC Features
We’re also adding new features to VPC. These are available to all VPC users:

DNS Hostnames – All instances launched in a default VPC will have private and public DNS hostnames. DNS hostnames are disabled for existing VPCs, but can be enabled as needed. If you’re resolving a public hostname for another instance in the same VPC, it will resolve to the private IP of the target instance. If you’re resolving a public hostname for an instance outside of your VPC, it will resolve to the public IP address of that instance.
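
You can observe this split-horizon behavior with a one-liner; resolving the same public hostname from inside and outside the VPC returns different answers (the hostname below is a placeholder for one of your own instances' public DNS names):

    # Sketch: resolve an instance's public DNS name. Run from inside the VPC it
    # returns the private IP; from outside, the public IP.
    import socket

    hostname = 'ec2-203-0-113-25.compute-1.amazonaws.com'   # placeholder
    print(socket.gethostbyname(hostname))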

DNS Name Resolution – DNS resolution is enabled in all VPCs, but we’ve added the ability for you to disable use of the Amazon-provided DNS service in your VPC as needed.

ElastiCache – You can now create ElastiCache cache nodes within VPC (both default and nondefault VPCs).

RDS IP Addresses – RDS database instances in VPC can now be provisioned as Internet-facing or private-facing instances. Internet-facing RDS database instances have public IP addresses so that they can be accessed from EC2 Classic instances or on-premises systems. For more information on this feature, read the documentation on Amazon RDS and the Amazon Virtual Private Cloud Service.

Learning About VPC
To learn more about Amazon VPC, please consult the following resources:

We’re happy to give you the advanced features of Amazon VPC coupled with the simplicity of Amazon EC2. If you have any questions, the AWS Support Team is ready, willing, and able to help.

— Jeff;

 

Modulus – Scalable Hosting of Node.js Apps With MongoDB Support

Charlie from Modulus wrote in to tell me that their new platform is now available, and that it runs on AWS.

Modulus was designed to let you build and host Node.js applications in a scalable fashion. Node.js is a server-side JavaScript environment built around an event-driven, non-blocking I/O model. Web applications built on Node.js are able to handle a multitude of simultaneous client connections (e.g. HTTP requests) without forcing you to use threads or other explicit concurrency management models. Instead, Node.js uses a lightweight callback model which minimizes the per-connection memory and processing overhead.

Your Modulus application runs on one or more mini-servers which they call Servos. You can add or remove Servos from your application as needed as your load and your needs change; Modulus will automatically load balance traffic across your Servos, with built-in session affinity to make things run even more efficiently. Each Servo is allocated 396 MB of memory and 512 MB of swap space.

Your application can use MongoDB for database storage. Modulus includes a set of administrative and user management tools and also supports data export, all through their web portal. The triple-redundant database storage scales without bounds and you won’t need to take your application offline to scale up. Similarly, your application has access to a disk-based file system that is accessible from all of your Servos and scales as needed to accommodate your data.

All requests coming in to Modulus are tracked, stored, and made available for analysis so that you can locate bottlenecks and boost the efficiency and performance of your application. You can also access application log files.

You can communicate with code running in your user’s browser by using the built-in WebSocket support. Your application can also make use of other AWS services such as DynamoDB, SQS, and SNS.

You can host the application at the domain name of your choice, you can create and use your own SSL certificates, and you can define a set of environment variables that are made available to your code.

Modulus is ready to use now and you can sign up here to start out with a $25 usage credit. You can also use the Modulus pricing calculator to get a better idea of how much it will cost to host and run your application.

This looks really cool and I think it has a bright future. Give it a shot and let me know what you think!

— Jeff;

 

Reserved Instance Price Reduction for Amazon EC2

The AWS team is always exploring ways to reduce costs and to pass the savings along to our customers. We’re more than happy to continue this tradition with our latest price reduction.

Starting today, we are reducing prices for new EC2 Reserved Instances running Linux/UNIX, Red Hat Enterprise Linux, and SUSE Linux Enterprise Server by up to 27%. This reduction applies to the Standard (m1), Second-Generation Standard (m3), High-Memory (m2), and High-CPU (c1) instance families. As always, if you reserve more, you will save more. To be more specific, you will automatically receive additional savings when you have more than $250,000 in active upfront Reserved Instance fees.

With this price reduction, Reserved Instances will provide savings of up to 65% in comparison to On-Demand instances. Here are the price decreases by instance family and Region:

                                   Price Decrease (%)
    Region                         m1      m2      m3      c1
    US East (Northern Virginia)    13.0%   23.2%   13.2%   10.1%
    US West (Northern California)  13.3%   27.7%   13.3%   10.0%
    US West (Oregon)               13.0%   23.2%   13.2%   10.1%
    AWS GovCloud (US)              0.6%    13.9%   1.1%    2.1%
    Europe (Ireland)               13.3%   27.7%   13.5%   10.0%
    Asia Pacific (Singapore)       4.9%    19.8%   4.9%    2.4%
    Asia Pacific (Tokyo)           4.9%    20.8%   5.0%    2.2%
    Asia Pacific (Sydney)          4.9%    19.8%   4.9%    2.4%
    South America (São Paulo)      4.9%    21.1%   4.9%    0.0%

These new prices apply to all three Reserved Instance models (Light, Medium, and Heavy Utilization) for purchases made on or after March 5, 2013.

We recommend that you review your usage once a month to determine whether you should alter your Reserved Instance footprint by buying additional Reserved Instances or selling existing ones on the AWS Reserved Instance Marketplace. If you haven’t done that lately, now is a perfect opportunity: review your existing usage and determine whether this is the right time to purchase new Reserved Instances. Here are some general guidelines to help you choose the most economical model (also sketched in code after the list):

  • If your server is running less than 15% of the time, use an On-Demand instance.
  • If your server is running between 15% and 40% of the time, use a Light Utilization Reserved Instance.
  • If your server is running 40% to 80% of the time, use a Medium Utilization Reserved Instance.
  • If your server is running 80% to 100% of the time, use a Heavy Utilization Reserved Instance.
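
Restated as a quick Python helper (the thresholds come straight from the list above; your own break-even math may vary by instance type and Region):

    # The utilization guidelines above, expressed as a simple helper function.
    def recommend_pricing_model(utilization):
        """Suggest a purchasing model for a given fraction of time in use (0-1)."""
        if utilization < 0.15:
            return 'On-Demand instance'
        elif utilization < 0.40:
            return 'Light Utilization Reserved Instance'
        elif utilization < 0.80:
            return 'Medium Utilization Reserved Instance'
        else:
            return 'Heavy Utilization Reserved Instance'

    print(recommend_pricing_model(0.25))   # Light Utilization Reserved Instance
    print(recommend_pricing_model(0.95))   # Heavy Utilization Reserved Instance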

For more information on making the choice that is right for you, see my blog post on Additional Reserved Instance Options for Amazon EC2.

During the month of March, you can take advantage of a free trial of AWS Trusted Advisor to generate a personalized report on how you can optimize your bill by taking advantage of the new, lower Reserved Instance prices. For more information about Trusted Advisor, see my post about the AWS Trusted Advisor Update + Free Trial.

To learn more about this change and other Amazon EC2 pricing options, please visit the Amazon EC2 Pricing page and the Amazon EC2 Reserved Instance page.

— Jeff;

Available Now: Beta release of AWS Diagnostics for Microsoft Windows Server

Over the past few years, we have seen tremendous adoption of Microsoft Windows Server in AWS.

Customers such as the Department of Treasury, the United States Tennis Association, and Lionsgate Film and Entertainment are building and running interesting Windows Server solutions on AWS. To further our efforts to make AWS the best place to run Windows Server and Windows Server workloads, we are happy to announce today the beta release of AWS Diagnostics for Microsoft Windows Server.

AWS Diagnostics for Microsoft Windows Server addresses a common customer request to make the intersection between AWS and Windows Server easier for customers to analyze and troubleshoot. For example, customers may have one setting for their AWS security groups that allows access to certain Windows Server applications, but inside of their Windows Server instances, the built-in Windows firewall may deny that access. Rather than having the customer track down the cause of the issue, the diagnostics tool collects the relevant information from Windows Server and AWS, analyzes it, and suggests troubleshooting steps and fixes to the customer.

The diagnostics tool can work on running Windows Server instances. You can also attach your Windows Server EBS volumes to an existing instance and the diagnostics tool will collect the relevant logs for troubleshooting Windows Server from the EBS volume.  In the end, we want to help customers spend more time using, rather than troubleshooting, their deployments.
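
For the offline case, attaching the volume is a single call from boto (all IDs and the device name below are placeholders):

    # Sketch: attach a Windows Server boot volume to a running instance so the
    # diagnostics tool can read its logs. All IDs below are placeholders.
    import boto.ec2

    conn = boto.ec2.connect_to_region('us-east-1')
    conn.attach_volume('vol-12345678', 'i-12345678', '/dev/sdf')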

To use the diagnostics tool, please visit http://aws.amazon.com/windows/awsdiagnostics. There you will find more information about the feature set and documentation about how to use the diagnostics tool.

As this is a beta release, please provide feedback on how we can make this tool more useful for you. You can fill out a survey here.

To help get you started, we have created a short video that shows the tool in action troubleshooting a Windows Server instance running in AWS.

Shankar Sivadasan, Senior Product Manager