Category: Compute


C3 Instance Update

As you know, we launched our new compute optimized instance family (C3) a few weeks ago, and wow, are we seeing unprecedented demand across all sizes and all Regions! As one of our product managers just told me, these instances are simply “fast in every dimension.” They have a high-performance CPU, matched with SSD-based instance storage and EC2’s new enhanced networking capabilities, all at a very affordable price.

We believed that these instances would be popular, but we would not have imagined just how popular they’ve been. The EC2 team took a look back and found that growth in C3 usage to date has been higher than they have seen for any other newly introduced instance type. We’re not talking about some small percentage difference here. It took just two weeks for C3 usage to exceed the level that the former fastest-growing instance type achieved in twenty-two weeks! This is why some of you are not getting the C3 capacity you’re asking for when you request it.

In the face of this growth, we have enlarged and expedited our orders for additional capacity across all Regions. We are working non-stop to get it in-house, and hope to be back to more normal levels of capacity in the next couple of weeks.

— Jeff;

Background Task Handling for AWS Elastic Beanstalk

My colleague Abhishek Singh sent along a guest post to introduce a really important new feature for AWS Elastic Beanstalk.

— Jeff;


You can now launch Worker Tier environments in Elastic Beanstalk.

These environments are optimized to process application background tasks at any scale. Worker tiers complement the existing web tiers and are ideal for time-consuming tasks such as report generation, database cleanup, and email notification.

For example, to send confirmation emails from your application, you can now simply queue a task to later send the email while your application code immediately proceeds to render the rest of your webpage. A worker tier in your environment will later pick up the task and send the email in the background.

A worker is simply another HTTP request handler that Beanstalk invokes with messages buffered using the Amazon Simple Queue Service (SQS). Elastic Beanstalk takes care of creating and managing the queue if one isn’t provided. Messages put in the queue are forwarded via HTTP POST to a configurable URL on the local host. You can develop your worker code using any language supported by Elastic Beanstalk in a Linux environment: PHP, Python, Ruby, Java, or Node.js.

You can create a single instance or a load balanced and auto-scaled worker tier that will scale based on the workload. With worker tiers, you can focus on writing the actual code that does the work. You don’t have to learn any new APIs and you don’t have to manage any servers. For more information, read our new documentation on the Environment Tiers.

Use Case – Sending Confirmation Emails
Imagine you’re a startup with a game-changing idea or product, and you’d like to gauge customer interest.

You create a simple web application that will allow potential customers to register their email address to be notified of updates. As with most businesses, you decide that once the customer has provided their email address you will send them a confirmation email informing them that their registration was successful. By using an Elastic Beanstalk worker tier to validate the email address and to generate and send the confirmation email, you can make your front-end application non-blocking and provide customers with a more responsive user experience.

The remainder of the post will walk you through creating a worker tier and deploying a sample Python worker application that can be used to send emails asynchronously. If you do not have a front-end application, you can download a Python-based front-end application from the AWS Application Management Blog.

We’ll use the Amazon Simple Email Service (SES). Begin by adding a verified sender email address as follows:

  1. Log in to the SES Management Console and select Email Addresses from the left navigation bar.
  2. Click on Verify a New Email Address.
  3. Type in the email address you want to use to send emails and click Verify This Email Address. You will receive an email at the email address provided with a link to verify the email address. Once you have verified the email address, you can use the email address as the SOURCE_EMAIL_ADDRESS.

 Next, download and customize the sample worker tier application:

  1. Download the worker tier sample application source bundle and extract the files into a folder on your desktop.
  2. Browse to the folder and edit the line that reads SOURCE_EMAIL_ADDRESS = 'nobody@amazon.com' in the default_config.py file so that it refers to the verified sender email address, then save the file.
  3. Select all of the files in the folder and add them to a zip archive. For more details on creating a source bundle to upload to Elastic Beanstalk, please read Creating an Application Source Bundle.

Now you need to create an IAM Role for the worker tier. Here’s what you need to do:

  1. Log in to the IAM Management Console and select Roles on the left navigation bar.
  2. Click the Create New Role button to create a new role.
  3. Type in WorkerTierRole for the Role Name.
  4. Select AWS Service Roles and select Amazon EC2.
  5. Select Custom Policy and click Select.
  6. Type in WorkerTierRole for the Policy Name, paste the following snippet as the Policy Document, and click Continue:
    {
      "Version": "2012-10-17",
      "Statement": [
        {
          "Effect": "Allow",
          "Action":   [ "ses:SendEmail" ],
          "Resource": [ "*" ]
        },
        {
          "Effect": "Allow",
          "Action":   [ "sqs:*" ],
          "Resource": [ "*" ]
        },
        {
          "Effect": "Allow",
          "Action":   [ "cloudwatch:PutMetricData" ],
          "Resource": [ "*" ]
        }
      ]
    }

  7. Click Create Role to create the role.

You are now ready to create the Elastic Beanstalk application which will host the worker tier. Follow these steps:

  1. Log in to the AWS Elastic Beanstalk Web Console and click on Create New Application.
  2. Enter the application name and description and click Create.
  3. Select Worker from the Environment tier drop-down, Python for the Predefined configuration, and Single instance for the Environment type. Click Continue.
  4. Select Upload your own and Browse to the source bundle you created previously.
  5. Enter the environment name and description and click Continue.
  6. On the Additional Resources page, leave all options unselected and click Continue.
  7. On the Configuration Details page, select the WorkerTierRole that you created earlier from the Instance profile drop down and click Continue.
  8. On the Worker Details page, modify the HTTP path to /customer-registered and click Continue.
  9. Review the configuration and click Create.

Once the environment is created and its health is reported as Green, click on View Queue to bring up the SQS Management Console:

Then click Queue Actions and select Send a Message.

Type in a message in the following format, then click Send Message to send a confirmation email:

{
  "name"  : "John Smith",
  "email" : "john@example.com"
}
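On the worker side, the handler that Elastic Beanstalk POSTs this message to just needs to parse the JSON body and hand the result to SES. Here is an illustrative sketch (function names are ours, not taken from the sample application); the status code matters, because a 200 tells Beanstalk to delete the message from the queue, while anything else causes a retry:

```python
import json

def parse_task(body):
    """Parse the JSON message that Elastic Beanstalk POSTs to the worker."""
    task = json.loads(body)
    return task["name"], task["email"]

def build_email(name):
    """Render the confirmation email for a newly registered customer."""
    subject = "Registration confirmed"
    text = "Hi %s, thanks for registering! We'll keep you posted." % name
    return subject, text

def handle_post(body):
    """Worker HTTP handler: 200 = message handled (deleted from the queue),
    500 = failure (message will be retried later)."""
    try:
        name, email = parse_task(body)
        subject, text = build_email(name)
        # In the real worker, this is where you would send via SES, e.g.
        # with boto: conn.send_email(SOURCE_EMAIL_ADDRESS, subject, text, [email])
        return 200
    except (ValueError, KeyError):
        return 500
```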

This new feature is available now and you can start using it today.

— Abhishek Singh, Senior Product Manager

 

AWS Management Console – Auto Scaling Support

Amazon EC2’s Auto Scaling feature gives you the power to build systems that adapt to a workload that varies over time. You can scale out to meet peak demand, and then scale in later to minimize costs.

Today we are adding Auto Scaling support to the AWS Management Console. You can now create launch configurations and Auto Scaling groups with point-and-click ease, and you can bid for Spot Instances when scaling out. You can also initiate scaling operations from the console and you can manage the associated notifications.

Let’s take a tour of the console’s new support for Auto Scaling. The welcome page outlines the benefits and the major steps:

The launch configuration specifies the Amazon Machine Image (AMI), EC2 instance type, EBS storage, security group, and other details needed to launch new instances as part of the scale-up process. The console leads you through the necessary steps, beginning with the selection of the desired AMI:

With the AMI chosen, your next task is to choose the EC2 instance type that will be launched when scaling out:

Then you provide a name for your launch configuration; you can also specify an IAM role, enable CloudWatch detailed monitoring, and request EBS-optimized instances. You can even choose a purchasing option (On-Demand or Spot).

If you decide to use Spot Instances, the console will show you the current price for the selected instance type in each Availability Zone. You can use this information to help you make an informed choice when you enter the maximum price that you want to pay to launch a Spot instance:

You can also request the creation of new EBS disk storage volumes as part of the launch. These volumes can be deleted on termination, or they can be left around. The first option is perfect if you use the EBS volumes for temporary storage; the second would be appropriate if you generate log files on the instance and need to move them to long-term storage after the instance has been terminated.
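In API terms, this choice is a single flag per volume in the block device mapping that the launch configuration passes along. A sketch of the two variants, expressed as RunInstances-style request parameters (device names and sizes here are examples):

```python
# Scratch volume: removed automatically when the instance terminates.
scratch_volume = {
    "DeviceName": "/dev/sdf",
    "Ebs": {"VolumeSize": 100, "DeleteOnTermination": True},
}

# Log volume: survives termination so the files can be collected later.
log_volume = {
    "DeviceName": "/dev/sdg",
    "Ebs": {"VolumeSize": 20, "DeleteOnTermination": False},
}

block_device_mappings = [scratch_volume, log_volume]
```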

You can choose to attach an existing Security Group to all newly launched instances, or you can create and customize a new one.

With all of the details specified, now is the time to review them and to create the launch configuration:

As you probably know, the launch configuration provides Auto Scaling with all of the information needed to launch and terminate EC2 instances as part of scaling operations, but it doesn’t actually launch any instances. To do that you need to create an Auto Scaling group. Click the following button to do this:

The console will lead you through the steps needed to create your Auto Scaling group. You can set the initial size (number of EC2 instances) of the group, along with the desired minimum and maximum size. You can also choose to launch the instances into a particular Virtual Private Cloud (VPC), and you can select the desired Availability Zones.

If you are using the instances to handle incoming HTTP traffic, you can also choose to associate the Auto Scaling group with an Elastic Load Balancer:

The next step is optional. If you are simply using the Auto Scaling group to ensure that a particular number of instances are up and running, you can skip it. If you want the group to vary in size in response to a changing load or to other factors, then you need to set up scaling policies.

Groups that vary in size must have a Scale Out policy and a Scale In policy. These policies are triggered by Amazon CloudWatch alarms. For example, you can activate the policies when the average CPU load (across the Auto Scaling group) rises above or drops below certain thresholds. Or, you can activate them in response to changes in the amount of network traffic to or from the instances in the group. You can even create custom CloudWatch metrics such as “Requests Per Second” and use them to initiate scaling operations.
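The alarm/policy pairing amounts to a simple decision rule. Here is a minimal sketch of that logic (the thresholds, step sizes, and limits are arbitrary examples, not console defaults):

```python
def scaling_action(avg_cpu_percent, scale_out_at=70.0, scale_in_at=30.0):
    """Mimic the CloudWatch alarm logic: breach the high threshold and the
    Scale Out policy fires; breach the low threshold and Scale In fires."""
    if avg_cpu_percent > scale_out_at:
        return "scale_out"   # e.g. add 2 instances
    if avg_cpu_percent < scale_in_at:
        return "scale_in"    # e.g. remove 1 instance
    return None              # within the normal band: no change

def apply_action(current_size, action, out_step=2, in_step=1,
                 min_size=1, max_size=10):
    """Apply a policy's instance delta, clamped to the group's
    configured minimum and maximum sizes."""
    if action == "scale_out":
        current_size += out_step
    elif action == "scale_in":
        current_size -= in_step
    return max(min_size, min(max_size, current_size))
```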

As you can see, you can choose the actions to be taken, along with the associated quantities (number of EC2 instances) for the scale out and scale in activities:

Each Auto Scaling activity generates an Amazon SNS notification; you can route these to an existing topic or you can create a new topic and subscribe it to one or more email addresses from the console:

After you create the Auto Scaling group, you can watch the scaling history using the console:

You can also initiate scale out and scale in operations:

This new feature is available in all of the public AWS Regions and you can start using it today. Give it a try, and let me know what you think.

— Jeff;

Amazon EC2 Resource-Level Permissions for RunInstances

Derek Lyon sent me a really nice guest post to introduce an important new EC2 feature!

— Jeff;


I am happy to announce that Amazon EC2 now supports resource-level permissions for the RunInstances API. This release enables you to set fine-grained controls over the AMIs, Snapshots, Subnets, and other resources that can be referenced when creating instances, as well as over the types of instances and volumes that users can create when using the RunInstances API.

This release is part of a larger series of releases enabling resource-level permissions for Amazon EC2, so let’s start by taking a step back and looking at some of the features that we already support.

EC2 Resource-Level Permissions So Far
In July, we announced the availability of Resource-level Permissions for Amazon EC2. Using the initial set of APIs along with resource-level permissions, you could control which users are allowed to do things like start, stop, reboot, and terminate specific instances, or attach, detach or delete specific volumes.

Since then, we have continued to add support for additional APIs, bringing the total to 19 EC2 APIs with resource-level permission support prior to today’s release. The additional functionality allows you to control things like which users can modify or delete specific Security Groups, Route Tables, Network ACLs, Internet Gateways, Customer Gateways, or DHCP Options Sets.

We also provided the ability to set permissions based on the tags associated with resources. This in turn enabled you to construct policies that would, for example, allow a user to modify resources with the tag environment=development on them, but not resources with the tag environment=production on them.

We have also provided a series of debugging tools, which enable you to test policies by making DryRun API calls and to view additional information about authorization errors using a new STS API, DecodeAuthorizationMessage.

Resource-level Permissions for RunInstances
Using EC2 Resource-level Permissions for RunInstances, you now have the ability to control both which resources can be referenced and used by a call to RunInstances, and which resources can be created as part of a call to RunInstances. This enables you to control the use of the following types of items:

  • The AMI used to run the instance
  • The Subnet and VPC where the instance will be located
  • The Availability Zone and Region where the instance and other resources will be created
  • Any Snapshots used to create additional volumes
  • The types of instances that can be created
  • The types and sizes of any EBS volumes created

You can now use resource-level permissions to limit which AMIs a user is permitted to use when running instances. In most cases, you will want to start by tagging the AMIs that you want to whitelist for your users with an appropriate tag, such as whitelist=true. (As part of the whitelisting process, you will also want to limit which users have permission to use the tagging APIs; otherwise, users can add or remove this tag.) Next, you can construct an IAM policy for the user that only allows them to use an AMI for running instances if it has your whitelist tag on it. This policy might look like this:

{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Action": "ec2:RunInstances",
      "Resource": [
        "arn:aws:ec2:region::image/ami-*"
      ],
      "Condition": {
        "StringEquals": {
          "ec2:ResourceTag/whitelist": "true"
        }
      }
    },
    {
      "Effect": "Allow",
      "Action": "ec2:RunInstances",
      "Resource": [
        "arn:aws:ec2:region:account:instance/*",
        "arn:aws:ec2:region:account:security-group/sg-1a2b3c4d"
      ]
    }
  ]
}

Or, if you want to grant a user the ability to run instances in a certain subnet, you can do this with a policy that looks like this:

{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Action": "ec2:RunInstances",
      "Resource": [
        "arn:aws:ec2:region:account:subnet/subnet-1a2b3c4d"
      ]
    },
    {
      "Effect": "Allow",
      "Action": "ec2:RunInstances",
      "Resource": [
        "arn:aws:ec2:region:account:instance/*",
        "arn:aws:ec2:region::image/*",
        "arn:aws:ec2:region:account:security-group/sg-1a2b3c4d"
      ]
    }
  ]
}

If you want to set truly fine-grained permissions, you can construct policies that combine these elements. This enables you to set fine-grained policies that do things like allow a user to run only m3.xlarge instances in a certain Subnet (e.g. subnet-1a2b3c4d), using a particular Image (e.g. ami-5a6b7c8d) and a certain Security Group (e.g. sg-11a22b33). The applications for these types of policies are far-reaching and we are excited to see what you do with them.
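Here is a sketch of what such a combined policy could look like, using the placeholder IDs above and the ec2:InstanceType condition key. Treat it as a starting point rather than a complete policy; depending on your setup, a VPC launch may need additional resource ARNs (for example, network interfaces and volumes):

```json
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Action": "ec2:RunInstances",
      "Resource": [
        "arn:aws:ec2:region:account:instance/*"
      ],
      "Condition": {
        "StringEquals": {
          "ec2:InstanceType": "m3.xlarge"
        }
      }
    },
    {
      "Effect": "Allow",
      "Action": "ec2:RunInstances",
      "Resource": [
        "arn:aws:ec2:region::image/ami-5a6b7c8d",
        "arn:aws:ec2:region:account:subnet/subnet-1a2b3c4d",
        "arn:aws:ec2:region:account:security-group/sg-11a22b33"
      ]
    }
  ]
}
```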

Because permissions are applied at the API level, any users that the IAM policy is applied to will be restricted by the policy you set, including users who run instances using the AWS Management Console, the AWS CLI, or AWS SDKs.

You can find a complete list of the resource types that you can write policies for in the Permissions section of the EC2 API Reference. You can also find a series of sample policies and use cases in the IAM Policies section of the EC2 User Guide.

— Derek Lyon, Principal Product Manager

 

A New Generation of EC2 Instances for Compute-Intensive Workloads

Many AWS customers run CPU-bound, compute-intensive workloads on Amazon EC2, often using parallel processing frameworks such as Hadoop to distribute work and collect results. This includes batch data processing, analytics, high-performance scientific computing, 3D rendering, engineering, and simulation.

To date these needs have been met by the existing members of our compute-optimized instance families — the C1 and CC2 instance types. When compared to EC2’s general purpose instance types, the instances in this family have a higher ratio of compute power to memory.

Hello C3
Today we are introducing the C3 family of instances. Compared to C1 instances, the C3 instances provide faster processors, approximately double the memory per vCPU, and SSD-based instance storage.

As the newest members of our lineup of compute-optimized instances, the C3s were designed to deliver high performance at an economical price. The C3 instances feature per-core performance that bests that provided by any of the other EC2 instance types, at a price-performance ratio that will make them a great fit for many compute-intensive workloads.

Use the Cores
Each virtual core (vCPU) on a C3 instance type is a hardware Hyper-Thread on a 2.8 GHz Intel Xeon E5-2680v2 (Ivy Bridge) processor. There are five members of the C3 family:

Instance Name | vCPU Count | Total ECU | RAM      | Local Storage  | Hourly On-Demand
c3.large      | 2          | 7         | 3.75 GiB | 2 x 16 GB SSD  | $0.15
c3.xlarge     | 4          | 14        | 7 GiB    | 2 x 40 GB SSD  | $0.30
c3.2xlarge    | 8          | 28        | 15 GiB   | 2 x 80 GB SSD  | $0.60
c3.4xlarge    | 16         | 55        | 30 GiB   | 2 x 160 GB SSD | $1.20
c3.8xlarge    | 32         | 108       | 60 GiB   | 2 x 320 GB SSD | $2.40

Prices are for Linux instances in US East (Northern Virginia).

Protocols
If you launch C3 instances inside of a Virtual Private Cloud and you use an HVM AMI with the proper driver installed, you will also get the benefit of EC2’s new enhanced networking. You will see significantly higher performance (in terms of packets per second), much lower latency, and lower jitter.

Update: Read the documentation on Enabling Enhanced Networking on Linux Instances in a VPC to learn how to do this.

Getting Technical
As you may have noticed, we are specifying the underlying processor type for new instance types. Armed with this information, you can choose to make use of specialized instructions or to tune your application to exploit other characteristics (e.g. cache behavior) of the actual processor. For example, the processor in the C3 instances supports Intel’s AVX (Advanced Vector Extensions) for efficient processing of vector-oriented data in 256-bit chunks.
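On a Linux instance, one quick way to confirm that the guest actually sees these instructions is to look for the avx flag in /proc/cpuinfo. A small sketch of that check:

```python
def has_cpu_flag(cpuinfo_text, flag):
    """Scan a /proc/cpuinfo dump for a CPU feature flag such as 'avx'."""
    for line in cpuinfo_text.splitlines():
        if line.startswith("flags"):
            # The flags line looks like: "flags\t\t: fpu vme ... avx ..."
            return flag in line.split(":", 1)[1].split()
    return False

# On an actual C3 instance you would call:
#   has_cpu_flag(open("/proc/cpuinfo").read(), "avx")
```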

Some Numbers
In order to measure the real-world performance of the new C3 instances, we launched a 26,496-core cluster and evaluated it against the most recent Top500 scores. This cluster delivered an Rmax of 484.18 teraflops and would land at position 56 in the June 2013 list. Notably, this is over twice the performance of the last cluster that we submitted to Top500. We also built an 8,192-core cluster, which delivered an Rmax of 163.9 teraflops, putting it at position 210 on the Top500 list.

Launch One Now
The C3 instances are available today in the US East (Northern Virginia), US West (Oregon), EU (Ireland), Asia Pacific (Singapore), Asia Pacific (Tokyo), and Asia Pacific (Sydney) Regions. You can choose to launch C3 instances as On-Demand, Reserved Instances, or Spot Instances.

— Jeff;

Coming Soon – The I2 Instance Type – High I/O Performance Via SSD

Earlier today, Amazon.com CTO Werner Vogels announced the upcoming I2 instance type from the main stage of AWS re:Invent!

The I2 instances are optimized for high performance random I/O. They are a great fit for transactional systems and NoSQL databases like Cassandra and MongoDB.

The instances use 2.5 GHz Intel Xeon E5-2670v2 processors with Turbo mode enabled. They also benefit from EC2’s new enhanced networking. You will see significantly higher performance (in terms of packets per second), much lower latency, and lower jitter when you launch these instances from within a Virtual Private Cloud (VPC).

We’ll be releasing more information at launch time. Here are the preliminary specs to tide you over until then:

Instance Name | vCPU Count | RAM      | Instance Storage (SSD)
i2.large      | 2          | 15 GiB   | 1 x 360 GB
i2.xlarge     | 4          | 30.5 GiB | 1 x 720 GB
i2.2xlarge    | 8          | 61 GiB   | 2 x 720 GB
i2.4xlarge    | 16         | 122 GiB  | 4 x 720 GB
i2.8xlarge    | 32         | 244 GiB  | 8 x 720 GB

The i2.8xlarge instances will be able to deliver 350,000 random read IOPS and 320,000 random write IOPS. Numbers for the other instance types will be proportionally smaller, based on the number of SSD devices associated with the instance.
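Since performance scales with the number of SSD devices, you can sketch rough estimates for the smaller sizes from the i2.8xlarge figures. These are back-of-the-envelope numbers derived from the announcement, not published specifications:

```python
# Preliminary i2.8xlarge figures from the announcement (8 SSDs).
READ_IOPS_8XL = 350000
WRITE_IOPS_8XL = 320000
DEVICES_8XL = 8

def estimated_iops(device_count):
    """Scale the i2.8xlarge numbers down proportionally by SSD count."""
    scale = device_count / float(DEVICES_8XL)
    return int(READ_IOPS_8XL * scale), int(WRITE_IOPS_8XL * scale)

# e.g. i2.2xlarge has 2 SSDs:
# estimated_iops(2) -> (87500, 80000)
```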

Stay tuned for more information about the I2 instances.

— Jeff;

Amazon WorkSpaces – Desktop Computing in the Cloud

Once upon a time, enterprises had a straightforward way to give each employee access to a desktop computer. New employees would join the organization and receive a standard-issue desktop, preconfigured with a common set of tools and applications. This one-size-fits-all model was acceptable in the early days of personal computing, but not anymore.

Enterprise IT has been engaged in a balancing act in order to meet the needs of a diverse and enlightened user base. They must protect proprietary corporate data while giving employees the ability to work whenever and wherever they want, while using the desktop or mobile device of their choice.

Our new Amazon WorkSpaces product gives Enterprise IT the power to meet this challenge head-on. You, the IT professional, can now provision a desktop computing experience in the cloud for your users. Your users can access the applications, documents, and intranet resources that they need to get their job done, all from the comfort of their desktop computer, laptop, iPad, or Android tablet.

Let’s take a look at the WorkSpaces feature set and use cases. We’ll also take a look at it from the viewpoint of an IT professional, and then we’ll switch roles and see what it looks like from the user’s point of view.

WorkSpaces Feature Set
Amazon WorkSpaces provides, as I have already mentioned, a desktop computing experience in the cloud. It is easy to provision and maintain, and can be accessed from a wide variety of client devices.

Each WorkSpaces user can install the client application on the device of their choice. After a quick download, they have access to a complete Windows 7 experience in the cloud, with persistent storage, bundled utilities and productivity applications, and access to files and other resources on the corporate intranet.

The IT professional chooses to supply each user with a given WorkSpaces Bundle. There are four standard bundles. Here are the hardware specifications for each one:

  • Standard – 1 vCPU, 3.75 GiB of memory, and 50 GB of persistent user storage.
  • Standard Plus – 1 vCPU, 3.75 GiB of memory, and 50 GB of persistent user storage.
  • Performance – 2 vCPU, 7.5 GiB of memory, and 100 GB of persistent user storage.
  • Performance Plus – 2 vCPU, 7.5 GiB of memory, and 100 GB of persistent user storage.

All of the bundles include Adobe Reader, Adobe Flash, Firefox, Internet Explorer 9, 7-Zip, the Java Runtime Environment (JRE), and other utilities. The Standard Plus and Performance Plus bundles also include Microsoft Office Professional and Trend Micro Worry-Free Business Security Services. The bundles can be augmented and customized by the IT professional in order to meet the needs of specific users.

Each user has access to between 50 and 100 GB of persistent AWS storage from their WorkSpace (the precise amount depends on the bundle that was chosen for the user). The persistent storage is backed up to Amazon S3 on a regular basis, where it is stored with 99.999999999% durability and 99.99% availability over the course of a year.

Pricing is on a per-user, per-month basis, as follows:

  • Standard – $35 / user / month.
  • Standard Plus – $50 / user / month.
  • Performance – $60 / user / month.
  • Performance Plus – $75 / user / month.

WorkSpaces Use Cases
I believe that you will find many ways to put WorkSpaces to use within your organization after you have spent a little bit of time experimenting with it. Here are a few ideas to get you started:

Mobile Workers – Allow users to access their desktops from iPads, Kindles, and Android tablets so that they can be productive while connected and on-the-go.

Secure WorkSpaces – You can meet stringent compliance requirements and still deliver a managed desktop experience to your users.

Students, Seasonal, and Temporary Workers – Provision WorkSpaces on an as-needed basis so that students, seasonal workers, temporary workers, and consultants can access the applications that they need, then simply terminate the WorkSpace when they leave.

Developers – Provide local and remote developers with the tools that they need to have in order to be productive, while ensuring that source code and other intellectual property are protected.

WorkSpaces for the IT Professional
Let’s take a look at Amazon WorkSpaces through the eyes of an IT professional tasked with providing cloud desktops to some new employees. All of the necessary tasks can be performed from the WorkSpaces Console:

Start by choosing a WorkSpaces profile:

Add new users by name and email address:

You can provision up to five WorkSpaces at a time. They will be provisioned in less than 20 minutes and invitations will be sent to each user via email.

As the administrator, you can manage all of your organization’s WorkSpaces through the console:

WorkSpaces for the User
Ok, now let’s turn the tables and take a look at Amazon WorkSpaces from the user’s point of view!

Let’s say that your administrator has gone through the steps that I outlined above and that a new WorkSpace has been provisioned for you. You will receive an email message like this:

The email will provide you with a registration code and a link to the client download. Download the client to your device, enter the registration code, and start using your WorkSpace:

WorkSpaces delivers a Windows 7 desktop experience:

Persistent storage for the WorkSpace is mapped to the D: drive:

WorkSpaces can also be accessed from iPads, Kindles, and Android tablets. Here’s the desktop experience on the Kindle:

 

Behind the Scenes
If you already know a thing or two about AWS, you may be wondering what happens when you start to use Amazon WorkSpaces.

A Virtual Private Cloud (VPC) is created as part of the setup process. The VPC can be connected to an on-premises network using a secure VPN connection to allow access to an existing Active Directory and other intranet resources.

WorkSpaces run on Amazon EC2 instances hosted within the VPC. Communication between EC2 and the client is managed by the PCoIP (PC-over-IP) protocol. The client connection must allow TCP and UDP connections on port 4172, along with TCP connections on port 443.

Persistent storage is backed up to Amazon S3 on a regular and frequent basis.

Preview WorkSpaces
You can register now in order to get access to the WorkSpaces preview as soon as it is available.

— Jeff;

New AWS Test Drives – Big Data, Security, Microsoft, and More

The AWS Test Drives give you direct and easy access to a wide variety of enterprise solution stacks, all hosted on the AWS Cloud. These labs are available to you to run for a half-day evaluation period at no charge. Each test drive includes a guided video tour and a lab manual, so you will be up and running in a matter of minutes. 

New at re:Invent
Our existing roster includes labs for products from Oracle, SAP, Red Hat, Alfresco, and Trend Micro.

We are launching over 30 new Test Drive labs at re:Invent including offerings from Oracle, Microsoft, Infor, Sophos, Accenture, MicroStrategy, and Splunk. You can use these Test Drive labs to learn more about the newest and most sophisticated big data, security, and enterprise products, courtesy of our APN Consulting and Technology partners.

Business Intelligence with QlikView
Are you interested in learning more about Business Intelligence? Check out the QlikView lab from IPC-Global — you will have the QlikView BI client up and running within 10 minutes. You’ll be drilling into data and visualizing the results before you know it. This Test Drive is accessed as a Remote Desktop session and looks like this:

Test Drive Microsoft Applications
The Test Drive program includes an entire section devoted to Microsoft products. There are new labs for SQL Server, SharePoint, and Exchange 2013 from Apparatus, 2nd Watch, InfoReliance, Booz Allen Hamilton, SPAN Systems, and Megalogix.

The Exchange Lab from Apparatus allows you to deploy a High Availability (HA) configuration across three AWS Availability Zones and be sending and receiving email within 30 minutes.

Here’s the architecture that you will evaluate when you launch the Apparatus HA Exchange test lab:

You can log in to Exchange and create users:

Test Drive F5 BIG-IP
The F5 BIG-IP Test Drive deploys three SharePoint servers in a High Availability configuration behind an F5 BIG-IP Local Traffic Manager (LTM):

More Test Drives
I have talked about just a few of the dozens of Test Drive labs that are now available for you to use. Be sure to check out our Big Data, Microsoft, and Security labs to see our newest labs.

— Jeff;

AWS Management Console – AWS Marketplace Integration

Have you used the AWS Marketplace? You can find, buy, and start using over 800 popular AMIs (Amazon Machine Images) in 24 categories using the Marketplace, with more products added every week.

Today we are making the AWS Marketplace even easier to use by making it accessible from within the EC2 tab of the AWS Management Console. As part of this work, we have also improved the console’s Launch Instance Wizard. Read on to learn more about both of these advances.

Marketplace Integration
You can now search or browse the Marketplace directly from the console’s Launch Instance button by selecting the AWS Marketplace tab. You can browse through all 24 categories without having to leave the console:

You can browse through individual categories (I selected “Business Software”):

You can also enter a search term (in this case I searched for “Analytics”):

Once you find the desired package, the console will show you the pricing, system requirements, ratings, and other important information:

The console will use information supplied by the software vendor to recommend an instance type and create a new Security Group.

You can then proceed to adjust (if necessary) the instance type, finalize the other details, and launch the product, all within the AWS Management Console.

Launch Instance Wizard Improvements
We have improved the Console’s Launch Instance Wizard to make it even easier for you to launch EC2 instances. Searching for public and private AMIs is now instantaneous and the process of choosing instance types and security groups has been simplified. You can now copy rules from an existing security group to a new one, and there’s an auto-complete feature to streamline the process of tagging instances. Finding snapshots and creating volumes from them as part of the launch process is now faster and easier.

The console now groups related EC2 instance types together to allow you to choose the most suitable one more efficiently:

When you start to type a tag name, a popup will offer to complete it for you:

You can select an existing security group and review the rules within it. You can also copy an existing group to a new one with just one click:

You can now search for EBS volume snapshots (including Public Data Sets) when you add storage as part of an instance launch:

As part of this work we have been modernizing and fine-tuning the overall look and feel of the EC2 console. The remaining pages will be updated in the near future. If you have any suggestions, problems, or complaints, please feel free to leave a comment on this post or in the EC2 forum.

— Jeff;

Store and Process Large Sequential Data Sets with EC2 Cluster Instances

To date, we haven’t been very vocal about the performance that is possible when you combine the EC2 Cluster Instances and EBS. I’d like to change that today!

The EC2 Cluster Instance types (CC2, CR1, CG1, HI1, and HS1) support high-performance (10 gigabit) networking between instances and Elastic Block Store (EBS) volumes. Instances of this type make ideal hosts for high-performance relational and NoSQL databases. They are also great for processing workloads that require high throughput, sequential access to large amounts of data.

You can use EBS Provisioned IOPS volumes to create storage arrays that store up to tens of terabytes and provide up to 48,000 16 kilobyte IOPS when accessed from instances of the types listed above. This is equivalent to 768 megabytes per second of data transfer. You can create storage arrays that span multiple EBS volumes by using mdadm. You can use other parallel I/O techniques as well, as is most appropriate for your application and your database.
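The arithmetic behind those numbers is straightforward. A sketch of the throughput calculation for a striped array of Provisioned IOPS volumes, using the post’s 16 KB I/O size and decimal megabytes (1 MB = 1000 KB):

```python
def array_throughput_mb(total_iops, io_size_kb=16):
    """Aggregate throughput in MB/s for an array delivering total_iops
    at a given I/O size (decimal MB, matching the figures in the post)."""
    return total_iops * io_size_kb / 1000.0

def volumes_needed(total_iops, per_volume_iops=4000):
    """How many Provisioned IOPS volumes to stripe together (e.g. with
    mdadm) to reach the desired aggregate IOPS."""
    return -(-total_iops // per_volume_iops)  # ceiling division

# 48,000 IOPS at 16 KB -> 768 MB/s, from twelve 4,000-IOPS volumes.
```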

The CloudWatch graph below shows twelve EBS volumes in action on a CC2 instance, each provisioned for 4,000 IOPS and delivering a consistent 64 megabytes per second per volume for the duration of the test:

In order to achieve this throughput, the volumes were pre-warmed and optimized for queue depth as described in our EBS Volume Performance document.

Each AWS account has a limit of 10,000 Provisioned IOPS and 20 terabytes of EBS storage. If you need more than this, fill out the Request to Increase the EBS Volume Limit form.

— Jeff;