Category: Amazon EC2


EC2 Instance Status Checks and Reporting

Instance Status Checks
You may remember that we recently introduced EC2 Instance Status Monitoring features to give you better visibility into the status of your AWS resources. We began by providing you with information about operational activities that have been scheduled for your EC2 instances. Since then, we've added more functionality.

You can now view status checks to help identify problems that may impair an instance's ability to run your applications. These status checks are the results of automated tests performed by EC2 on every running instance that detect hardware and software issues. Whether you are running applications on AWS or elsewhere, diagnosing problems quickly and accurately can be difficult. For example, to determine that a faulty boot sequence has crashed before it initialized an instance's networking stack or that an instance has failed to renew its DHCP lease, it helps to confirm first that the instance is powered on, and all networking equipment is performing as expected.

You have told us that you want to know when problems such as these may affect your instances and that you want to be able to distinguish software problems from issues with the underlying infrastructure. To this end, we are introducing two types of status checks for each of your instances: System status checks and Instance status checks. These checks verify that the instance and the operating system are reachable from our monitoring system.

System status checks detect problems with the underlying EC2 systems that are used by each individual instance. The first System status check we are introducing is a reachability check.

  • The System Reachability check confirms that we are able to get network packets to your instance.

System status problems require AWS involvement to repair. We work hard to fix every one as soon as it arises, and we are continually driving down their occurrence. However, we also want you to have enough visibility to decide whether you want to wait for our systems to fix the issue or resolve it yourself (by restarting or replacing an instance).

Instance Status checks detect problems within your instance. Typically, these are problems that you as a customer can fix, for example by rebooting the instance or making changes in your operating system. There is currently one Instance status check.

  • The Instance Reachability check confirms that we are able to deliver network packets to the operating system hosted on your instance.

Over time, we will add to these checks as we continue to improve our detection methods.

We are also introducing a reporting system to allow you to provide us with additional information on the status of your EC2 instances.

You can access this functionality from the new DescribeInstanceStatus and ReportInstanceStatus APIs, the AWS Management Console, and the command-line tools.
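For example, here's a sketch of reading the new status checks from the AWS SDK for Ruby. The client method and response keys are assumptions based on the DescribeInstanceStatus API (and the instance ID is hypothetical), so check the SDK reference for the exact interface:

```ruby
begin
  require "rubygems"
  require "aws-sdk"   # aws-sdk v1 gem
rescue LoadError
  # gem not installed; the pure helper below still works
end

# Turn one (assumed) response entry into a short, human-readable line.
def status_line(item)
  "#{item[:instance_id]}: system=#{item[:system_status]} instance=#{item[:instance_status]}"
end

if defined?(AWS) && ENV["AWS_ACCESS_KEY_ID"]
  ec2  = AWS::EC2.new
  resp = ec2.client.describe_instance_status(:instance_ids => ["i-12345678"]) # hypothetical ID
  resp[:instance_status_set].each do |s|
    puts status_line(:instance_id     => s[:instance_id],
                     :system_status   => s[:system_status][:status],
                     :instance_status => s[:instance_status][:status])
  end
end
```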

Console Support
The status of each of your instances is displayed in the instance list:

The console displays detailed information about the status checks when an instance is selected:

You can use the Submit Feedback button to report discrepancies between the reported status and your own observations or to provide more detail about issues you encounter:

We will use the feedback entered in this form to identify issues that might be affecting multiple AWS customers and improve our detection systems accordingly.

Update: A few people have emailed me to ask about the new Status Checks column in the Console’s instance list. If you don’t see it, click on the Show/Hide button and make sure that the Status Checks column is checked:

— Jeff;

How Collections Work in the AWS SDK for Ruby

Today we have a guest blog post from Matty Noble, Software Development Engineer, SDKs and Tools Team. 

– rodica


We’ve seen a few questions lately about how to work with collections of resources in the SDK for Ruby, so I’d like to take a moment to explain some of the common patterns and how to use them. There are many different kinds of collections in the SDK. To keep things simple, I’ll focus on Amazon EC2, but most of what you’ll see here applies to other service interfaces as well.

Before we do anything else, let’s start up an IRB session and configure a service interface to talk to EC2:

$ irb -r rubygems -r aws-sdk
> ec2 = AWS::EC2.new(:access_key_id => "KEY", :secret_access_key => "SECRET")

There are quite a few collections available to us in EC2, but one of the first things we need to do in any EC2 application is to find a machine image (AMI) that we can use to start instances. We can manage the images available to us using the images collection:

> ec2.images
 => <AWS::EC2::ImageCollection>

When you call this method, you’ll notice that it returns very quickly; the SDK for Ruby lazy-loads all of its collections, so just getting the collection doesn’t do any work. This is good, because often you don’t want to fetch the entire collection. For example, if you know the ID of the AMI you want, you can reference it directly like this:

> image = ec2.images["ami-310bcb58"]
 => <AWS::EC2::Image id:ami-310bcb58>

Again, this returns very quickly. We’ve told the SDK that we want ami-310bcb58, but we haven’t said anything about what we want to do with it. Let’s get the description:

> image.description
 => "Amazon Linux AMI i386 EBS"

This takes a little longer, and if you have logging enabled you’ll see a message like this:

[AWS EC2 200 0.411906] describe_images(:image_ids=>["ami-310bcb58"])  

Now that we’ve said we want the description of this AMI, the SDK will ask EC2 for just the information we need. The SDK doesn’t cache this information, so if we do the same thing again, the SDK will make another request. This might not seem very useful at first — but by not caching, the SDK allows you to do things like polling for state changes very easily. For example, if we want to wait until an instance is no longer pending, we can do this:

> sleep 1 until ec2.instances["i-123"].status != :pending  

The [] method is useful for getting information about one resource, but what if we want information about multiple resources? Again, let’s look at EC2 images as an example. Let’s start by counting the images available to us:

> ec2.images.to_a.size
[AWS EC2 200 29.406704] describe_images()
 => 7677

The to_a method gives us an array containing all of the images. Now, let’s try to get some information about these images. All collections include Enumerable, so we can use standard methods like map or inject. Let’s try to get all the image descriptions using map:

> ec2.images.map(&:description)  

This takes a very long time. Why? As we saw earlier, the SDK doesn’t cache anything by default, so it has to make one request to get the list of all images, and then one request for each returned image (in sequence) to get the description. That’s a lot of round trips — and it’s mostly wasted effort, because EC2 provides all the information we need in the response to the first call (the one that lists all the images). The SDK doesn’t know what to do with that data, so the information is lost and has to be re-fetched image by image. We can get the descriptions much more efficiently like this:

> AWS.memoize { ec2.images.map(&:description) }  

AWS.memoize tells the SDK to hold on to all the information it gets from the service in the scope of the block. So when it gets the list of images along with their descriptions (and other information) it puts all that data into a thread-local cache. When we call Image#description on each item in the array, the SDK knows that the data might already be cached (because of the memoize block) so it checks the cache before fetching any information from the service.

We’ve just scratched the surface of what you can do with collections in the AWS SDK for Ruby. In addition to the basic patterns above, many of our APIs allow for more sophisticated filtering and pagination options. For more information about these APIs, you can take a look at the extensive API reference documentation for the SDK. Also don’t hesitate to ask questions or leave feedback in our Ruby development forum.
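As a taste of the filtering mentioned above, here's a sketch that narrows the image collection server-side before enumerating it. The `with_owner` and `filter` methods follow the v1 SDK's chainable collection interface, but verify the method names and filter keys against the API reference:

```ruby
begin
  require "rubygems"
  require "aws-sdk"   # aws-sdk v1 gem
rescue LoadError
  # gem not installed; the pure helper below still works
end

# Build a human-readable summary for an (id, description) pair.
def image_summary(id, description)
  "#{id}: #{description || '(no description)'}"
end

if defined?(AWS) && ENV["AWS_ACCESS_KEY_ID"]
  ec2 = AWS::EC2.new
  # Narrow the collection server-side instead of enumerating all ~7,700 images.
  amazon_ebs = ec2.images.with_owner("amazon").filter("root-device-type", "ebs")
  AWS.memoize do
    amazon_ebs.each { |img| puts image_summary(img.id, img.description) }
  end
end
```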

A note about AWS.memoize

AWS.memoize works with EC2, IAM, and ELB; we’d like to extend it to other services, and we’d also like to hear what you think about it. Is the behavior easy to understand? Does it work well in practice? Where would this feature be most beneficial to your application?

New – Elastic Network Interfaces in the Virtual Private Cloud

If you look closely at the services and facilities provided by AWS, you’ll see that we’ve chosen to factor architectural components that were once considered elemental (e.g. a server) into multiple discrete parts that you can instantiate and control individually.

For example, you can create an EC2 instance and then attach EBS volumes to it on an as-needed basis. This is more dynamic and more flexible than procuring a server with a fixed amount of storage.

Today we are adding additional flexibility to EC2 instances running in the Virtual Private Cloud. First, we are teasing apart the IP addresses (and important attributes associated with them) from the EC2 instances and calling the resulting entity an ENI, or Elastic Network Interface. Second, we are giving you the ability to create additional ENIs, and to attach a second ENI to an instance (again, this is within the VPC).

Each ENI lives within a particular subnet of the VPC (and hence within a particular Availability Zone) and has the following attributes:

  • Description
  • Private IP Address
  • Elastic IP Address
  • MAC Address
  • Security Group(s)
  • Source/Destination Check Flag
  • Delete on Termination Flag

A very important consequence of this new model (and one that took me a little while to fully understand) is that the idea of launching an EC2 instance on a particular VPC subnet is effectively obsolete. A single EC2 instance can now be attached to two ENIs, each one on a distinct subnet. The ENI (not the instance) is now associated with a subnet.

Similar to an EBS volume, ENIs have a lifetime that is independent of any particular EC2 instance. They are also truly elastic. You can create them ahead of time, and then associate one or two of them with an instance at launch time. You can also attach an ENI to an instance while it is running (we sometimes call this a “hot attach”). Unless the Delete on Termination flag is set, the ENI will remain alive and well after the instance is terminated. We’ll create an ENI for you at launch time if you don’t specify one, and we’ll set the Delete on Termination flag so you won’t have to manage it. Net-net: You don’t have to worry about this new level of flexibility until you actually need it.
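Here's one way this might look from the AWS SDK for Ruby. Treat this as a sketch, not a definitive recipe: the `network_interfaces` collection and the `attach` method are assumptions based on the SDK's usual resource patterns, and the subnet and instance IDs are hypothetical:

```ruby
begin
  require "rubygems"
  require "aws-sdk"   # aws-sdk v1 gem
rescue LoadError
  # gem not installed; the pure helper below still works
end

# Describe the attachment we are about to make (pure helper).
def attachment_plan(eni_id, instance_id, device_index)
  "attach #{eni_id} to #{instance_id} at device index #{device_index}"
end

if defined?(AWS) && ENV["AWS_ACCESS_KEY_ID"]
  ec2      = AWS::EC2.new
  subnet   = ec2.subnets["subnet-12345678"]           # hypothetical subnet ID
  eni      = ec2.network_interfaces.create(:subnet => subnet)
  instance = ec2.instances["i-12345678"]              # hypothetical instance ID
  puts attachment_plan(eni.id, instance.id, 1)
  eni.attach(instance, :device_index => 1)            # "hot attach" as the second interface
end
```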

You can put this new level of addressing and security flexibility to use in a number of different ways. Here are some that we’ve already heard about:

Management Network / Backnet – You can create a dual-homed environment for your web, application, and database servers. The instance’s first ENI would be attached to a public subnet, routing 0.0.0.0/0 (all traffic) to the VPC’s Internet Gateway. The instance’s second ENI would be attached to a private subnet, with 0.0.0.0/0 routed to the VPN Gateway connected to your corporate network. You would use the private network for SSH access, management, logging, and so forth. You can apply different security groups to each ENI so that traffic on port 80 is allowed through the first ENI, and traffic from the private subnet on port 22 is allowed through the second ENI.

Multi-Interface Applications – You can host load balancers, proxy servers, and NAT servers on an EC2 instance, carefully passing traffic from one subnet to the other. In this case you would clear the Source/Destination Check Flag to allow the instances to handle traffic that wasn’t addressed to them. We expect vendors of networking and security products to start building AMIs that make use of two ENIs.

MAC-Based Licensing – If you are running commercial software that is tied to a particular MAC address, you can license it against the MAC address of the ENI. Later, if you need to change instances or instance types, you can launch a replacement instance with the same ENI and MAC address.

Low-Budget High Availability – Attach an ENI to an instance; if the instance dies, launch another one and attach the ENI to it. Traffic flow will resume within a few seconds.

Here is a picture to show you how all of the parts — VPC, subnets, routing tables, and ENIs — fit together:

I should note that attaching two public ENIs to the same instance is not the right way to create an EC2 instance with two public IP addresses. There’s no way to ensure that packets arriving via a particular ENI will leave through it without setting up some specialized routing. We are aware that a lot of people would like to have multiple IP addresses for a single EC2 instance and we plan to address this use case in 2012.

The AWS Management Console includes Elastic Network Interface support:

The Create Network Interface button prompts for the information needed to create a new ENI:

You can specify an additional ENI when you launch an EC2 instance inside of a VPC:

You can attach an ENI to an existing instance:

As always, I look forward to getting your thoughts on this new feature. Please feel free to leave a comment!

— Jeff;

 

New AWS Console Feature – Improved Access to CloudWatch Alarms

We’ve added some features to the AWS Management Console to make it easier for you to create, view, and manage your Amazon CloudWatch Alarms. Here’s a visual overview.

Creating Alarms
You can now create a new CloudWatch alarm from the Monitoring tab of the selected EC2 instance or EBS volume using the Create Alarm button:

Let’s say I want to know when the Network Out traffic for my JeffServer instance exceeds 1.5 Megabytes within a 5 minute period (this instance hosts personal blogs for me and several members of my family, along with some other random web sites, none of which see a whole lot of traffic). I chose this number after inspecting the detailed graph for this metric on this instance:

A click of the Create Alarm button takes me to the new Create Alarm dialog. I can choose the metric and the time interval, and I can also choose the notification method:

I can choose to send notifications to an existing SNS topic, create a new topic, or to a list of email addresses. If I choose the latter option, CloudWatch will automatically create an SNS topic with a suitable name and subscribe the email addresses to the topic. The dialog displays the alarm threshold using a red line superimposed on the actual metrics data:

The Monitoring tab now displays a summary of the alarms for the selected instance (highlighting added):

Clicking on the summary displays a list of alarms:

We hope that you enjoy (and make use of) this handy new feature!

— Jeff;

Behind the Scenes of the AWS Jobs Page, or Scope Creep in Action

The AWS team is growing rapidly and we’re all doing our best to find, interview, and hire the best people for each job. In order to do my part to grow our team, I started to list the most interesting and relevant open jobs at the end of my blog posts. At first I searched our main job site for openings. I’m not a big fan of that site; it serves its purpose but the user interface is oriented toward low-volume searching. I write a lot of blog posts and I needed something better and faster.

Over a year ago I decided to scrape all of the jobs on the site and store them in a SimpleDB domain for easy querying. I wrote a short PHP program to do this. The program takes the three main search URLs (US, UK, and Europe/Asia/South Africa) and downloads the search results from each one in turn. Each set of results consists of a list of URLs to the actual job pages (e.g. Mgr – AWS Dev Support).

Early versions of my code downloaded the job pages sequentially. Since there are now 370 open jobs, this took a few minutes to run and I became impatient. I found Pete Warden’s ParallelCurl and adapted my code to use it. I was now able to fetch and process up to 16 job pages at a time, greatly reducing the time spent in the crawl phase.

// Fetch multiple job pages concurrently using ParallelCurl
for ($i = 0; $i < count($JobLinks); $i++)
{
  $PC->startRequest($JobLinks[$i]['Link'], 'JobPageFetched', $i);
}
$PC->finishAllRequests();

My code also had to parse the job pages and to handle five different formatting variations. Once the pages were parsed it was easy to write the jobs to a SimpleDB domain using the AWS SDK for PHP.

Now that I had the data at hand, it was time to do something interesting with it. My first attempt at visualization included a tag cloud and some jQuery code to show the jobs that matched a tag:

I was never able to get this page to work as desired. There were some potential scalability issues because all of the jobs were loaded (but hidden) so I decided to abandon this approach.

I gave upon the fancy dynamic presentation and generated a simple static page (stored in Amazon S3, of course) instead, grouping the jobs by city:

My code uses the data stored in the SimpleDB domain to identify jobs that have appeared since the previous run. The new jobs are highlighted in the yellow box at the top of the page.

I set up a cron job on an EC2 instance to run my code once per day. In order to make sure that the code ran as expected, I decided to have it send me an email at the conclusion of the run. Instead of wiring my email address in to the code, I created an SNS (Simple Notification Service) topic and subscribed to it. When SNS added support for SMS last month, I subscribed my phone number to the same topic.

I found the daily text message to be reassuring, and I decided to take it even further. I set up a second topic and published a notification to it for each new job, in human readable, plain-text form.

The next step seemed obvious. With all of this data in hand, I could generate a tweet for each new job. I started to write the code for this and then discovered that I was reinventing a well-rounded wheel! After a quick conversation with my colleague Matt Wood, it turned out that he already had the right mechanism in place to publish a tweet for each new job.

Matt subscribed an SQS queue to my per-job notification topic. He used a CloudWatch alarm to detect a non-empty queue, and used the alarm to fire up an EC2 instance via Auto Scaling. When the queue is empty, a second alarm reduces the capacity of the group, thereby terminating the instance.

Being more clever than I, Matt used an AWS CloudFormation template to create and wire up all of the moving parts:

  "Resources" : {
    "ProcessorInstance" : {
      "Type" : "AWS::AutoScaling::AutoScalingGroup",
      "Properties" : {
        "AvailabilityZones" : { "Fn::GetAZs" : "" },
        "LaunchConfigurationName" : { "Ref" : "LaunchConfig" },
        "MinSize" : "0",
        "MaxSize" : "1",
        "Cooldown" : "300",
        "NotificationConfiguration" : {
          "TopicARN" : { "Ref" : "EmailTopic" },
          "NotificationTypes" : [ "autoscaling:EC2_INSTANCE_LAUNCH",
                                  "autoscaling:EC2_INSTANCE_LAUNCH_ERROR",
                                  "autoscaling:EC2_INSTANCE_TERMINATE",
                                  "autoscaling:EC2_INSTANCE_TERMINATE_ERROR" ]
        }
      }
    },
You can also view and download the full template.

The instance used to process the new job positions runs a single Ruby script, and is bootstrapped from a standard base Amazon Linux AMI using CloudFormation.

The CloudFormation template passes in a simple bootstrap script using instance User Data, taking advantage of the cloud-init daemon which runs at startup on the Amazon Linux AMI. This in turn triggers CloudFormation's own cfn-init process, which configures the instance for use based on information in the CloudFormation template.

A collection of packages is installed via the yum and rubygems package managers (including the AWS SDK for Ruby), the processing script is downloaded and installed from S3, and a simple, YAML format configuration file is written to the instance which contains keys, Twitter configuration details, and queue names used by the processing script.

queue.poll(:poll_interval => 10) do |msg|
  notification = TwitterNotification.new(msg.body)
  begin
    client.update(notification.update)
  rescue Exception => e
    log.debug "Error posting to Twitter: #{e}"
  else
    log.debug "Posted: #{notification.update}"
  end
end

The resulting tweets show up on the AWSCloud Twitter account.

At a certain point, we decided to add some geo-sophistication to the process. My code already identified the location of each job, so it was a simple matter to pass this along to Matt’s code. Given that I am located in Seattle and he’s in Cambridge (UK, not Massachusetts), we didn’t want to coordinate any type of switchover. Instead, I simply created another SNS topic and posted JSON-formatted messages to it. This loose coupling allowed Matt to make the switch at a time convenient to him.

So, without any master plan in place, Matt and I have managed to create a clean system for finding, publishing, and broadcasting new AWS jobs. We made use of the following AWS technologies:

Here is a diagram to show you how it all fits together:

If you want to hook in to the job processing system, here are the SNS topic IDs:

  • Run complete – arn:aws:sns:us-east-1:348414629041:aws-jobs-process
  • New job found (human readable) – arn:aws:sns:us-east-1:348414629041:aws-new-job
  • New job found (JSON) – arn:aws:sns:us-east-1:348414629041:aws-new-job-json

The topics are all set to be publicly readable so you can subscribe to them without any help from me. If you build something interesting, please feel free to post a comment so that I know about it.
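For example, here's a sketch (using the AWS SDK for Ruby) of subscribing your own SQS queue to the JSON topic. The `topic.subscribe(queue)` call follows the v1 SDK pattern of wiring up the queue policy for you, and the queue name is hypothetical; verify both against the SDK reference:

```ruby
begin
  require "rubygems"
  require "aws-sdk"   # aws-sdk v1 gem
rescue LoadError
  # gem not installed; the pure helper below still works
end

# Extract the region from an SNS topic ARN (arn:aws:sns:REGION:ACCOUNT:NAME).
def topic_region(arn)
  arn.split(":")[3]
end

if defined?(AWS) && ENV["AWS_ACCESS_KEY_ID"]
  arn   = "arn:aws:sns:us-east-1:348414629041:aws-new-job-json"
  topic = AWS::SNS.new.topics[arn]
  queue = AWS::SQS.new.queues.create("my-aws-job-feed")  # hypothetical queue name
  # Subscribing an SQS queue also sets up the queue policy so SNS can deliver to it.
  topic.subscribe(queue)
end
```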

The point of all of this is to make sure that you can track the newest AWS jobs. Please follow @AWSCloud and take a look at the list of All AWS Jobs.

— Jeff (with lots of help from Matt);

Now Open – South America (Sao Paulo) Region – EC2, S3, and Much More

With the paint barely dry on our US West (Oregon) Region, we are now ready to expand again. This time we are going South of the Equator, to Sao Paulo, Brazil. With the opening of this new Region, AWS customers in South and Central America can now enjoy fast, low-latency access to the suite of AWS infrastructure services.

New Region
The new South America (Sao Paulo) Region supports the following services:

We already have an Edge Location for Route 53 and CloudFront in Sao Paulo.

The AWS Toolkit for Visual Studio includes the new Region in the dropdown menu. You will need to restart Visual Studio to refresh the menu.

This is our eighth Region, and our first in South America (see the complete AWS Global Infrastructure Map for more information). You can see the full list in the Region menu of the AWS Management Console:

You can launch EC2 instances or store data in the new Region by simply making the appropriate selection from the menu.

New Resources
Portions of the AWS web site are now available in Portuguese. You can switch languages using the menu in the top right:

We now have an AWS team (sales, marketing, and evangelism to start) in Brazil. Our newest AWS evangelist, Jose Papo, is based in Sao Paulo. He will be writing new editions of the AWS Blog in Portuguese and Spanish.

Customers
We already have some great customers in Brazil. Here’s a sampling:

  • Peixe Urbano is the leading online discount coupon site in Brazil. They launched on AWS and have scaled to a top 50 site with no capital expenditure.
  • Gol Airlines (one of the largest in Brazil) uses AWS to provide in-flight wireless service to customers.
  • The R7 news portal is one of the most popular sites in Brazil. The site makes use of CloudFront, S3, and an auto-scaled array of EC2 instances.
  • Orama is a financial institution with a mission of providing better access to investments for all Brazilians. They run the majority of their customer relationship systems on AWS.
  • Itau Cultural is a non-profit cultural institute. The institute’s IT department is now hosting new projects on AWS.
  • Casa & Video is one of Brazil’s largest providers of electronics and home products. They have turned to AWS to handle seasonal spikes in traffic.

Solution Providers
Our ISV and System Integrator partner ecosystem in Brazil includes global companies such as Accenture, Deloitte, and Infor along with local favorites Dedalus and CI&T.

Jeff;

Additional Reserved Instance Options for Amazon EC2

If you have watched the cavalcade of AWS releases over the last couple of years, you may have noticed an interesting pattern. We generally release a new service or a major new feature with a model or an architecture that leaves a lot of room to add more features later. We like to start out simple and to add options based on feedback from our customers.

For example, we launched EC2 Reserved Instances with a pricing model that provides increasing cost savings over EC2’s On-Demand Instances as usage approaches 100% in exchange for a one-time up-front payment to reserve the instance for a one or three year period. If utilization of a particular EC2 instance is at least 24% over the period, then a three year Reserved Instance will result in a cost savings over the use of an On-Demand Instance.

The EC2 Reserved Instance model has proven to be very popular and it is time to make it an even better value! We are introducing two new Reserved Instance models: Light Utilization and Heavy Utilization Reserved Instances. You can still leverage the original Reserved Instance offerings, which will now be called Medium Utilization Reserved Instances, as well as Spot and On-Demand Instances.

If you run your servers more than 79% of the time, you will love our new Heavy Utilization Reserved Instances. This new model is a great option for customers that need a consistent baseline of capacity or run steady state workloads. With the Heavy Utilization model, you will pay a one-time up-front fee for a one year or three year term, and then you’ll pay a much lower hourly fee (based on the number of hours in the month) regardless of your actual usage. In exchange for this commitment you’ll be able to save up to 58% over On-Demand Instances.

Alternatively, you might have periodic workloads that run only a couple of hours a day or a few days per week. Perhaps you use AWS for Disaster Recovery, and you use reserved capacity to allow you to meet potential demand without notice. For these types of use cases, our new Light Utilization Reserved Instances allow you to lower your overall costs by up to 33%, allowing you to pay the lowest available upfront fee for the Reserved Instance with a slightly higher hourly rate when the instance is used.

One way to determine which pricing model to use is by choosing the Reserved Instance pricing model assuming you are optimizing purely for the lowest effective cost. The chart below illustrates your effective hourly cost at each level of utilization. As you can see, if your instances run less than ~15% of the time or if you are not sure how long you will need your instance, you should choose On-Demand instances. Alternatively, if you plan to use your instance between ~15% and ~40% of the time, you should use Light Utilization Reserved Instances. If you plan to use your instance more than ~40% of the time and want the flexibility to shut off your instance if you don’t need it any longer, then the Medium Utilization Reserved Instance is the best option. Lastly, if you plan to use your instance more than ~80% of the time (basically always running), then you should choose Heavy Utilization Reserved Instances.
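That decision rule is easy to capture in code. Here's a small Ruby sketch using the approximate ~15%/~40%/~80% break-even points quoted above; the real break-evens vary by instance type and term, so treat the thresholds as illustrative:

```ruby
# Suggest a pricing model from expected utilization (a fraction between 0 and 1),
# using the approximate break-even points from the discussion above.
def suggested_pricing_model(utilization)
  case
  when utilization < 0.15 then "On-Demand"
  when utilization < 0.40 then "Light Utilization Reserved Instance"
  when utilization < 0.80 then "Medium Utilization Reserved Instance"
  else                         "Heavy Utilization Reserved Instance"
  end
end

puts suggested_pricing_model(0.90)   # => "Heavy Utilization Reserved Instance"
```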

Alternatively, you may want to purchase your Reserved Instances by trading off the amount of savings against the upfront fee. As an example, let’s evaluate the 3-year m1.xlarge Reserved Instance. Light Utilization Reserved Instances save you up to 33% off the On-Demand price, and just require a $1200 upfront fee. These Reserved Instances provide a great way to get a discount off of the On-Demand price with very little commitment. Medium Utilization Reserved Instances save you up to 49% off the On-Demand price and require a $2800 upfront fee. The second option provides you the flexibility to turn off your instances and receive a significant discount for a little higher upfront fee. Heavy Utilization Reserved Instances save the most, around 59%, but require an upfront fee of $3400. Additionally, with this option you are committing to pay for your instance whether you plan to use it or not. This option provides significant savings assuming that you know you plan to run your instance all or most of the time. The following table illustrates these savings:

As an example, let’s say that you run a web site, and that you always need at least two EC2 instances. For the US business day you generally need one or two more, and at peak times you need three additional instances on top of that. Once a day you also do a daily batch run that takes two or three hours on one instance. Here’s one way to use the various types of EC2 instances to your advantage:

  • Two Heavy Utilization Reserved Instances – Handle normal load.
  • Two Medium Utilization Reserved Instances, as needed – Scale up for the US business day.
  • Three On-Demand Instances, as needed – Scale up more for peak times.
  • One Light Utilization Reserved Instance, as needed – Handle daily batch run.

Or, graphically:

The usage payment for Heavy Utilization and Light Utilization Reserved Instances will be billed monthly along with your normal AWS usage.

We’ve also streamlined the process of purchasing Reserved Instances using the AWS Management Console so that it is now easier to purchase multiple Reserved Instances at once (the console has acquired a shopping cart):

We are continuously working to find ways to pass along additional cost savings, and we believe that you will like these new Reserved Instance options. We recommend that you review your current usage to determine if you can benefit from using these changes. To learn more about this feature and all available Amazon EC2 pricing options, please visit the Amazon EC2 Pricing and the Amazon EC2 Reserved Instance pages.

— Jeff;

New – AWS Elastic Load Balancing Inside of a Virtual Private Cloud

The popular AWS Elastic Load Balancing Feature is now available within the Virtual Private Cloud (VPC). Features such as SSL termination, health checks, sticky sessions and CloudWatch monitoring can be configured from the AWS Management Console, the command line, or through the Elastic Load Balancing APIs.

When you provision an Elastic Load Balancer for your VPC, you can assign security groups to it. You can place ELBs into VPC subnets, and you can also use subnet ACLs (Access Control Lists). The EC2 instances that you register with the Elastic Load Balancer do not need to have public IP addresses. The combination of the Virtual Private Cloud, subnets, security groups, and access control lists gives you precise, fine-grained control over access to your Load Balancers and to the EC2 instances behind them and allows you to create a private load balancer.

Here’s how it all fits together:

When you create an Elastic Load Balancer inside of a VPC, you must designate one or more subnets to attach. The ELB can run in one subnet per Availability Zone; we recommend (as shown in the diagram above) that you set aside a subnet specifically for each ELB. In order to allow for room (IP address space) for each ELB to grow as part of the intrinsic ELB scaling process, the subnet must have at least 8 free IP addresses.

We think you will be able to put this new feature to use right away. We are also working on additional enhancements, including IPv6 support for ELB in VPC and the ability to use Elastic Load Balancers for internal application tiers.

— Jeff;

 

Webinar: Getting Started on Microsoft Windows With AWS

We’re going to be running a free webinar on December 8th at 9 AM PST.

Designed for business and technical decision makers with an interest in migrating Windows Server and Windows Server applications to the AWS cloud, the webinar will address the following topics:

  • Support for the Microsoft .NET platform.
  • Ways for you to take advantage of your existing Microsoft investments and skill set to run Windows Server applications such as Microsoft SharePoint Server, Microsoft Exchange Server, and SQL Server on the AWS Cloud without incurring any additional Microsoft licensing costs.
  • The AWS pay-as-you-go model that allows you to purchase Windows Server computing resources on an hourly basis.
  • An architecture that allows you to quickly and easily scale your Windows Server applications on the AWS Cloud, using pre-configured Amazon Machine Images (AMIs) to launch fully supported Windows Server virtual machine instances in minutes.

Please register now and join us on December 8th, from 9:00 AM to 10:00 AM PST.

— Jeff;

 

EC2 Instance Status Monitoring

We have been hard at work on a set of features to help you (and the AWS management tools that you use) have better visibility into the status of your AWS resources. We will be releasing this functionality in stages, starting today. Today’s release gives you visibility into scheduled operational activities that might affect your EC2 Instances. A scheduled operational activity is an action that we must take on your instance.

Today, there are three types of activities that we might need to undertake on your instance: System Reboot, Instance Reboot, or Retirement. We do all of these things today, but have only been able to tell you about them via email notifications. Making the events available to you through our APIs and the AWS Management Console will allow you to review and respond to them either manually or programmatically.

System Reboot – We will schedule a System Reboot when we need to do maintenance on the hardware or software supporting your instance. While we are able to perform many upgrades and repairs to our fleet without scheduling reboots, certain operations can only be done by rebooting the hardware supporting an instance. We won't schedule System Reboots very often, and only when absolutely necessary. When an instance is scheduled for a System Reboot, you will be given a time window during which the reboot will occur. During that time window, you should expect your instance to be cleanly rebooted. This generally takes about 2-10 minutes but depends on the configuration of your instance. If your instance is scheduled for a System Reboot, you can consider replacing it before the scheduled time to reduce impact to your software, or you may wish to check on it after it has been rebooted.

Instance Reboot – An Instance Reboot is similar to a System Reboot, except that it is your instance, rather than the underlying system, that must be rebooted. Because of this, you have the option of performing the reboot yourself. You may choose to perform the reboot yourself to have more control and better integrate the reboot into your operational practices. When an instance is scheduled for an Instance Reboot, you can choose to issue an EC2 reboot (via the AWS Management Console, the EC2 APIs, or other management tools) before the scheduled time, and the Instance Reboot will be completed at that time. If you do not reboot the instance via an EC2 reboot, your instance will be automatically rebooted during the scheduled time.

Retirement – We will schedule an instance for retirement when it can no longer be maintained. This can happen when the underlying hardware has failed in a way that we cannot repair without disrupting the instance. An instance that is scheduled for retirement will be terminated on or after the scheduled time. If you no longer need the instance, you can terminate it before its retirement date. If it is an EBS-backed instance, you can stop and restart the instance and it will be migrated to different hardware. You should stop/restart or replace any instance that is scheduled for retirement before the scheduled retirement date to avoid interruption to your application.

API Calls
The new DescribeInstanceStatus function returns information about the scheduled events for some or all of your instances in a particular Region or Availability Zone. The following information is returned for each instance:

  • Instance State – The intended state of the instance (pending, running, stopped, or terminated).
  • Rebooting Status – An indication of whether or not the instance has been scheduled for reboot, including the scheduled date and time (if applicable).
  • Retiring Status – An indication of whether or not the instance has been scheduled for retirement, including the scheduled date and time (if applicable).
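To show how these fields fit together, here is a sketch that flattens a DescribeInstanceStatus-style response into one row per scheduled event. The response is a hand-built sample in the shape of the API's output, not a live call, and the field names shown are assumptions for illustration:

```python
def scheduled_events(response):
    """Flatten a DescribeInstanceStatus-style response into
    (instance_id, state, event_code, not_before) tuples."""
    rows = []
    for status in response["InstanceStatuses"]:
        state = status["InstanceState"]["Name"]
        for event in status.get("Events", []):
            rows.append((status["InstanceId"], state,
                         event["Code"], event["NotBefore"]))
    return rows

# Hand-built sample response for illustration; a real call would return
# this structure for some or all instances in a Region or Availability Zone.
sample_response = {
    "InstanceStatuses": [{
        "InstanceId": "i-1234567890abcdef0",
        "InstanceState": {"Name": "running"},
        "Events": [{
            "Code": "system-reboot",
            "NotBefore": "2011-12-05T13:00:00Z",
        }],
    }]
}

for instance_id, state, code, not_before in scheduled_events(sample_response):
    print(f"{instance_id} ({state}): {code} no earlier than {not_before}")
```

Instances with no scheduled events simply contribute no rows, so iterating over the result visits only the instances that need attention.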

We’ll continue to send “degraded instance” notices to our customers via email to notify you of retiring instances.

Console Support
You can view upcoming events that are scheduled for your instances in the EC2 tab of the AWS Management Console. The EC2 Console Dashboard contains summary information on scheduled events:

You can also click on the Scheduled Events link to view detailed information:

The instance list displays a new “event” icon next to any instance that has a scheduled event:

I expect existing monitoring and management tools to make use of these new APIs in the near future. If you’ve done this integration, drop me a note and I’ll update this post.

— Jeff;