Category: Compute


Prices Reduced for EC2’s M3 Instances

I am happy to announce that we are reducing the On-Demand and Reserved Instance prices for Amazon EC2’s M3 (Second Generation Standard) instances, effective November 1.

We are lowering the On-Demand prices of M3 instances by 10%. Regardless of which region you are running M3 instances in, these price reductions will automatically be reflected in your AWS charges.

Here are the new and old On-Demand prices in the US East (Northern Virginia) Region. See the EC2 Pricing page for prices in the other Regions:

Instance Type New On-Demand Price Old On-Demand Price
m3.xlarge $0.45/hour $0.50/hour
m3.2xlarge $0.90/hour $1.00/hour

We have also lowered the prices for new M3 Reserved Instances by 15%. This price reduction applies to Reserved Instance purchases made on or after November 1, 2013.
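As a quick sanity check, the new US East On-Demand prices in the table above are exactly the old prices with the 10% reduction applied (a small Python sketch, prices taken from the table):

```python
# Verify the advertised 10% On-Demand reduction against the table above.
old_prices = {"m3.xlarge": 0.50, "m3.2xlarge": 1.00}
new_prices = {t: round(p * 0.90, 2) for t, p in old_prices.items()}
print(new_prices)  # {'m3.xlarge': 0.45, 'm3.2xlarge': 0.9}
```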

— Jeff;

 

Build 3D Streaming Applications with EC2’s New G2 Instance Type

Do you want to build fast, 3D applications that run in the cloud and deliver high performance 3D graphics to mobile devices, TV sets, and desktop computers?

If so, you are going to love our new G2 instance type! The g2.2xlarge instance has the following specs:

  • NVIDIA GRID (GK104 “Kepler”) GPU (Graphics Processing Unit), 1,536 CUDA cores and 4 GB of video (frame buffer) RAM.
  • Intel Sandy Bridge processor running at 2.6 GHz with Turbo Boost enabled, 8 vCPUs (Virtual CPUs).
  • 15 GiB of RAM.
  • 60 GB of SSD storage.

The instances run 64-bit code and make use of HVM virtualization; EBS-Optimized instances are also available. They are initially available in the US East (Northern Virginia), US West (Northern California), US West (Oregon), and EU (Ireland) Regions. You can launch them as On-Demand or Spot Instances, and you can also purchase Reserved Instances.

The g2.2xlarge is another member of our GPU Instance family, joining the existing CG1 instance type. The venerable (and widely used) cg1.4xlarge instance type is a great fit for HPC (High Performance Computing) workloads.  The GPGPU (General Purpose Graphics Processing Unit) in the cg1 offers double-precision floating point and error-correcting memory.  In contrast, the GPU in the g2.2xlarge works on single-precision floating point values, and does not support error-correcting memory.

What’s a GPU?
Let’s take a step back and examine the GPU concept in detail.

As you probably know, the image on the display of your computer or your phone is stored in a region of memory known as a frame buffer. The color of each pixel on the display is determined by the value in a particular memory location. Back when I was young, this was called memory-mapped video. It was relatively easy to write code to compute the address corresponding to a particular point on the screen and to set the value (color) of a single pixel as desired. If you wanted to draw a line, rectangle, or circle, you (or some graphics functions running on your behalf) would need to compute the address of each pixel in the figure, one at a time. This was easy to implement, but relatively slow.
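For a linear frame buffer, that address computation is simple arithmetic. Here's a minimal sketch (the display geometry and base address are chosen purely for illustration):

```python
def pixel_address(base, x, y, width, bytes_per_pixel):
    """Address of pixel (x, y) in a linear, memory-mapped frame buffer."""
    return base + (y * width + x) * bytes_per_pixel

# A 640x480 display with 4 bytes per pixel, mapped at an arbitrary base.
addr = pixel_address(0xA0000000, x=10, y=2, width=640, bytes_per_pixel=4)
print(hex(addr))  # 0xa0001428
```

Drawing a figure the old way meant running this computation once per pixel, which is exactly the per-pixel workload that GPUs later parallelized.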

Moving ahead, as games (one of the primary drivers of consumer-level 3D processing) became increasingly sophisticated, they implemented advanced rendering features such as texturing, shadows, and anti-aliasing. Each of these features contributed to the realism and the “wow factor” of the game, while requiring ever-increasing amounts of compute power for rendering. Think back just a decade or so, when gamers would routinely compare the FPS (frames per second) metrics of their games when running on various types of hardware.

It turns out that many of these advanced rendering features shared an interesting property. The computations needed to texture or anti-alias a particular pixel are independent of those required for the other pixels in the same scene. Moving some of this computation into specialized, highly parallel hardware (the GPU) reduced the load on the CPU and enabled the development of games that were even more responsive, detailed, and realistic.

The game (or other application) sends high-level operations to the GPU, the GPU does its magic for hundreds or thousands of pixels at a time, and the results end up in the frame buffer, where they are copied to the video display, with a refresh rate that is generally around 30 frames per second.

Here’s a block diagram of the NVIDIA GRID GPU in the g2 instance:

GPU in the Cloud?
If you followed my explanation above, recall that the GPU deposited the final pixels in the frame buffer for display. This is wonderful if you are running the application on your desktop or mobile device, but does you very little good if your application is running in the cloud.

The GRID GPU incorporates an important feature that makes it ideal for building cloud-based applications. If you examine the diagram above, you will see that the NVIFR and NVFBC components are connected to the frame buffer and to the NVENC component. When used together (NVIFR + NVENC or NVFBC + NVENC), you can create an H.264 video stream of your application using dedicated, hardware-accelerated video encoding. This stream can be displayed on any client device that has a compatible video codec. A single GPU can support up to eight real-time HD video streams (720p at 30 fps) or up to four real-time FHD video streams (1080p at 30 fps).
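Those two limits are consistent with a single encoding throughput budget: eight 720p30 streams and four 1080p30 streams work out to raw pixel rates in the same ballpark (a back-of-the-envelope calculation, not an NVIDIA specification):

```python
def pixel_rate(width, height, fps, streams):
    """Raw pixels per second across all streams."""
    return width * height * fps * streams

hd = pixel_rate(1280, 720, 30, streams=8)    # eight 720p30 streams
fhd = pixel_rate(1920, 1080, 30, streams=4)  # four 1080p30 streams
print(hd, fhd)  # 221184000 248832000
```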

Put it all together and your applications can now run in the cloud, taking advantage of the CPU power of the g2.2xlarge and the 3D rendering of the GRID GPU, along with access to AWS storage, messaging, and database resources, to generate interactive content that can be viewed in a wide variety of environments!

The GPU Software Stack
You have access to a very wide variety of 3D rendering technologies when you use the g2 instances. Your application does its drawing using OpenGL or DirectX.

If you would like to make the technical investment, you can also make use of the NVIDIA GRID SDK for low-level frame grabbing and video encoding. At the next level up you have two options:

  • You can grab the entire display using NVIDIA’s NVFBC (Full-frame Buffer Capture) APIs. In this model you generate one video stream per instance. It is relatively easy to modify existing GPU-aware applications to make use of this option.
  • You can grab an individual render target using NVIDIA’s NVIFR (In-band Frame Readback) APIs. You can generate multiple streams per instance, but you will spend more time adapting your application to this programming model.

To simplify the startup process, NVIDIA has put together AMIs for Windows and Amazon Linux and has made them available in the AWS Marketplace:

You can also download the NVIDIA drivers and install them on your own Windows or Linux instances.

G2 In Action
We’ve been working with technology providers to lay the groundwork for this very exciting launch. Here’s what they have to offer:

  • Autodesk Inventor, Revit, Maya, and 3ds Max 3D design tools can now be accessed from a web browser. Developers can now do full-fledged 3D design, engineering, and entertainment work without the need for a top-end desktop computer (this is an industry first!).
  • OTOY’s ORBX.js is a pure JavaScript framework that allows you to stream 3D applications to thin clients and to any HTML5 browser without plug-ins, codecs, or client-side software installation.
  • The Agawi True Cloud application streaming platform now takes advantage of the g2 instance type. It can be used to stream graphically rich, interactive applications to mobile devices.
  • The Playcast Media AAA cloud gaming service has been deployed to a fleet of g2 instances and will soon be used to stream video games for consumer-facing media brands.
  • The Calgary Scientific ResolutionMD application for visualization of medical imaging data can now be run on g2 instances.  The PureWeb SDK can be used to build applications that run on g2 instances and render on any mobile device.

Here are some Marketplace products to get you started:

I generally don’t include quotations in my blog posts, but today I had to make an exception. Brendan Eich (the inventor of JavaScript) definitely grasps the power of this model. Here’s what he had to say when he saw a demo of ORBX.js:

“Think of the amazing 3D games that we have on PCs, consoles, and handheld devices thanks to the GPU. Now think of hundreds of GPUs in the cloud, working for you to over-detail, ray/path-trace in realtime, encode video, do arbitrary (GPGPU) computation.”

I think that pretty much sums it up!

Go Forth and Render
As I noted earlier, you can launch g2.2xlarge instances in four AWS regions today! You should use a product based on the Remote Framebuffer Protocol (RFB), such as a member of the VNC family. TeamViewer is another good choice. If you use a product that is based on RDP, your code will not be able to detect the presence of the GPU.

I am really looking forward to seeing some cool new applications (and perhaps even entirely new classes of applications) running on these instances. Build something cool and let me know about it!

— Jeff;

Federated Users and Temporary Security Credentials for AWS CloudFormation

My colleague Chetan Dandekar brings word of a powerful enhancement to AWS CloudFormation that will make it an even better fit for large-scale corporate deployments.

— Jeff;


AWS CloudFormation gives developers and systems administrators an easy way to create and manage a collection of AWS resources. Today, we added support for the CloudFormation APIs to be called using temporary security credentials provided by the AWS Security Token Service.

This enables a number of scenarios, such as federated users using CloudFormation and EC2 instances calling CloudFormation via IAM roles. Before this launch, calling CloudFormation required the use of IAM user or AWS account credentials.

AWS supports federated user access to AWS service APIs and resources. Federated users are managed in an external directory and are granted temporary access to AWS services. You now have the option of authorizing federated users to call AWS CloudFormation APIs, as an alternative to creating IAM users to use CloudFormation. A federated user can also sign in and manage CloudFormation stacks from the AWS Management Console (if interested, here is a sample proxy server that demonstrates setting this up).

Consider an example where you have a 100-person IT department. The department would likely have specialists such as network architects, database admins, and application developers. Since CloudFormation enables you to model and provision infrastructure as well as applications, many of those specialists would need access to CloudFormation. Now, the IT department does not have to create an IAM user for each of those employees in order to access CloudFormation. You can choose to authorize existing federated users to use CloudFormation. Also, the IT department can fine-tune access, for instance, by authorizing the database admins to call CloudFormation and Amazon RDS, while authorizing the application developers to call CloudFormation and EC2 or AWS Elastic Beanstalk. Furthermore, when members join or leave the IT department, you do not need to add or remove corresponding IAM users. The following diagram shows the flow of a federated user calling CloudFormation:
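The fine-tuned access described above boils down to the IAM policy attached to each federated session. As a sketch, a policy for the database admins in that example might look like this (the exact service scope is illustrative, not taken from the post):

```python
import json

# Hypothetical permissions for a federated database admin, following the
# division of labor described above: CloudFormation plus Amazon RDS.
db_admin_policy = {
    "Version": "2012-10-17",
    "Statement": [{
        "Effect": "Allow",
        "Action": ["cloudformation:*", "rds:*"],
        "Resource": "*",
    }],
}
print(json.dumps(db_admin_policy, indent=2))
```

A document like this would be supplied when requesting the temporary credentials for the federated user, scoping what that session can do.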

IAM roles enable easy and secure access to AWS APIs from EC2 instances, without sharing long-term security credentials (i.e., access keys). The CloudFormation Describe* APIs could already be called using temporary security credentials generated by assuming an IAM role. We now support the complete set of CloudFormation APIs.

Consider an example where you have a regulatory requirement to perform all AWS resource provisioning and management operations from within an Amazon VPC. One approach to comply would be to provision an EC2 instance inside a VPC, which in turn calls the CloudFormation service to provision and manage infrastructure and applications using your CloudFormation templates. You can now use an IAM role to define permissions that allow calling CloudFormation, and delegate those permissions to the EC2 instance. IAM roles use temporary security credentials and take care of rotating credentials on the instance for you. Here is a visualization of this scenario:
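An IAM role that EC2 instances can assume carries a trust policy naming the EC2 service as the principal. A minimal sketch of that document (the standard shape for instance roles):

```python
import json

# Trust relationship for an EC2 instance role: the EC2 service is allowed
# to assume the role on the instance's behalf.
trust_policy = {
    "Version": "2012-10-17",
    "Statement": [{
        "Effect": "Allow",
        "Principal": {"Service": "ec2.amazonaws.com"},
        "Action": "sts:AssumeRole",
    }],
}
print(json.dumps(trust_policy))
```

With this trust policy in place, a separate permissions policy on the role grants the CloudFormation actions, and the instance picks up rotating temporary credentials automatically.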

To learn more about AWS CloudFormation, visit the CloudFormation detail page, documentation, or watch this introductory video. We also have a large collection of sample templates that make it easy to get started with CloudFormation within minutes.

Chetan Dandekar, Senior Product Manager

 

Second Annual EC2 Spotathon – Win up to $5,000 in EC2 Credit

We ran the first EC2 Spotathon last year, and announced the results at the first re:Invent conference. We invited participants to use EC2 Spot Instances to save time and money, and to make their businesses more interesting and efficient. We awarded the Grand Prize to PiCloud and the Runner-Up Prize to Princeton Consultants, with honorable mentions going to Numerate and the Lawrence Berkeley Labs Turbine Science Gateway.

Time for Round 2
I would like to invite you to apply to the Second Annual EC2 Spotathon!

This is an open-ended coding challenge where you can demonstrate how you use Spot Instances to accelerate your applications or to reduce your compute costs. As a participant in this year’s Spotathon, you could win the Grand Prize of $5,000 in AWS Credit or the Runner-Up Prize of $3,000 in AWS Credit. Note that $5,000 is enough for you to run 40 core-years of computation in less than a day on Spot Instances, as the folks at Cycle Computing did earlier this year.

The best submissions will be invited to present at a poster session at AWS re:Invent, where we will also announce the Grand Prize and Runner-Up winners.

Apply Soon
To apply, simply use the Spotathon Submission Form to tell us how you have used Spot Instances. Be sure to mail the form to ec2-spotathon@amazon.com by Thursday, October 31. Visit the Spotathon page to learn more and to see the Official Rules.

— Jeff;

New – Modify Instance Type of EC2 Reserved Instance

We launched the first part of the EC2 Reserved Instance Modification feature last month. At that time we gave you the ability to modify the Availability Zone and the Network Platform (EC2-Classic or VPC) of your EC2 Reserved Instances.

Today we are giving you the ability to change the instance type of your Linux/UNIX reservations. For example, you can change a reservation for four m1.small instances into a single reservation for an m1.large instance. With today’s launch you can take advantage of the pricing and capacity benefits of Reserved Instances even as you change from one EC2 instance type to another.

The modifications are always done within an instance family (m1, m2, m3, or c1) and are always expressed in terms of normalized “units.” The units are based on the size of the instance, per the following table:

Instance Size Normalization Factor
small 1
medium 2
large 4
xlarge 8
2xlarge 16
4xlarge 32
8xlarge 64

For example, a large instance is worth 4 units, and can be replaced with four small instances or two medium instances. The number of units will always remain constant across a modification.
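The unit arithmetic is easy to check in code; here's a small sketch of the table above:

```python
# Normalization factors from the table above.
FACTORS = {"small": 1, "medium": 2, "large": 4, "xlarge": 8,
           "2xlarge": 16, "4xlarge": 32, "8xlarge": 64}

def units(size, count=1):
    """Total normalized units for `count` instances of a given size."""
    return FACTORS[size] * count

# Four m1.small instances carry the same units as one m1.large...
assert units("small", 4) == units("large", 1) == 4
# ...or as two m1.medium instances.
assert units("medium", 2) == 4
```

Any modification that keeps the total unit count constant within the instance family is a candidate, which is exactly the constraint the console enforces.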

As usual, you can make the modification from the AWS Management Console, the EC2 APIs, or the AWS Command Line Interface (CLI). Let’s use the Console this time:

Open up the EC2 console and click on Reserved Instances. Select the RI’s that you would like to modify, and then click on the Modify Reserved Instances button:

The console will show your existing RI’s in groups. All RI’s that share the same Platform (Linux / UNIX), Region, Instance Type, Product, Tenancy, Offering Type, Term and End Date/Time will be members of the same group:

You then choose replacement instances with sufficient units to account for all of the instances in the original group. You cannot modify the total number of units per RI group.

For example, you can replace four m1.smalls with a single m1.large, by editing the Instance Type and Count properties as follows:

You can alter any or all of the following properties:

Click Continue, review your request to make sure that it is in accord with your needs, and then click Submit Request to initiate the specified modifications.

Once initiated, each of the modifications will succeed or fail within two hours. You will see the state of your existing Reserved Instances change, and you will see new Reserved Instances created as the modification proceeds:

You can refresh the console to track the status changes. When none of the status messages contain “pending”, the modification process is complete.

You could also initiate the modification from the command line as follows:

$ ec2-modify-reserved-instances b847fa93-09d0-4fe5-8aa6-8e97b44eaf76 -c "zone=us-east-1e,count=1,type=m1.large" -c "zone=us-east-1e,count=4,type=m1.small"   

Here are a few things to keep in mind as you start to make use of this new feature:

  • Availability of Reserved Instances changes constantly. Therefore, each modification request can succeed or fail on its own. If a particular request fails, you can issue it another time.
  • You can only modify active Reserved Instances that you own, that are not currently listed for sale in the Reserved Instance Marketplace.
  • The Availability Zone and the Network Platform that you request must be available in your account.
  • Modifications take effect as soon as possible, and the pricing benefit starts to apply at the beginning of the current hour.
  • If your Reserved Instance cannot be modified, it will retain its original properties.

We have had a lot of requests for this feature and I hope that you like it. Leave me a comment and let me know what you think!

— Jeff;

 

Amazon EC2 High Storage (HS1) Instances Now Available in Sydney and Singapore

The EC2 instances on the Amazon EC2 Instance Types menu are designed to run a wide variety of workloads.

You can now launch hs1.8xlarge instances in the Asia Pacific (Sydney) and Asia Pacific (Singapore) Regions using the On-Demand or Reserved Instance pricing models. You can now run your Amazon Redshift clusters, storage-intensive Hadoop jobs, and cluster file systems in all three of the AWS Regions in Asia Pacific.

These instances are optimized for very high storage density, low storage cost, and high sequential I/O performance (up to 2.4 Gigabytes per second to be precise). You get access to 48 TB of local storage capacity across 24 hard drives, high network performance, 117 GiB of RAM, and a fast Intel E5-2650 CPU with 16 Virtual CPUs (vCPUs).
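Dividing those headline numbers out gives a feel for the per-drive figures (a back-of-the-envelope calculation based only on the specs above):

```python
drives = 24
total_storage_tb = 48
seq_throughput_gb_s = 2.4

per_drive_tb = total_storage_tb / drives              # 2 TB per drive
per_drive_mb_s = seq_throughput_gb_s * 1000 / drives  # ~100 MB/s per drive
print(per_drive_tb, per_drive_mb_s)  # 2.0 100.0
```

In other words, the aggregate sequential throughput is what you'd expect from 24 large spinning disks streaming in parallel.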

As a quick reminder, the EC2 instance collection includes the general purpose family of instances (M1 and M3), the compute-optimized (C1 and CC2), memory-optimized (M2 and CR1), storage-optimized (HI1 and HS1), micro (T1), and GPU (CG1) instances. These instances give you a wide variety of storage options, EBS-optimized instances with dedicated throughput to EBS, a cluster networking option for high-bandwidth, low-latency networking between instances in a placement group, and Dedicated Instances that run on single-tenant hardware.

— Jeff;

Amazon Linux AMI 2013.09 Now Available

Max Spevack of the AWS Kernel and Operating Systems (KaOS) team brings news of the latest Amazon Linux AMI.

— Jeff;


It’s been another six months, so it’s time for a fresh release of the Amazon Linux AMI.  Today, we are pleased to announce that the Amazon Linux AMI 2013.09 is available.

This release marks the two-year anniversary of the Amazon Linux AMI’s public GA.  As always, our roadmap and new features are heavily driven by customer requests, so please continue to let us know how we can improve the Amazon Linux AMI for your needs and workloads.

Our 2013.09 release contains several new features that are detailed below.  Our release notes contain additional release information, including more detailed lists of new and updated packages.

  • Kernel 3.4.62 – We have upgraded the kernel to version 3.4.62, which follows the long-term release 3.4.x kernel series that we introduced in the 2013.03 AMI.
  • AWS Command Line Interface 1.1 – The AWS Command Line Interface has celebrated its GA release in the interval since we introduced the Developer Preview version in the 2013.03 Amazon Linux AMI. We provide the latest version of this Python-based interface to AWS, including command-line completion for bash and zsh.  The tool is pre-installed on the Amazon Linux AMI as the aws-cli package.
  • GPT partitioning on HVM AMIs – The root device of the Amazon Linux HVM AMI is now partitioned using the GPT format, where previous releases used the MBR format. The partition table can be manipulated by GPT-aware tools such as parted and gdisk.
  • Improved Ruby 1.9 Support – We’ve improved the Ruby 1.9 experience on the Amazon Linux AMI, including the latest patch level (ruby19-1.9.3-448).  Our Ruby 1.9 packages fix several other bugs, including a load issue with rake, and a fixed bigdecimal so that Ruby on Rails is easier to install.  Furthermore, Ruby now has alternatives support in the Amazon Linux AMI. You can switch between Ruby 1.8 and 1.9 with one command.
  • RPM 4.11.1 and Yum 3.4.3 – The core components of RPM and Yum have been updated to newer versions, with RPM 4.11 and Yum 3.4.3 being featured in this release. Both of these updates provide numerous bug fixes, performance improvements, and new features.
  • R 3.0 – Last year we added R to the Amazon Linux AMI repositories based on your requests.  With this release, we have updated R to 3.0.1, following the upstream release of R 3.0.
  • Nginx 1.4.2 – Based on your requests, we have upgraded to Nginx 1.4.2.  This replaces the 1.2.x Nginx packages that we had previously delivered in the Amazon Linux AMI repositories.

The Amazon Linux AMI 2013.09 is available for launch in all regions.

The Amazon Linux AMI is a rolling release, configured to deliver a continuous flow of updates that allow you to move from one version of the Amazon Linux AMI to the next.  In other words, Amazon Linux AMIs are treated as snapshots in time, with a repository and update structure that gives you the latest packages that we have built and pushed into the repository.  If you prefer to lock your Amazon Linux AMI instances to a particular version, please see the Amazon Linux AMI FAQ for instructions.

As always, if you need any help with the Amazon Linux AMI, don’t hesitate to post on the EC2 forum, and someone from the team will be happy to assist you.

Thank you for using the Amazon Linux AMI!

— Max

P.S.  Help us to build the Amazon Linux AMI! We are actively hiring for Linux Systems Engineer, Linux Software Development Engineer, and Linux Kernel Engineer positions.

 

 

New – Modify EC2 Reserved Instance Reservations

By making good use of Amazon EC2’s Reserved Instances (perhaps guided by the recommendations of the AWS Trusted Advisor or equally capable third-party tools) you can lower your compute costs while also reserving capacity.

Today we are making the Reserved Instance model even more flexible by giving you the power to modify your Reserved Instances (RI’s) when your needs change. You can now move your RI’s between Availability Zones as long as you stay within the same Region. If your AWS account is enabled for EC2-Classic, you can also move your RI’s between EC2-Classic and EC2-VPC. You can now make adjustments to your Reserved Instances as your needs and your architecture change.

Let’s take a tour so that you can see how it works. I’ll use the AWS Management Console, but you can also modify your RI’s using the EC2 APIs or the AWS Command Line Interface (CLI).

Open up the EC2 console and click on Reserved Instances. Select the RI’s that you would like to modify, and then click on the Modify Reserved Instances button:

The console will show your existing RI’s in groups (all RI’s that share the same Region, Instance Type, Product, Tenancy, Offering Type, Term and End Date/Time will be members of the same group):

You can use the Count field to modify some or all of the instances in each group, and you can use the Add button to insert additional rows to specify further modifications. You can choose to modify the Availability Zone and/or the Network Platform (if your account is enabled for EC2-Classic):

Don’t fret if your AZ menu doesn’t have the same number of zones as mine. This just means that my AWS account is a little bit older than yours and has access to more zones.  The modifications that you specify must address all of the RI’s in each group.

Click Continue, review your request to make sure that it is in accord with your needs, and then click Submit Request to initiate the specified modifications.

Once initiated, each of the modifications will either succeed or fail within two hours. You will see the state of your existing Reserved Instances change, and you will see new Reserved Instances created as the modification proceeds.

You can refresh the console to track the status changes. When none of the status messages contain “pending”, the modification process is complete.

Here are a few things to keep in mind as you start to make use of this new feature:

  • Availability of Reserved Instances changes constantly. Therefore, each modification request can succeed or fail on its own. If a particular request fails, you can issue it another time.
  • You can only modify active Reserved Instances that you own, that are not currently listed for sale in the Reserved Instance Marketplace.
  • The Availability Zone and the Network Platform that you request must be available in your account.
  • Modifications take effect as soon as possible, and the pricing benefit starts to apply at the beginning of the current hour.
  • If your Reserved Instance cannot be modified, it will retain its original properties.

Oh yeah, one more thing – when you purchase an RI you get two separate benefits: lower hourly pricing and capacity assurance. The capacity assurance is specific to the Network Platform of the RI. The pricing benefit, however, is not. In other words, if you purchase an RI for one of our cc2.8xlarge instances, you will get the pricing benefit regardless of which Network Platform you use, but you will only get the capacity assurance within the Network Platform of the RI.

I hope that you enjoy this new feature and that you find it useful. Leave me a comment and let me know what you think.

— Jeff;

 

Run SUSE Enterprise Linux Server Using the AWS Free Usage Tier

I’m happy to announce that the AWS Free Usage Tier now includes 750 hours of SUSE Linux Enterprise Server (SLES) usage on a t1.micro instance.

If you are eligible for the free usage tier you now have the option to run SLES as part of your 750 hours of monthly Linux usage.

You can use the AWS Marketplace (1-Click 32-bit, 64-bit) or the AWS Management Console to launch a micro instance running SLES:

Read more about the Free Usage Tier in the Getting Started Guide.

— Jeff;

 

 

AWS OpsWorks in the Virtual Private Cloud

Chris Barclay sent me a nice guest post to announce that AWS OpsWorks is now available in the Virtual Private Cloud.

— Jeff;


I am pleased to announce support for using AWS OpsWorks with Amazon Virtual Private Cloud (Amazon VPC). AWS OpsWorks is a DevOps solution that makes it easy to deploy, customize and manage applications. OpsWorks provides helpful operational features such as user-based ssh management, additional CloudWatch metrics for memory and load, automatic RAID volume configuration, and a variety of application deployment options. You can optionally use the popular Chef automation platform to extend OpsWorks using your own custom recipes. With VPC support, you can now take advantage of the application management benefits of OpsWorks in your own isolated network. This allows you to run many new types of applications on OpsWorks.

For example, you may want a configuration like the following, with your application servers in a private subnet behind a public Elastic Load Balancer (ELB). This lets you control access to your application servers. Users communicate with the Elastic Load Balancer which then communicates with your application servers through the ports you define. The NAT allows your application servers to communicate with the OpsWorks service and with Linux repositories to download packages and updates.

To get started, we’ll first create this VPC. For a shortcut to create this configuration, you can use a CloudFormation template. First, navigate to the CloudFormation console and select Create Stack.  Give your stack a name, provide the template URL http://cloudformation-templates-us-east-1.s3.amazonaws.com/OpsWorksinVPC.template, and select Continue. Accept the defaults and select Continue. Create a tag with a key of Name and a meaningful value. Then create your CloudFormation stack.

When your CloudFormation stack’s status shows CREATE_COMPLETE, take a look at the Outputs tab; it contains several IDs that you will need later, including the VPC and subnet IDs.

You can now create an OpsWorks stack to deploy a sample app in your new private subnet. Navigate to the AWS OpsWorks console and click Add Stack. Select the VPC and private subnet that you just created using the CloudFormation template.

Next, under Add your first layer, click Add a layer. In the Layer type box, select PHP App Server. Add the Elastic Load Balancer created by the CloudFormation template to the layer and then click Add layer.

Next, in the layer’s Actions column, click Edit. Scroll down to the Security Groups section and select the Additional Group with OpsWorksSecurityGroup in the name. Click the + symbol, then click Save.

Next, in the navigation pane, click Instances, accept the defaults, and then click Add an Instance. This creates the instance in the default subnet you set when you created the stack.

Under PHP App Server, in the row that corresponds to your instance, click start in the Actions column.

You are now ready to deploy a sample app to the instance you created. An app represents code you want to deploy to your servers. That code is stored in a repository, such as Git or Subversion. For this example, we’ll use the SimplePHPApp application from the Getting Started walkthrough.  First, in the navigation pane, click Apps. On the Apps page, click Add an app. Type a name for your app, scroll down to Repository URL, and set it to git://github.com/amazonwebservices/opsworks-demo-php-simple-app.git, with Branch/Revision set to version1. Accept the defaults for the other fields.

When all the settings are as you want them, click Add app. When you first add a new app, it isn’t yet deployed to the instances for the layer. To deploy your app to the instance in the PHP App Server layer, under Actions, click Deploy.

Once your deployment has finished, in the navigation pane, click Layers. Select the Elastic Load Balancer for your PHP App Server layer. The ELB page shows the load balancer’s basic properties, including its DNS name and the health status of the associated instances. A green check indicates the instance has passed the ELB health checks (this may take a minute). You can then click on the DNS name to connect to your app through the load balancer.

You can try these new features with a few clicks of the AWS Management Console. To learn more about how to launch OpsWorks instances inside a VPC, see the AWS OpsWorks Developer Guide.

You may also want to sign up for our upcoming AWS OpsWorks Webinar on September 12, 2013 at 10:00 AM PT. The webinar will highlight common use cases and best practices for how to set up AWS OpsWorks and Amazon VPC.

— Chris Barclay, Senior Product Manager