Amazon Web Services Blog

  • AWS Console Mobile App Update - Support for Elastic Beanstalk

    We have updated the AWS Console mobile app with support for AWS Elastic Beanstalk. I'll let the app's update notice tell you what's new:

    Let's take a quick look at the new features! The main screen of the app includes a new Beanstalk Applications section:

    I can see all of my Elastic Beanstalk applications:

    From there I can zoom in and see the Elastic Beanstalk environments for any desired application:

    Diving even deeper, I can focus on a particular environment:

    I can open up individual sections to see what's going on. Here's the Configuration section:

    If a CloudWatch alarm fires, I can open up the CloudWatch Alarms section to see what's going on:

    I can also take a detailed look at a particular CloudWatch metric:

    I can also perform certain operations on the environment:

    For example, I can deploy any desired version of the application to the environment:

    Download & Install Now
    The new version of the AWS Console mobile app is available now and you can start using it today. Here are the download links:

    -- Jeff;

  • AWS Support - Now Hiring!

    Now Hiring
    As is the case with many parts of AWS, the team behind AWS Support is growing fast and is looking for top-notch people to fill a multitude of open positions. Here are some of the positions that they are working to fill (click through to apply or to read a detailed job description):

    Cloud Support Engineer (Dallas, Texas) - You get to field, troubleshoot, and manage technical customer issues via phone, chat, and email. You help to recreate customer issues and build proof-of-concept applications, and you represent the voice of the customer to internal AWS teams. You can share your knowledge with the AWS community by creating tutorials, how-to videos, and technical articles.

    Big Data Devops Support Engineer (Dallas, Texas) - This position is similar to the previous one, but you'll need to have some experience with popular Big Data tools such as Amazon Elastic MapReduce, ZooKeeper, HBase, HDFS, Pig, or Hive. If you know what ETL is all about and can implement it using Hadoop, that's even better!

    Cloud Support Engineer-Deployment (Dallas, Texas) - This position is similar to the first one, with a focus on development and deployment. You'll need experience with Java, .NET, Node.js, PHP, Python, or Ruby, familiarity with DevOps principles, and the ability to work with all tiers of the application stack, from the OS on up.

    Cloud Technical Account Manager (Dallas, Texas) - In this role you will work directly with customers to build mindshare and broad usage of AWS within their organizations. You will be their primary technical contact for one or more customers and you'll get to help them to plan, debug, and monitor their cloud-based, business-critical applications. You will get to help them to scale for world-class events and you'll represent the customers' needs to the rest of the AWS team.

    Senior Cloud Technical Account Manager (Dallas, Texas) - This is a more senior version of the previous position! It requires significant IT, AWS, and distributed systems expertise and experience.

    Operations Manager-AWS Support (Dallas, Texas) - In this role you will use your operational, leadership, and technical skills to lead a team of Cloud Support Engineers.

    Software Development Engineer, Kumo Development Team (Seattle, Washington) - In this role you will help to build the next generation of CRM systems to help us to improve the overall support experience for AWS customers. Experience with data mining, information retrieval, and text analysis is a definite plus for this position.

    Senior Manager, Product Management, Amazon Web Services (Seattle, Washington) - In this role you will be responsible for creating the vision and the product strategy for AWS Support's products and services. You'll get to manage the entire product life cycle, starting with strategic planning all the way through to tactical execution. You will need a strong product management track record and you'll need to show us that you know how to translate customer needs into features, pricing models, and merchandising opportunities.

    Senior Product Manager, Amazon Web Services (Seattle, Washington) - In this role you will create marketing materials for support offerings, define and manage marketing programs, think about product and service positioning, and champion the needs of our customers. You'll need a strong marketing background, ideally with experience in the IT industry.

    Senior Technical Program Manager (Seattle, Washington) - In this role you will lead product initiatives, working closely with customers, development teams, vendors, partners, and the AWS service teams. You'll need program management or project management skills and a strong technical background!

    Even More Positions
    The positions that I listed above are based in Dallas and Seattle. If you are interested in similar positions in other cities and countries, please check out these links:

    More About Support
    To learn more about AWS Support, watch this video:

    Candidates often ask me for special insider tips that will help them to navigate the Amazon hiring process! My answer is always the same -- spend some time studying the Amazon Leadership Principles and make sure that you can relate them to significant events in your career and in your personal life.

    -- Jeff;

  • Resource Groups and Tagging for AWS

    For many years, AWS customers have used tags to organize their EC2 resources (instances, images, load balancers, security groups, and so forth), RDS resources (DB instances, option groups, and more), VPC resources (gateways, option sets, network ACLs, subnets, and the like), Route 53 health checks, and S3 buckets. Tags are used to label, collect, and organize resources and become increasingly important as you use AWS in larger and more sophisticated ways. For example, you can tag relevant resources and then take advantage of AWS Cost Allocation for Customer Bills.

    Today we are making tags even more useful with the introduction of a pair of new features: Resource Groups and a Tag Editor. Resource Groups allow you to easily create, maintain, and view a collection of resources that share common tags. The new Tag Editor allows you to easily manage tags across services and Regions. You can search globally and edit tags in bulk, all with a couple of clicks.

    Let's take a closer look at both of these cool new features! Both of them can be accessed from the new AWS menu:

    Tag Editor
    Until today, when you decided to start making use of tags, you were faced with the task of stepping through your AWS resources on a service-by-service, region-by-region basis and applying tags as needed. The new Tag Editor centralizes and streamlines this process.

    Let's say I want to find and then tag all of my EC2 resources. The first step is to open up the Tag Editor and search for them:

    The Tag Editor searches my account for the desired resource types across all of the selected Regions and then displays all of the matches:

    I can then select all or some of the resources for editing. When I click on the Edit tags for selected button, I can see and edit existing tags and add new ones. I can also see existing System tags:

    I can see which values are in use for a particular tag by simply hovering over the Multiple values indicator:

    I can change multiple tags simultaneously (changes take effect when I click on Apply changes):

    Resource Groups
    A Resource Group is a collection of resources that share one or more tags. It can span Regions and services and can be used to create what is, in effect, a custom console that organizes and consolidates the information you need on a per-project basis.

    You can create a new Resource Group with a couple of clicks. I tagged a bunch of my AWS resources with Service and then added the EC2 instances, DB instances, and S3 buckets to a new Resource Group:
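
    If you'd rather script the tagging step, the AWS CLI can apply the same tag from the command line. Here's a quick sketch for an EC2 instance (the instance ID and tag value are placeholders):

    $ aws ec2 create-tags --resources i-1a2b3c4d \
      --tags Key=Service,Value=WebFrontend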

    My Resource Groups are available from within the AWS menu:

    Selecting a group displays information about the resources in the group, including any alarm conditions (as appropriate):

    This information can be further expanded:

    Each identity within an AWS account can have its own set of Resource Groups. They can be shared between identities by clicking on the Share icon:

    Down the Road
    We are, as usual, very interested in your feedback on this feature and would love to hear from you! To get in touch, simply open up the Resource Groups Console and click on the Feedback button.

    Available Now
    Resource Groups and the Tag Editor are available now and you can start using them today!

    -- Jeff;

  • EC2 Container Service In Action

    We announced the Amazon EC2 Container Service at AWS re:Invent and invited you to join the preview. Since that time, we've seen a lot of interest and a correspondingly high signup rate for the preview. With the year winding down, I thought it would be fun to spend a morning putting the service through its paces. We have already approved all existing requests to join the preview; new requests are currently being approved within 24 hours.

    As I noted in my earlier post, this new service will help you to build, run, and scale Docker-based applications. You'll benefit from easy cluster management, high performance, flexible scheduling, extensibility, portability, and AWS integration while running in an AWS-powered environment that is secure and efficient.

    Quick Container Review
    Before I dive in, let's take a minute to review some of the terminology and core concepts implemented by the Container Service.

    • Cluster - A logical grouping of Container Instances that is used to run Tasks.
    • Container Instance - An EC2 instance that runs the ECS Container Agent and that has been registered into a Cluster. The instances running within a Cluster create a pool of resources that can be used to run Tasks.
    • Task Definition - A description of a set of Containers. The information contained in a Task Definition defines one or more Containers. All of the Containers defined in a particular Task Definition are run on the same Container Instance.
    • Task - An instantiation of a Task Definition.
    • Container - A Docker container that was created as part of a Task.

    The ECS Container Agent runs on Container Instances. It is responsible for starting Containers on behalf of ECS. The agent itself runs within a Docker container (available on Docker Hub) and communicates with the Docker daemon running on the Instance.
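
    If you are curious about that layer, here's a sketch of what launching the agent by hand with Docker looks like (not the exact preview instructions; the agent needs access to the Docker socket, and ECS_CLUSTER names the cluster to join):

    $ docker run --name ecs-agent -d \
      -v /var/run/docker.sock:/var/run/docker.sock \
      -e ECS_CLUSTER=<cluster-name> \
      amazon/amazon-ecs-agent:latest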

    When talking about a cluster or container service, "scheduling" refers to the process of assigning tasks to instances. The Container Service provides you with three scheduling options:

    1. Automated - The RunTask function will start a Task (as specified by a Task Definition) on a Cluster using random placement.
    2. Manual - The StartTask function will start a Task (again, as specified by a Task Definition) on a specified Container Instance (or Instances).
    3. Custom - You can use the ListContainerInstances and DescribeContainerInstances functions to gather information about available resources within a Cluster, implement the "brain" of the scheduler (in other words, use the available information to choose a suitable Container Instance), and then call StartTask to start a task on the Instance. When you do this you are, in effect, creating your own implementation of RunTask; see the sketch below.
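
    Here's a rough sketch of that custom path using the CLI (the cluster, ARNs, and task definition are placeholders; a real scheduler would weigh the registeredResources and remainingResources data returned by DescribeContainerInstances before choosing):

    $ # Gather the candidate instances and their available resources.
    $ aws ecs list-container-instances --cluster <cluster>
    $ aws ecs describe-container-instances --cluster <cluster> \
      --container-instances <instance-arn-1> <instance-arn-2>
    $ # After choosing a suitable instance, place the task on it directly.
    $ aws ecs start-task --cluster <cluster> --task-definition <family:revision> \
      --container-instances <chosen-instance-arn>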

    EC2 Container Service in Action
    In order to gain some first-hand experience with ECS, I registered for the preview and then downloaded, installed, and configured a preview version of the AWS CLI. Then I created an IAM Role and a VPC and set about creating my cluster (ECS is currently available in US East (Northern Virginia), with support for other Regions expected in time). I ran the following command:

    $ aws ecs create-cluster --cluster-name MyCluster --profile jbarr-cli
    

    The command returned information about my new cluster as a block of JSON:

    {
        "cluster": {
            "clusterName": "MyCluster", 
            "status": "ACTIVE", 
            "clusterArn": "arn:aws:ecs:us-east-1:348414629041:cluster/MyCluster"
        }
    }
    

    Then I launched a couple of EC2 instances into my VPC using an ECS-enabled AMI that had been shared with me as part of the preview process (this is a very lightweight version of the Amazon Linux AMI, optimized and tuned for ECS). I chose my new IAM Role (ecs) as part of the launch process:

    I also edited the instance's User Data to make the instance launch into my cluster:
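
    The User Data itself is tiny; the agent reads the cluster name from /etc/ecs/ecs.config, so a sketch of the script looks like this:

    #!/bin/bash
    echo ECS_CLUSTER=MyCluster >> /etc/ecs/ecs.config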

    After the instances launched I was able to see that they were part of my cluster:

    $ aws ecs list-container-instances --cluster MyCluster --profile jbarr-cli
    
    {
        "containerInstanceArns": [
            "arn:aws:ecs:us-east-1:348414629041:container-instance/4cf62484-da62-49a5-ad32-2015286a6d39", 
            "arn:aws:ecs:us-east-1:348414629041:container-instance/be672053-0ff8-4478-b136-7fae9225e493"
        ]
    }
    

    I can choose an instance and query it to find out more about the registered and available CPU and memory resources:

    $ aws ecs describe-container-instances --cluster MyCluster \
      --container-instances arn:aws:ecs:us-east-1:348414629041:container-instance/4cf62484-da62-49a5-ad32-2015286a6d39 \
      --profile jbarr-cli
    

    Here's an excerpt from the returned data:

    {
                "registeredResources": [
                    {
                        "integerValue": 1024, 
                        "longValue": 0, 
                        "type": "INTEGER", 
                        "name": "CPU", 
                        "doubleValue": 0.0
                    }, 
                    {
                        "integerValue": 3768, 
                        "longValue": 0, 
                        "type": "INTEGER", 
                        "name": "MEMORY", 
                        "doubleValue": 0.0
                    }
                ]
    }
    

    Following the directions in the Container Service Developer Guide, I created a simple task definition and registered it:

    $ aws ecs register-task-definition --family sleep360 \
      --container-definitions file://$HOME/tmp/task.json \
      --profile jbarr-cli
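
    For reference, here's a task.json along the lines of the busybox "sleep 360" sample in the Developer Guide (a sketch; your container definitions will differ):

    [
        {
            "name": "sleep",
            "image": "busybox",
            "cpu": 10,
            "memory": 10,
            "essential": true,
            "command": ["sleep", "360"]
        }
    ]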
    

    Then I ran 10 copies of the task:

    $ aws ecs run-task --cluster MyCluster --task-definition sleep360:1 --count 10 --profile jbarr-cli
    

    And I listed the running tasks:

    $ aws ecs list-tasks --cluster MyCluster --profile jbarr-cli
    

    This is what I saw:

    {
        "taskArns": [
            "arn:aws:ecs:us-east-1:348414629041:task/0c949733-862c-4979-b5bd-d4f8b474c58e", 
            "arn:aws:ecs:us-east-1:348414629041:task/3ababde9-08dc-4fc9-b005-be5723d1d495", 
            "arn:aws:ecs:us-east-1:348414629041:task/602e13d2-681e-4c87-a1d9-74c139f7335e", 
            "arn:aws:ecs:us-east-1:348414629041:task/6d072f42-75da-4a84-8b68-4841fdfe600d", 
            "arn:aws:ecs:us-east-1:348414629041:task/6da6c947-8071-4111-9d31-b87b8b93cc53", 
            "arn:aws:ecs:us-east-1:348414629041:task/6ec9828a-cbfb-4a39-b491-7b7705113ad2", 
            "arn:aws:ecs:us-east-1:348414629041:task/87e29ab2-34be-4495-988b-c93ac1f8b77c", 
            "arn:aws:ecs:us-east-1:348414629041:task/ad4fc3cc-7e80-4681-b858-68ff46716fe5", 
            "arn:aws:ecs:us-east-1:348414629041:task/cdd221ea-837c-4108-9577-2e4f53376c12", 
            "arn:aws:ecs:us-east-1:348414629041:task/eab79263-087f-43d3-ae4c-1a89678c7101"
        ]
    }
    

    I spent some time describing the tasks and wrapped up by shutting down the instances. After going through all of this (and making a mistake or two along the way due to being so eager to get a cluster up and running), I'll leave you with three simple reminders:

    1. Make sure that your VPC has external connectivity enabled.
    2. Make sure to use the proper, ECS-enabled AMI.
    3. Make sure to launch the AMI with the requisite IAM Role.

    ECS Quickstart Template
    We have created an ECS Quickstart Template for CloudFormation to help you to get up and running even more quickly. The template creates an IAM Role and an Instance Profile for the Role. The Role supplies the permissions that allow the ECS Agent to communicate with ECS. The template launches an instance using the Role and returns an SSH command that can be used to access the instance. You can launch the instance into an existing cluster, or you can use the name "default" to create (if necessary) a default cluster. The instance is always launched within your Default VPC.

    Contain Yourself
    If you would like to get started with ECS, just register now and we'll get you up and running as soon as possible.

    To learn more about ECS, spend 30 minutes watching this session from re:Invent (one caveat: the video is already a bit dated; for example, Task Definitions are no longer versioned):

    You can also register for our upcoming (January 14th, 2015) webinar, Amazon EC2 Container Service Deep Dive. In this webinar, my colleague Deepak Singh will talk about why we built EC2 Container Service, explain some of the core concepts, and show you how to use the service for your applications.

    CoreOS is a new Linux distribution designed to support the needs of modern infrastructure stacks. The CoreOS AMI now supports ECS; you can read the Amazon ECS on CoreOS documentation to learn more.

    As always, we are interested in your feedback. With ECS still in preview mode, now is the perfect time for you to let us know more about your needs. You can post your feedback to the ECS Forum; you can also create AWS Support cases if you need assistance.

    -- Jeff;

  • New Amazon CloudFront Reporting - Learn More About Your Viewers

    16 Dec 2014 in Amazon CloudFront

    My colleague Jarrod Guthrie sent me a blog post with information about four new reports for CloudFront.

    -- Jeff;


    Amazon CloudFront continues to add reporting features. Recently, we've launched usage charts, cache statistics reports, a popular objects report, and near-real-time operational metrics via Amazon CloudWatch. Today, CloudFront added four more reports that will give you more visibility into who your end users are - Locations, Browsers, and OS (all grouped under Viewer reports) and a Top Referrers report.

    Viewer Reports
    The Viewer reports include three different ways to look at your users: by the browser that they're using, by the operating system that they're using, and by their geographic location.

    Browsers - The Browsers report shows the top browsers that your end users used to access your content. You can display either a bar chart or a pie chart, broken down by browser name or by browser name and version:

    Here's what the pie chart looks like:

    The Browser Trends report shows daily trends in requests by browser:

    Operating Systems - As with the Browser reports, the Operating Systems report shows the top operating systems that your end users are using, as well as daily trends by operating system, either as a bar chart or a pie chart:

    Locations - You can see the top 50 countries where your end user requests are coming from or the top U.S. states. The report includes the request count from each country, the number of requests in a country as a percent of total requests, and the number of bytes delivered to your end users in each country. You can also display a trends chart that shows how request counts are changing for selected countries or states.

    Referrer Reports
    Any request for an object on your domain includes an HTTP header called Referer. This header indicates the URL of the webpage from which the request for the object on your domain (or website) was made. The Referer header could reference a search engine, another website that links directly to your objects, or your own website.
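
    For example, a request that reaches CloudFront after a visitor clicks a link might look like this (a hypothetical request; example.com stands in for the referring page and the distribution domain is a placeholder):

    GET /images/photo.jpg HTTP/1.1
    Host: d111111abcdef8.cloudfront.net
    Referer: http://www.example.com/gallery.html

    Here's a sample report: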

    Available Now
    The new reports are available now under the Reporting & Analytics section of the CloudFront Management Console:

    For more information on the CloudFront reporting features, please visit the CloudFront Reports & Analytics page.

    If you want to learn even more about these new features, join our upcoming CloudFront Office Hours webinar on January 21st, 2015 at 10:00 AM (PT). You can register for this webinar by visiting our Webinars & Videos page and clicking the registration link under the Office Hours Series. Be sure to bookmark this page and check back frequently for new webinars and webinar recordings.

    -- Jarrod Guthrie, Product Marketing Manager, Amazon CloudFront

  • AWS Week in Review - December 8, 2014

    15 Dec 2014

    Let's take a quick look at what happened in AWS-land last week:

    Monday, December 8
    Tuesday, December 9
    Wednesday, December 10
    Thursday, December 11
    Friday, December 12

    Here are some of the events that we have on tap for the next week or two (visit the AWS Events page for more):

    Stay tuned for next week! In the meantime, follow me on Twitter and subscribe to the RSS feed.

    -- Jeff;

  • Data Retrieval Policies and Audit Logging for Amazon Glacier

    Amazon Glacier is a secure and durable storage service for data archiving and backup. You can store infrequently accessed data in Glacier for as little as $0.01 per gigabyte per month. When you need to retrieve your data, Glacier will make it available for download within 3 to 5 hours.
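
    Retrievals are initiated as jobs against a vault; you download the output once the job completes. Here's a sketch using the AWS CLI (the vault name and archive ID are placeholders; "-" refers to the current account):

    $ aws glacier initiate-job --account-id - --vault-name <vault-name> \
      --job-parameters '{"Type": "archive-retrieval", "ArchiveId": "<archive-id>"}'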

    Today we are launching two new features for Glacier. First, we are making it easier for you to manage data retrieval costs by introducing the data retrieval policies feature. Second, we are happy to announce that Glacier now supports audit logging with AWS CloudTrail. This pair of features should make Glacier even more attractive to customers who use Glacier as part of their archival solutions, where a predictable budget and the ability to create and examine audit trails are both very important.

    Data Retrieval Policies
    Glacier's new data retrieval policies will help you manage your data retrieval costs with just a few clicks in the AWS Management Console. As you may know, Glacier's free retrieval tier allows you to retrieve up to 5% of your monthly storage (pro-rated daily). This is best for smooth, incremental retrievals. With today's launch you now have three options:

    • Free Tier Only - You can retrieve up to 5% of your stored data per month. Retrieval requests above the daily free tier allowance will not be accepted. You will not incur data retrieval costs while this option is in effect. This is the default value for all newly created AWS accounts.
    • Max Retrieval Rate - You can cap the retrieval rate by specifying a gigabyte per hour limit in the AWS Management Console. With this setting, retrieval requests that would exceed the specified rate will not be accepted to ensure a data retrieval cost ceiling.
    • No Retrieval Limit - You can choose to not set any data retrieval limits in which case all valid retrieval requests will be accepted. With this setting, your data retrieval cost will vary based on your usage. This is the default value for existing Amazon Glacier customers.

    The options are chosen on a per-account, per-region basis and apply to all Glacier retrieval activities within the region; this is because data retrieval costs vary by region and the free tier is also region-specific. Also note that the retrieval policies only govern retrieval requests issued directly against the Glacier service (on Glacier vaults) and do not govern Amazon S3 restore requests on data archived in the Glacier storage class via Amazon S3 lifecycle management.

    Here is how you can set up your data retrieval policies in the AWS Management Console:

    If you have chosen the “Free Tier Only” or “Max Retrieval Rate” policies, retrieval requests (or "retrieval jobs", to use Glacier's terminology) that would exceed the specified retrieval limit will not be accepted. Instead, they will return an error code with information about your retrieval policy. You can use this information to delay or spread out the retrievals. You can also choose to increase the Max Retrieval Rate to the appropriate level.
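
    You can manage the policy programmatically as well. Here's a sketch that uses the SetDataRetrievalPolicy and GetDataRetrievalPolicy APIs via the AWS CLI to cap retrievals at 1 GB per hour (the rate is illustrative):

    $ aws glacier set-data-retrieval-policy --account-id - \
      --policy '{"Rules":[{"Strategy":"BytesPerHour","BytesPerHour":1073741824}]}'

    $ aws glacier get-data-retrieval-policy --account-id -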

    We believe that this new feature will give you additional control over your data retrieval costs, and that it will make Glacier an even better fit for your archival storage needs. You may want to watch this AWS re:Invent video to learn more:

    Audit Logging With CloudTrail
    Glacier now supports audit logging with AWS CloudTrail. Once you have enabled CloudTrail for your account in a particular region, calls made to the Glacier APIs will be logged to Amazon Simple Storage Service (S3) and accessible to you from the AWS Management Console and third-party tools. The information provided to you in the log files will give you insight into the use of Glacier within your organization and should also help you to improve your organization's compliance and governance profile.
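
    If you have not yet enabled CloudTrail in the region, a minimal setup looks something like this (a sketch; the trail and bucket names are placeholders, and the bucket needs the usual CloudTrail bucket policy):

    $ aws cloudtrail create-trail --name my-trail --s3-bucket-name my-cloudtrail-bucket
    $ aws cloudtrail start-logging --name my-trail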

    Available Now
    Both of these features are available now and you can start using them today in any region which supports Glacier and CloudTrail, as appropriate.

    -- Jeff;

  • Earth Science on AWS with new CGIAR and Landsat Public Data Sets

    09 Dec 2014 in Public Data Sets

    To support the growing number of Earth science researchers on AWS, we are adding two new members to the collection of Public Data Sets on AWS: CGIAR Global Circulation Models (GCM) data and imagery from the Landsat 8 satellite.

    If you have been a regular reader of this blog, you may recall some of our earlier work to encourage Earth science research. For example, last year we announced that you can Process Earth Science Data on AWS With NASA / NEX Public Data Sets. Earlier this year we announced the Amazon Climate Research Grants and subsequently made 12 awards (I'll have more to say about the recipients and the results in a bit).

    Supporting Climate Research with CGIAR Data
    We are working with CGIAR (a consortium of international agricultural research centers) to make their data more accessible and more easily available, with the expectation that it will lead to innovative ways to address critical food security and development challenges. We expect worldwide public access to this data to help researchers address rural poverty, improve human health & nutrition, and manage the Earth's natural resources in a sustainable fashion.

    Earth's climate is changing. We believe that it is important to understand how the changes will affect agriculture and the world's ability to feed its ever-growing population. By making CGIAR's Global Circulation Models (GCM) available, we are giving researchers what is presently believed to be the most important tool for understanding how the climate could change in the next hundred years. Making this data available in the Cloud will allow developers to build applications that give non-experts the ability to access information about current and future climates in visual fashion.

    The GCM data comes from the CCAFS Climate Portal and is stored in Amazon S3 at s3://cgiardata (refer to the CCAFS-Climate Data page for more info). There's about 6 TB of data in the bucket, spread out over 66,000 or so files in ESRI Grid and ARC ASCII GRID format, all zipped. You can download the desired data to an EC2 instance using the AWS Command Line Interface (CLI) or the AWS Tools for Windows PowerShell. The GCM Documentation contains additional information about the structure of the data. The following diagram (click for a larger copy) will help you to identify the files that you need:
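
    For example, you can explore the bucket and pull down a file with the CLI (a sketch; the object key is a made-up placeholder, so consult the documentation above for real paths):

    $ aws s3 ls s3://cgiardata/
    $ aws s3 cp s3://cgiardata/<path-to-desired-file>.zip .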

    Coming in Early 2015 - Landsat Imagery
    Landsat (pictured at right, courtesy of NASA Earth Observatory) is a program managed by the United States Geological Survey (USGS) that creates moderate-resolution satellite imagery of all land on Earth every 16 days. The Landsat program has been running since 1972 and is the longest ongoing project to collect such imagery. Because of Landsat's global purview and long history, it has become a reference point for all Earth observation work and is considered the gold standard of aerial imagery. It is the basis for research and applications in many global sectors, including agriculture, cartography, geology, forestry, regional planning, and Earth science education.

    In support of the White House's Climate Data Initiative, we have committed to make up to a petabyte of Landsat earth imagery data from the USGS widely available as an AWS Public Data Set. In early 2015, new imagery produced by the Landsat 8 satellite will be available for anyone to access via Amazon S3. By making Landsat data readily available near our flexible computing resources, we hope to accelerate innovation in climate research, humanitarian relief, and disaster preparedness efforts around the world. Because the imagery will be available in the cloud, researchers will be able to use whatever tools they want to perform analysis without needing to worry about storage or bandwidth costs. Take a look at the post Putting Landsat 8's Bands to Work on the MapBox Blog to see what can be done with this data.

    We are currently looking for partners who are interested in contributing expertise, open source tools, and educational materials that will help to accelerate climate research using Landsat on AWS. If you are interested in helping out or if you would like to be notified when this data becomes available, please go here and fill out the form.

    -- Jeff;

  • AWS OpsWorks Update - Support for Existing EC2 Instances and On-Premises Servers

    My colleague Chris Barclay sent a guest post to introduce two powerful new features for AWS OpsWorks.

    -- Jeff;


    New Modes for OpsWorks
    I have some good news for users who operate compute resources outside of AWS: you can now use AWS OpsWorks to deploy and operate applications on any server with an Internet connection, including virtual machines running in your own data centers. Previously, you could only deploy and operate applications on Amazon EC2 instances created by OpsWorks. Now, OpsWorks can also manage existing EC2 instances created outside of OpsWorks.

    You may know that OpsWorks is a service that helps you automate tasks like code deployment, software configuration, operating system updates, database setup, and server scaling using Chef. OpsWorks gives you the flexibility to define your application architecture and resource configuration and handles the provisioning and management of resources for you. Click here to learn more about the benefits of OpsWorks.

    Customers with on-premises servers no longer need to operate separate application management tools or pay up-front licensing costs but can instead use OpsWorks to manage applications that run on-premises, on AWS, or that span environments. OpsWorks can configure any software that is scriptable and includes integration with AWS services such as Amazon CloudWatch.

    Use Cases & Benefits
    OpsWorks can enhance the management processes for your existing EC2 instances or on-premises servers. For example:

    • With a single command, OpsWorks can update operating systems and software packages to the latest version across your entire fleet, making it easy to keep up with security updates (see the sketch after this list).
    • Instead of manually running commands on each instance/server in your fleet, let OpsWorks run scripts or Chef recipes for you. You control who can run scripts and are able to view a history of each script that has been run.
    • Instead of using one user login per instance/server, you can manage operating system users and ssh/sudo access. This makes it easier to add and remove an individual user's access to your instances.
    • Create alarms or scale instances/servers based on custom Amazon CloudWatch metrics for CPU, memory and load from one instance/server or aggregated across a collection of instances/servers.
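
    As an example of the first item, a single CLI call can kick off a package update across a stack. Here's a sketch (the stack ID is a placeholder; update_dependencies is the stack command that updates the operating system and packages):

    $ aws opsworks create-deployment --stack-id <stack-id> \
      --command '{"Name": "update_dependencies"}'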

    Getting Started
    Let's walk through the process of registering existing on-premises or EC2 instances. Go to the OpsWorks Management Console and click Register Instances:

    Select whether you want to register EC2 instances or on-premises servers. You can use both types, but the wizard operates with one class at a time.

    Give your collection of instances a Name, select a Region, and optionally choose a VPC and IAM role. If you are registering EC2 instances, select them from the table before proceeding to the next step.

    Install the AWS CLI on your desktop (if you have already installed an older version of the CLI, you may need to update it in order to use this feature).

    Run the command displayed in the Run Register Command section using the CLI that you just installed. This uses the CLI on your desktop to install the OpsWorks agent onto the selected instances. You will need the instance's ssh user name and private key in order to perform the installation. See the documentation if you want to run the CLI on the server you are registering. Once the registration process is complete, the instances will appear in the list as "registered."
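
    For on-premises servers, the register command follows this general shape (a sketch only; the console shows the exact command for your setup, and the values below are placeholders):

    $ aws opsworks register --infrastructure-class on-premises \
      --stack-id <stack-id> --ssh-username <user> \
      --ssh-private-key <path-to-key> <hostname-or-ip>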

    Click Done. You can now use OpsWorks to manage your instances! You can view and perform actions on your instances in the Instances view. Navigate to the Monitoring view to see the 13 included custom CloudWatch metrics for the instances you have registered.

    You can learn more about using OpsWorks to manage on-premises and EC2 instances by taking a look at the examples in the Application Management Blog or the documentation.

    Pricing and Availability
    OpsWorks costs $0.02 per hour for each on-premises server on which you install the agent, and is available at no additional charge for EC2 instances. See the OpsWorks Pricing page to learn more about our free tier and other pricing information.

    -- Chris Barclay, Principal Product Manager

  • AWS Week in Review - December 1, 2014

    08 Dec 2014

    Let's take a quick look at what happened in AWS-land last week:

    Monday, December 1
    Tuesday, December 2
    Wednesday, December 3
    Thursday, December 4
    Friday, December 5

    Stay tuned for next week! In the meantime, follow me on Twitter and subscribe to the RSS feed.

    -- Jeff;