AWS Official Blog

  • New SDKs, Code Samples, & Docs for Login and Pay with Amazon

    by Jeff Barr | on | in Developers, Login and Pay with Amazon |

    I met with the Amazon Payments developer relations team a couple of weeks ago in order to get an update on Login and Pay with Amazon (see my post, PeachDish – Login, Pay, Cook, and Eat With AWS, to learn more).

    The team has been working to make it even easier for you to add one-time checkout and recurring payments to your application. They have created new PHP, Python, and Ruby SDKs, coded up some helpful interactive and self-documenting samples, and refreshed the documentation.

    Let’s review the Login and Pay user experience before we dive in! The Pay with Amazon Simple Checkout Sample shows you how to enable a buyer to make a purchase. It runs in a sandbox and uses a set of test credentials so that you can walk through the payment process yourself without actually making a purchase.

    Your customer would simply click on the Pay with Amazon button to make the purchase:

    Then they log in to their Amazon account:

    This information is displayed in a widget; your code simply sets it up and arranges to handle a couple of events. The SDK will take care of everything else.

    Your customer can then complete the payment process by choosing the shipping address and the payment method, and clicking on the Place Order button:

    Your code will then confirm and authorize the order.

    New Samples
    The samples (Simple Checkout and Recurring Payments) are interactive and self-documenting. Each page displays the appropriate widget, and also includes the client and server code needed to set up the widget and to handle user actions. The server code is available in PHP, Python, and Ruby; you can choose the desired language with a click and the sample will change accordingly:

    Visit the Pay with Amazon SDK Samples page to see the full list of samples.

    New SDKs
    The sample code makes use of the new Login and Pay with Amazon SDKs. The SDKs are available in source code form on GitHub as follows:

    We are also working on SDKs for other popular languages.

    New Documentation
    The documentation has been updated, with an updated Reference Guide and new Integration Guides for Login and Pay with Amazon, Recurring Payments, and Express Integration. These updates are the first in a series; the team is working hard to create a great developer experience for their product!

    Talk to the Team
The team would love to hear from you. You can contact them at pay-with-amazon-dev-feedback@amazon.com.

    Jeff;

     

  • Announcing the AWS Pop-up Loft in New York

    by Jeff Barr | on | in AWS Loft |

    We opened up the first AWS Pop-up Loft last year. By virtue of its location (Market Street in San Francisco) it is accessible to entrepreneurs, students, and others interested in learning more about AWS. Our customers have also used it for co-working and for meetings. Personally, I like to use the AWS Loft in San Francisco as my home-away-from-home when I travel.

    I’m happy to be able to announce that we will be popping up a second loft, this one in New York City. We have created a unique space and assembled a full calendar of events, with some special help from our friends at Intel and Chef. We look forward to connecting with even more customers and expect that the AWS Pop-up Loft will be a great place to learn, collaborate, and share.

    In the City
    This loft is located at 350 West Broadway in New York’s SOHO neighborhood and will open its doors on Wednesday, June 24th with a 7 PM opening party that you are welcome to attend!

The Loft will be open Monday through Friday from 10 AM to 6 PM, with special events during the evening. During the day you will have access to the Ask an Architect Bar, daily AWS education sessions, Wi-Fi, a co-working space, and snacks, all at no charge.

    Ask an Architect
Step up to the Ask an Architect Bar with your code, architecture diagrams, and your AWS questions at the ready! You can book a session ahead of time or you can simply walk in. You will have access to deep technical expertise and will be able to get guidance on AWS architecture, usage of specific AWS services and features, cost optimization, and more.

    AWS Education Sessions
    During the day, AWS Solution Architects, Product Managers, and Evangelists will be leading 60-minute educational sessions designed to help you to learn more about specific AWS services and use cases. You can attend these sessions to learn about Mobile & Gaming, Databases, Big Data, Compute & Networking, Architecture, Operations, Security, and more, all at no charge.

    The Chef Perspective
    Chef is an automation platform for configuring and deploying IT infrastructure and applications in the data center and the cloud. They will bring their DevOps perspective to New York through hosted sessions and training (see the calendar below for more information).

    The Intel Perspective
    AWS and Intel share a passion for innovation and a history of helping startups to succeed. Intel will bring their newest technologies to New York, with talks and training that focus on the Internet of Things and the latest Intel Xeon processors.

    On the Calendar
    Here are some of the events that we have scheduled for the first couple of months (the AWS and Chef Bootcamps run from 10 AM to 6 PM):

    • Thursday, June 25 – Chef Bootcamp (10 AM – 6 PM).
    • Thursday, June 25 – Oscar Health (6:30 PM).
    • Friday, June 26 – AWS Bootcamp (10 AM – 6 PM).
    • Monday, June 29 – Chartbeat (6:30 PM).
    • Tuesday, June 30 – Picking the Right Tool for the Job (HTML5 vs. Unity) (Noon – 1 PM).
    • Tuesday, June 30 – So You Want to Build a Mobile Game? (1 PM – 4:30 PM).
    • Tuesday, June 30 – Buzzfeed (6:30 PM).
    • Monday, July 6 – AWS Bootcamp (10 AM – 6 PM).
    • Tuesday, July 7 – Dr. Werner Vogels (Amazon CTO) + Startup Founders (6:30 PM).
    • Tuesday, July 7 – AWS Bootcamp (10 AM – 6 PM).
    • Wednesday, July 8 – Sumo Logic Panel and Networking Event (6:30 PM).
    • Thursday, July 9 – AWS Activate Social Event (7 PM – 10 PM).
    • Friday, July 10 – Getting Started with Amazon EMR (Noon – 1 PM).
    • Friday, July 10 – Amazon EMR Deep Dive (1 PM – 2 PM).
    • Friday, July 10 – How to Build ETL Workflows Using AWS Data Pipeline and EMR (2 – 3 PM).
    • Tuesday, July 14 – Chef Bootcamp (10 AM – 6 PM).
    • Wednesday, July 15 – Chef Bootcamp (10 AM – 6 PM).
    • Thursday, July 16 – Science Logic (11 AM – Noon).
    • Thursday, July 16 – Intel Lustre (4 PM – 5 PM).
    • Friday, July 17 – Chef Bootcamp (10 AM – 6 PM).
    • Wednesday, July 22 – Mashery (11 AM – 3 PM).
    • Thursday, July 23 – An Evening with Chef (6:30 PM).
    • Wednesday, July 29 – Evident.io (6:30 PM).
    • Wednesday, August 5 – Startup Pitch Event and Summer Social (6:30 PM).
    • Tuesday, August 25 – Eliot Horowitz, CTO and Co-Founder of MongoDB (6:30 PM).

    Stop in and Say Hello
    Please feel free to stop in and say hello to my colleagues at the loft if you happen to find yourself in SOHO. Or, plan ahead and RSVP to attend an event!

    Jeff;

  • Amazon EC2 Spot Fleet API – Manage Thousands of Spot Instances with one Request

    by Jeff Barr | on | in Amazon EC2 |

    It has been really interesting to watch Amazon Elastic Compute Cloud (EC2) evolve over the last eight or nine years. At first you could launch a single instance type, in one region, at a predetermined (On-Demand) price. Today, you can launch a plethora of instance types, in any one of ten regions (eleven including AWS GovCloud (US)), with your choice of On-Demand, Reserved, or Spot pricing (currently in nine regions). Along the way, we have added many features to EC2 and have also used it as a building block for other services including Amazon EMR, AWS Elastic Beanstalk, Amazon WorkSpaces, EC2 Container Service, and AWS Lambda.

    New Spot Fleet API
Today we are making EC2’s Spot Instance model even more useful with the addition of a new API that allows you to launch and manage an entire fleet of Spot Instances with one request. A fleet is a collection of Spot Instances that all work together as part of a distributed application; it could be a batch processing job, a Hadoop workflow, or an HPC grid computing job. Many AWS customers launch fleets of Spot Instances (in sizes ranging from one instance up to thousands) using custom-written code that is responsible for discovering capacity, monitoring market prices across instance types and availability zones, and managing bids, all with the goal of running their workloads (ranging from large-scale molecular dynamics simulations to continuous integration environments) at the lowest possible cost.

    With today’s launch, this custom code is no longer necessary! Instead, a single API function (RequestSpotFleet) does all of the work on your behalf. You simply specify the fleet’s target capacity, a bid price per hour, and tell Spot what instance types you would like to launch. Spot will find the lowest priced spare EC2 capacity available, and work to achieve and maintain the fleet’s target capacity. One call does it all, as they say…

    Making the Request
    You can have up to 1,000 active Spot fleets per region, with a per-fleet and a per-region limit of 3,000 instances (the usual EC2 per-account and per-region limits are still in effect and will govern the number of instances that you can launch, the number of Amazon Elastic Block Store (EBS) volumes that you can create, and so forth).

    Each request (via the API or the CLI) must include the following values:

    • Target Capacity – The number of EC2 instances that you want in your fleet.
    • Maximum Bid Price – The maximum bid price that you are willing to pay.
    • Launch Specifications – The quantities and types of instances that you would like to launch, and how you want them to be configured (AMI Id, VPC, subnets or availability zones, security groups, block device mappings, user data, and so forth). In general, launch specifications that do not target a particular subnet or availability zone are more economical.
    • IAM Fleet Role – The name of an IAM role. It must allow EC2 to terminate instances on your behalf.

    Each request can also include any or all of the following optional values:

    • Client Token – A unique, case-sensitive identifier for the request. You can use this to ensure idempotency for your Spot fleet requests.
    • Valid From – The start date and time of the request.
    • Valid Until – The end date and time of the request.
    • Terminate on Expiration – If set to TRUE, all Spot instances in the fleet will be terminated when the Valid Until time is reached. If set to FALSE (the default), running Spot instances will be left as-is, but no new ones will be launched.
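Putting the required and optional values together, a request might be assembled like this (a sketch in Python, assuming the boto3 SDK; the AMI ID, instance types, bid price, and role ARN are illustrative placeholders):

```python
# Sketch: assembling the configuration for a RequestSpotFleet call.
# All identifiers below are hypothetical.
def build_spot_fleet_config(target_capacity, bid_price, role_arn, launch_specs,
                            valid_until=None, terminate_on_expiration=False):
    """Build the SpotFleetRequestConfig structure for RequestSpotFleet."""
    config = {
        "TargetCapacity": target_capacity,        # required: instances wanted
        "SpotPrice": str(bid_price),              # required: max bid, per hour
        "IamFleetRole": role_arn,                 # must allow EC2 to terminate
        "LaunchSpecifications": launch_specs,     # instance types + settings
        "TerminateInstancesWithExpiration": terminate_on_expiration,
    }
    if valid_until is not None:
        config["ValidUntil"] = valid_until        # optional end date/time
    return config

config = build_spot_fleet_config(
    target_capacity=100,
    bid_price=0.08,
    role_arn="arn:aws:iam::123456789012:role/my-spot-fleet-role",  # hypothetical
    launch_specs=[
        {"ImageId": "ami-12345678", "InstanceType": "m3.medium"},
        {"ImageId": "ami-12345678", "InstanceType": "m3.large"},
    ],
)

# With boto3 you would then submit it, for example:
#   ec2 = boto3.client("ec2")
#   response = ec2.request_spot_fleet(SpotFleetRequestConfig=config)
#   fleet_id = response["SpotFleetRequestId"]
```

Leaving the subnet and availability zone out of each launch specification, as shown here, gives Spot the most freedom to find inexpensive capacity.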

    The RequestSpotFleet function will return a Spot Fleet Request Id if all goes well, or an error if the request is malformed. You will also receive an error if you ask for instance types that are not available in Spot form. You can use the Id to call other Spot fleet functions including DescribeSpotFleetRequests, DescribeSpotFleetInstances, DescribeSpotFleetRequestHistory, and CancelSpotFleetRequests (there are also command-line equivalents to each of these functions).

    Behind the Scenes
    Once your request has been accepted and the start date and time has been reached, EC2 will attempt to reach and then maintain the desired target capacity, even as Spot prices change. It will start by looking up the current Spot price for each launch specification in your request. Then it will launch Spot Instances using the launch specification(s) that result in the lowest price, until capacity, Spot limits, or bid price limits are reached. As instances in the fleet are terminated due to rising prices, replacement instances will be launched using whatever specification(s) result in the lowest price at that point in time.
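The allocation strategy can be sketched in a few lines of Python (a toy model of the behavior described above, not the actual Spot implementation; the prices and specification names are made up):

```python
# Toy model: fill the fleet from whichever launch specification is
# currently cheapest, skipping any whose Spot price exceeds the bid.
def allocate(target_capacity, bid, spec_prices):
    """Return {spec_name: count}, filling capacity from the lowest price up."""
    fleet = {}
    remaining = target_capacity
    for name, price in sorted(spec_prices.items(), key=lambda kv: kv[1]):
        if price > bid or remaining == 0:
            continue
        fleet[name] = remaining   # cheapest eligible spec takes the capacity
        remaining = 0
    return fleet

prices = {"m3.medium/us-east-1a": 0.010,
          "m3.medium/us-east-1b": 0.014,
          "m3.large/us-east-1a": 0.026}
print(allocate(10, bid=0.02, spec_prices=prices))
# → {'m3.medium/us-east-1a': 10}
```

When the cheapest specification's price rises above the bid and its instances are terminated, re-running the same selection against the new prices models how replacements land on the next-cheapest specification.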

    The request remains active until it expires or you cancel it. The Spot Instances in the fleet will remain running unless you indicated that you wanted them to be terminated. As I mentioned earlier, you need to include an IAM role so that EC2 can terminate instances that are running on your behalf.

    Things to Know
As is often the case with new AWS features, this is an initial release and we have a healthy backlog of features in the queue. For example, we plan to add a weighting system that will allow you to express the relative power of each of your launch specifications in numeric form. The target capacity will also be expressed in these units; this will allow you to indicate that you need a certain amount of “horsepower” in a fleet.

    Each fleet is run within a particular AWS region. In the future we would like to support fleets that span two or more regions.

    Available Now
    You can launch Spot fleets today in all public AWS regions where Spot is available. There is no charge for the Spot fleet; you pay Spot prices for the EC2 instances that you launch and any other resources that they consume.

    Jeff;

     

  • Look Before You Leap – The Coming Leap Second and AWS

    by Jeff Barr | on | in Amazon CloudFront, Amazon CloudSearch, Amazon EC2, Amazon RDS, Amazon Redshift, AWS CloudTrail |

    My colleague Mingxue Zhao sent me a guest post designed to make sure that you are aware of an important time / clock issue.

    — Jeff;


The International Earth Rotation and Reference Systems Service (IERS) recently announced that an extra second will be injected into civil time at the end of June 30th, 2015. This means that the last minute of June 30th, 2015 will have 61 seconds. If a clock is synchronized to the standard civil time, it will show an extra second, 23:59:60, on that day between 23:59:59 and 00:00:00. This extra second is called a leap second. There have been 25 such leap seconds since 1972. The last one took place on June 30th, 2012.

    Not all applications and systems are properly coded to handle this “:60” notation. As a result, some applications or systems may malfunction and it is hard to predict which one will go wrong. To keep services stable, some organizations, including Amazon Web Services, plan to implement alternative solutions to avoid the “:60” leap second. This means that AWS clocks will be slightly different from the standard civil time for a short period of time.

    If you want to know whether your applications and systems can properly handle the leap second, contact your providers. If you run time-sensitive workloads and need to know how AWS clocks will behave, read this document carefully. In general, there are three affected parts:

    • The AWS Management Console and backend systems
    • Amazon EC2 instances
    • Other AWS managed resources

    For more information about comparing AWS clocks to UTC, see the AWS Adjusted Time section.

    AWS Management Console and Backend Systems
    The AWS Management Console and backend systems will NOT implement the leap second. Instead, we will spread the one extra second over a 24-hour period surrounding the leap second by making each second slightly longer. During these 24 hours, AWS clocks may be up to 0.5 second behind or ahead of the standard civil time (see the AWS Adjusted Time section for more information).

    You can see adjusted times in consoles (including resource creation timestamps), metering records, billing records, Amazon CloudFront logs, and AWS CloudTrail logs. You will not see a “:60” second in these places and your usage will be billed according to the adjusted time.

    Amazon EC2 Instances
    Each EC2 instance has its own clock and is fully under your control; AWS does not manage instance clocks. An instance clock can be affected by many factors. Depending on these factors, it may implement or skip the leap second. It may also be isolated and not synchronize to an external time system. If you need your EC2 instance clocks to be predictable, you can use NTP to synchronize your clocks to time servers of your choice. For more information about how to synchronize clocks, see the following documentation:

    Adding the leap second is currently the standard practice. If you use public time servers, like time servers from ntp.org (the default for Amazon Linux AMIs) or time.windows.com (the default for Amazon Windows AMIs), your instance will see the leap second unless these synchronization services announce a different practice.
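For example, a minimal NTP configuration along these lines keeps an instance synchronized to the public pool (an illustrative excerpt; the actual defaults vary by AMI and distribution):

```
# /etc/ntp.conf (illustrative excerpt) - synchronize to the public NTP
# pool, which is expected to announce and serve the leap second
server 0.amazon.pool.ntp.org iburst
server 1.amazon.pool.ntp.org iburst
server 2.amazon.pool.ntp.org iburst
server 3.amazon.pool.ntp.org iburst
```

If you need different behavior, point these `server` lines at time servers of your choice.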

    Other AWS Managed Resources
    Other AWS resources may also have their own clocks. Unlike EC2 instances, these resources are fully or partially managed by AWS.

Clocks for the following resources synchronize to the time servers from ntp.org, which implement the standard leap second:

    • Amazon CloudSearch clusters
    • Amazon EC2 Container Service instances
    • Amazon RDS instances
    • Amazon Redshift instances

    AWS Adjusted Time
    This section provides specific details on how clocks will behave in the AWS Management Console and backend systems.

Starting at 12:00:00 PM on June 30th, 2015, we will slow down AWS clocks by 1/86400. Every second on AWS clocks will take 1+1/86400 seconds of “real” time, until 12:00:00 PM on July 1st, 2015, when AWS clocks will be behind by a full second. Meanwhile, the standard civil time (UTC) will implement the leap second at the end of June 30th, 2015 and fall behind by a full second, too. Therefore, at 12:00:00 PM on July 1st, 2015, AWS clocks will be synchronized to UTC again. The table below illustrates these changes.

    UTC                         | AWS Adjusted Clock          | AWS vs. UTC  | Notes
    11:59:59 AM June 30th, 2015 | 11:59:59 AM June 30th, 2015 | +0           | AWS clocks are synchronized to UTC.
    12:00:00 PM                 | 12:00:00 PM                 | +0           |
    12:00:01                    |                             |              | Each second is 1/86400 longer and AWS clocks fall behind UTC. The gap gradually increases to up to 1/2 second.
                                | 12:00:01                    | +1/86400     |
    12:00:02                    |                             |              |
                                | 12:00:02                    | +2/86400     |
    …                           | …                           | …            |
    23:59:59                    |                             |              |
                                | 23:59:59                    | +43199/86400 |
    23:59:60                    |                             |              | Leap second injected into UTC.
    00:00:00 AM July 1st, 2015  |                             | -1/2         | AWS clocks are now 1/2 second ahead of UTC.
                                | 00:00:00 AM July 1st, 2015  |              | AWS clocks keep falling behind and the gap with UTC shrinks gradually.
    00:00:01                    |                             | -43199/86400 |
                                | 00:00:01                    |              |
    00:00:02                    |                             | -43198/86400 |
                                | 00:00:02                    |              |
    …                           | …                           | …            |
    11:59:59 AM                 |                             | -1/86400     |
                                | 11:59:59 AM                 |              |
    12:00:00 PM July 1st, 2015  | 12:00:00 PM July 1st, 2015  | +0           | The gap shrinks to zero. AWS clocks synchronize to UTC again.
    12:00:01                    | 12:00:01                    | +0           |
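The adjustment amounts to running AWS clocks at a rate of 86400/86401 for the duration of the window. A quick sketch in Python (my own illustration of the arithmetic, not AWS code):

```python
# The 24-hour window from 12:00:00 PM June 30th to 12:00:00 PM July 1st
# spans 86401 real seconds (86400 plus the leap second), during which an
# AWS clock advances only 86400 seconds.
def aws_adjusted_elapsed(real_elapsed_seconds):
    """Seconds elapsed on an AWS clock, given real seconds elapsed since
    12:00:00 PM June 30th, 2015 (leap second included)."""
    return real_elapsed_seconds * 86400.0 / 86401.0

# Halfway through the window the AWS clock is 1/2 second behind:
print(43200.5 - aws_adjusted_elapsed(43200.5))   # → 0.5
# After the full window the gap is gone:
print(aws_adjusted_elapsed(86401))               # → 86400.0
```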

    If you have any questions about this upcoming event, please contact AWS Support or post in the EC2 Forum.

    Mingxue Zhao, Senior Product Manager

  • AWS OpsWorks for Windows

    by Jeff Barr | on | in AWS OpsWorks, Windows |

    AWS OpsWorks gives you an integrated management experience that spans the entire life cycle of your application including resource provisioning, configuration management, application deployment, monitoring, and access control. As I noted in my introductory post (AWS OpsWorks – Flexible Application Management in the Cloud Using Chef), it works with applications of any level of complexity and is independent of any particular architectural pattern.

We launched OpsWorks with support for EC2 instances running Linux. Late last year we added support for on-premises servers, also running Linux. In-between, we also added support for Java, Amazon RDS, Amazon Simple Workflow, and more.

    Let’s review some OpsWorks terminology first! An OpsWorks Stack hosts one or more Applications. A Stack contains a set of Amazon Elastic Compute Cloud (EC2) instances and a set of blueprints (which OpsWorks calls Layers) for setting up the instances in the Stack. Each Stack can also contain references to one or more Chef Cookbooks.

    Support for Windows
    Today we are making OpsWorks even more useful by adding support for EC2 instances running Windows Server 2012 R2. These instances can be set up by using Custom layers. The Cookbooks associated with the layers can provision the instance, install packaged and custom software, and react to life cycle events. They can also run PowerShell scripts.

    Getting Started with Windows
You can now specify Windows 2012 R2 as the default operating system when you create a new Stack. If you do this, you should also click on Advanced and choose version 12 of Chef, as follows:

    Now add a Custom Layer. If you select a security group that allows for inbound RDP access, you will be able to use a new OpsWorks feature that allows you to create temporary access credentials for the instances in the Layer:

    With the Stack and the Layer all set up, add an Instance to the Layer, and then start it:

    Connecting to a Windows Instance
    OpsWorks allows you to create IAM users, import them to OpsWorks, give them appropriate permissions, and log in to the instances with the credentials for the user (via RDP or SSH, as appropriate)! For example, you can create a user called winuser and allow it to be used for RDP access:

    In order to connect to the instance as winuser, you’ll need to first log in to the console with the appropriate user (as opposed to account) credentials. After you do this, you can request temporary access to the instance. If you have the appropriate permissions (Show and SSH/RDP), you can connect via RDP:

    OpsWorks will generate a temporary session for you:

    Then it will show you the credentials, and give you the option to download an RDP file:

    Use this file to connect, enter your password, and wait a couple of seconds to log in:

    And there’s your Windows server desktop:

    Available Now
    This new functionality is available now and you can start using it today! To learn more, read Getting Started with Windows Stacks in the OpsWorks User Guide.

    Jeff;

  • AWS Week in Review – May 11, 2015

    by Jeff Barr | on | in Week in Review |

    Let’s take a quick look at what happened in AWS-land last week:

    Monday, May 11
    Tuesday, May 12
    Wednesday, May 13
    Thursday, May 14
    Friday, May 15
    Saturday, May 16
    Sunday, May 17

    Upcoming Events

    Upcoming Events at the AWS Loft

    Help Wanted

    Stay tuned for next week! In the meantime, follow me on Twitter and subscribe to the RSS feed.

    Jeff;

  • Now Available – AWS Directory Service API & CLI (Bonus: CloudTrail Integration)

    by Jeff Barr | on | in AWS Directory Service |

    AWS Directory Service allows you to connect your AWS resources to an existing on-premises Active Directory or to set up a new, standalone directory in the AWS Cloud (see my post, New AWS Directory Service, to learn more).

    Until today, all operations on a Directory were initiated through the AWS Management Console. This was convenient, but was not ideal for integration with existing workflows.

    API & CLI
    Today we are making Directory Service even more useful by adding API and CLI (Command-Line Interface) support. You can now create and delete directories, computer accounts, and aliases (alternate names for the directory). You can create snapshot backups for standalone directories, and you can manage sign-on modes (Radius and SSO).

    You can use an IAM policy to grant permission to perform the API actions.
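For example, a policy along these lines would let a user create, describe, and delete directories (an illustrative sketch; it assumes the ds: action prefix and grants access to all directories):

```json
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Action": [
        "ds:CreateDirectory",
        "ds:DescribeDirectories",
        "ds:DeleteDirectory"
      ],
      "Resource": "*"
    }
  ]
}
```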

    Let’s take a look at some sample requests and responses, starting with a call to CreateDirectory. Here is the request:

    {"Name": "corp.snackers.org",
     "ShortName": "corp",
     "Password": "Westbay@123",
     "Description": "corp",
     "Size": "Large",
     "VpcSettings": 
      {"VpcId": "vpc-c3dd04a2",
       "SubnetIds": 
        ["subnet-9add04fb", "subnet-66dc0507"]
      }
    }
    

    And here is the response:

    {"DirectoryId": "d-90673058d7"}
    
    

    Here’s a call to DescribeDirectories, with three Directory Ids as arguments:

    {"DirectoryIds": 
      ["d-9067315087", "d-9067312ba4", "d-906731a3f3"]
    }
    

    The response is fairly long; it starts like this:

    {"DirectoryDescriptions": 
      [
        {"AccessUrl": "d-9067312ba4.dev.awsapps.com", 
         "Alias": "d-9067312ba4", 
         "DirectoryId": "d-9067312ba4", 
         "DnsIpAddrs": 
          ["172.16.1.130", "172.16.0.87"], 
         "LaunchTime": 1430175177.892, 
         "Name": "Eastbay.snackers.org", 
         "ShortName": "Eastbay", 
         "Size": "Large", 
         "SsoEnabled": false, 
         "Stage": "Active", 
         "StageLastUpdatedDateTime": 1430175333.603, 
         "Type": "SimpleAD", 
         "VpcSettings": 
          {"AvailabilityZones": 
            ["us-east-1a", "us-east-1e"], 
           "SubnetIds": 
            ["subnet-9add04fb", "subnet-13773d29"], 
           "VpcId": "vpc-c3dd04a2"}
        }, 
        ...
    

    CloudTrail Integration
    Directory Service API actions (via an SDK, the Console, or the CLI) can now be recorded via AWS CloudTrail.

    Learn More
    To learn more, read the new AWS Directory Service API Developer Guide. You can download the AWS SDKs and the AWS Command Line Interface (CLI) to get started.

    Jeff;

  • Now Available – AWS Mobile SDK for Unity

    by Jeff Barr | on | in Developers, Mobile |

    Unity is a popular cross-platform game development environment. You can build your 2D or 3D game once using C# and then run it on many different platforms and devices.

    Now Available
    Today we are launching the AWS Mobile SDK for Unity. This new SDK comes in the form of Unity packages that contain .NET classes. These classes allow games written with Unity to call multiple AWS services including Amazon Cognito (Identity and Sync), Amazon Simple Storage Service (S3), Amazon DynamoDB, and Amazon Mobile Analytics. You can use this SDK to build Unity games that run on iOS and Android devices.

We launched the developer preview of the SDK late last year. During the preview we received some great feedback from developers. We added access to Amazon Mobile Analytics and made many other enhancements (see the post Improvements in the AWS Mobile SDK for Unity for more information) based on this feedback.

    Your games can use these services as follows (these links lead to relevant documentation to help you to get started):

    • Cognito Identity to allow guest access, with an easy and seamless transition to authenticated access via a public login provider such as Amazon, Facebook, Twitter, Google, or any provider that is compatible with OpenID Connect.
    • Cognito Sync to store user preferences and game state in the Amazon Cognito sync store, to enable a consistent experience across a user’s devices.
    • S3 to store and retrieve game assets (including images and videos) and other data.
    • DynamoDB to store and retrieve JSON documents and information identified by a key, with single-digit millisecond latency at any scale.
    • Mobile Analytics to collect and analyze usage data. You could, for example, use custom events to track the popularity of different features or locations within your game and use the results to tune and optimize game play.

    Available Now
    The AWS Mobile SDK for Unity is available now and you can download it today. Source code is available on GitHub, as is some sample code. To get started, open up the AWS Mobile SDK for Unity Developer Guide.

    Jeff;

  • Useful New IAM Feature – Access Key Last Use Info

    by Jeff Barr | on | in Identity and Access Management |

    Best practices for IT security are often easier to define than to implement. We are doing our best to define best practices at the AWS Security Center, backed up with services and security features that are as easy to implement as possible.

    In this vein, I want to make sure that you are fully aware of a useful new AWS Identity and Access Management (IAM) feature. We launched it last month with a detailed post on the AWS Security Blog. I am following up with this post just in case you missed the first one!

As you probably know, you need to use an access key when you make a call to the AWS API. You also need to use the key when you drive AWS through the AWS Command Line Interface (CLI) or the AWS Tools for Windows PowerShell. You can have two access keys for each AWS account and for each IAM user within the account. Each key can be active or inactive (if you call an AWS API using an inactive key, the call will fail).

    In order to maintain proper security hygiene, we strongly recommend that you rotate your keys on a regular basis. This is a simple matter of exposure: older keys are more likely to have been mishandled. Although we advise against the use of account keys, the recommendation to rotate them still applies!

While the specifics will vary based on your application, you will generally follow these steps to rotate a set of account or user keys:

    1. Delete the inactive key, if any.
    2. Create a new key.
    3. Update all code and configuration files to use the new key.
    4. Make sure that no applications or scripts are still using the previous key.
    5. Deactivate the previous key.

    The feature described in the post on the Security Blog was designed to help you with step 4, by telling you when a particular access key was last used. This will allow you to deactivate the previous key (step 5) with the confidence that it is no longer in use. It also allows you to identify the IAM users who have access but are no longer using it.
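As a sketch of how last-used information can gate step 5 (a hypothetical helper of my own, not part of the IAM API; with boto3 the timestamp would come from GetAccessKeyLastUsed):

```python
# Sketch: only deactivate the previous key once it has been idle for a
# comfortable margin. The idle window and dates are illustrative.
import datetime

def safe_to_deactivate(last_used, now, idle_days=7):
    """True if the key has been idle for at least `idle_days`.
    `last_used` is None when the key has never been used."""
    if last_used is None:
        return True
    return (now - last_used) >= datetime.timedelta(days=idle_days)

now = datetime.datetime(2015, 5, 20)
print(safe_to_deactivate(datetime.datetime(2015, 5, 1), now))   # → True
print(safe_to_deactivate(datetime.datetime(2015, 5, 19), now))  # → False

# With boto3, the timestamp would come from something like:
#   iam = boto3.client("iam")
#   last = iam.get_access_key_last_used(AccessKeyId=key_id)
#   last_used = last["AccessKeyLastUsed"].get("LastUsedDate")
```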

    You can access this feature through the IAM API (GetAccessKeyLastUsed) or from the IAM Console:

    As you can see, it is time for me to follow my own advice and to rotate my keys! It also looks like I have never bothered to use one of my keys. The information provided by the API and the Console also includes the last service and the last region; this will help you to locate any legacy code that is still using outdated keys.

    To learn more about this feature, read New in IAM: Quickly Identify When an Access Key Was Last Used in the AWS Security Blog.

    Jeff;

  • AWS Educate – Credits, Training, Content, and Collaboration for Students & Educators

    by Jeff Barr | on | in Education |

    We want to do our part to help educational institutions all over the world train and graduate students who are ready, willing, and able to power the cloud-powered world of tomorrow! The AWS Educate initiative will provide students and educators with four important resources:

    • Grants of AWS credits for use in courses and projects.
    • Free content to embed in courses or to use as-is.
    • Access to free and discounted AWS Training resources.
    • Online and in-person collaboration and networking opportunities.

Membership in AWS Educate is open to AWS-approved, accredited educational institutions, educators, and students. Simply click here to apply!

    School administrators (generally faculty or a member of the IT department) can sign up for AWS Educate and appoint themselves or another individual within the organization as the Central Point of Contact. This person is responsible for providing a list of valid domains that can be used to identify current students and faculty. They may also be asked to help verify student and/or educator participation in the program.

    Benefits for Educators
    All educators are eligible for the following benefits:

    • Free access to the AWS Educate Educator Collaboration Portal.
    • Free access to AWS Essentials eLearning.
    • 50% discount on instructor-led training.
    • 50% discount on AWS Certification.
    • Access to free Online Labs.

    Institutions that have joined AWS Educate are eligible for a grant of $200 in AWS credits per educator. If the institution has not joined, each educator is eligible for a grant of $75 in AWS credits (all amounts are in USD).

    Benefits for Students
All students have access to the AWS Educate Student Portal and free Online Labs, along with a grant of $35 in AWS credits. If the student is a member of an organization that has joined AWS Educate, they are eligible for a grant of $100 in AWS credits.

    AWS Educate on Campus
We are planning to conduct large-scale training, hackathons, and other on-campus events at some of the organizations that join AWS Educate. The AWS Evangelists will also be opening up time on their calendars to visit and speak at events at these organizations.

    Apply Today
    As I noted above, this is a world-wide program; students and educators from all over the world are more than welcome to apply to the program today.

    Jeff;