AWS Blog

AWS Week in Review – August 22, 2016

by Jeff Barr | in Week in Review

Here’s the first community-driven edition of the AWS Week in Review. In response to last week’s blog post (AWS Week in Review – Coming Back With Your Help!), 9 other contributors helped to make this post a reality. That’s a great start; let’s see if we can go for 20 this week.

Monday, August 22

Tuesday, August 23

Wednesday, August 24

Thursday, August 25

Friday, August 26

Sunday, August 28

New & Notable Open Source

New SlideShare Presentations

Upcoming Events

Help Wanted

Stay tuned for next week, and please consider helping to make this a community-driven effort!

Jeff;

Improvements to CloudWatch Logs & Dashboards

by Jeff Barr | in Amazon CloudWatch, Launch

Amazon CloudWatch helps you to see, diagnose, react to, and resolve issues that arise in your AWS infrastructure and in the applications that you run on AWS. Today, I would like to talk about several usability and functionality improvements to CloudWatch Logs (Store and Monitor OS & Application Log Files with Amazon CloudWatch) and to CloudWatch Dashboards (CloudWatch Dashboards – Create & Use Customized Metrics Views).

Usability Improvements to CloudWatch Logs
CloudWatch Logs is a highly available, scalable, durable, and secure service to manage your operating system and application log files. It allows you to ingest, store, filter, search, and archive the logs, reducing your operational burden and allowing you to focus on your application and your business.

In order to help you stay efficient and productive even as the number and size of your logs grow, we have made several usability improvements to the CloudWatch Logs Console:

  • Improved formatting for log data.
  • Simplified access to lengthy log files.
  • Easier searching within a log group.
  • Simplified collaboration around log files.
  • Better searching within a specific time frame.

Prior to today’s launch, we also made some improvements to CloudWatch Dashboards:

  • Full screen mode.
  • Dark theme.
  • Control over the range of the Y axis on charts.
  • Simplified renaming of charts.
  • Persistent storage of chart settings.

CloudWatch Logs Console in Action
Let’s take a look at each of these improvements!

Open up the CloudWatch Logs Console, click on a Log Group, and then on a Log Stream within the group. Find the View options menu on the right:

Click on Expand all in order to see the log messages in expanded, multi-line form like this:

You can also Switch to text view in order to see the logs in their unadorned, plain-text form:

We have also improved the display of log data across all streams within a log group. Once you select a Log Group and click Search Events, you can see the log data from all streams within that log group. For example, I can easily identify the Billed Duration for multiple invocations of a single Lambda function:

Even better, we have replaced the original paginated view with an infinite scroll bar. You can now scroll to your heart’s content through log files of any length:

You can now refine your search to a specific time frame or to a custom date range with a single click, like this:

If you are working as part of a team, you can now share the URL of your log analysis session. The URL captures the search parameters and filters, and includes a fragment that looks like this:

group=<log_group_name>_log;stream=<log_stream_name>;filter=<filter_parameter>;start=PT<time_frame>

These improvements to the CloudWatch Logs Console are available now and you can start using them today. To learn more, read Getting Started with CloudWatch Logs.
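
These searches can also be scripted. Here’s a minimal sketch using the AWS CLI to search all streams in a log group for an exact phrase over the last hour (the log group name and filter pattern are hypothetical examples; times are expressed in milliseconds since the epoch):

$ aws logs filter-log-events \
  --log-group-name /aws/lambda/my-function \
  --filter-pattern '"Billed Duration"' \
  --start-time $(($(date +%s) - 3600))000 \
  --end-time $(date +%s)000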

Recent Improvements to the CloudWatch Dashboards
You may have already noticed the improvements that we recently made to the CloudWatch Dashboards. First, there’s a new full screen mode for Dashboards, accessible by clicking on Enter full screen in the Actions menu:

Once you are in full screen mode, you can click on Dark to switch to the new, night-owl-friendly dark theme:

Here’s a simple Redis dashboard in full screen mode using the dark theme:

Sometimes you want to have more control over how a chart is displayed on your dashboard. As an example, outliers in your data may make your chart less readable, and you may want to keep the dashboard focused on a specific Y axis range. Here’s a chart where that’s the case; the outlier masks the trend that happened after the big spike:

To edit the Y axis, click on the tool selector and select Edit:

Choose Graph Options and then edit the values for the Y axis until you are satisfied with the appearance of the chart, then click on Update widget:

Here’s what the chart looks like after that:

Many of our customers wanted to be able to rename a chart without leaving the dashboard. You can now do that with a click (hover your mouse near the name and then click on the pencil):

Finally, CloudWatch now remembers the time range, timezone preference, refresh interval, and auto-refresh setting for each chart!
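
If you prefer to manage dashboards programmatically, the same Y axis bounds can be expressed in the dashboard body JSON. Here’s a minimal sketch using the AWS CLI (this assumes a CLI version that includes the put-dashboard command; the dashboard name, metric, and limits are illustrative):

$ aws cloudwatch put-dashboard --dashboard-name MyDashboard --dashboard-body '{
    "widgets": [{
      "type": "metric",
      "properties": {
        "metrics": [["AWS/EC2", "CPUUtilization", "InstanceId", "i-1234567890abcdef0"]],
        "region": "us-east-1",
        "period": 300,
        "yAxis": {"left": {"min": 0, "max": 50}}
      }
    }]
  }'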

Amazon CloudWatch Partner Ecosystem
I’d like to wrap things up by sharing some of the great work that our partners are doing. The following partners are building value-added solutions on top of CloudWatch:

  • Datadog provides integrations to key items in your infrastructure, and gives you the ability to collaborate with your team directly when dealing with incidents.
  • Librato provides integrations across elements of your infrastructure, and supports composite metrics and mathematical transformations to time series data.
  • SignalFx helps provide you with instant visibility into your metrics, and focuses on data analytics and on delivering alerts on service-wide patterns.
  • Splunk offers a platform for operational intelligence that enables you to collect machine data and find insights.
  • Sumo Logic is a machine data analytics service for log management and time series metrics that helps you build, run and secure your applications.

If you are a partner and offer something that belongs on this list, let me know and I’ll update it ASAP!

Jeff;


New – Upload AWS Cost & Usage Reports to Redshift and QuickSight

by Jeff Barr | in Amazon QuickSight, Amazon Redshift, Big Data, Launch

Many AWS customers have been asking us for a way to programmatically analyze their Cost and Usage Reports (read New – AWS Cost and Usage Reports for Comprehensive and Customizable Reporting for more info). These customers are often using AWS to run multiple lines of business, making use of a wide variety of services, often spread out across multiple regions. Because we provide very detailed billing and cost information, this is a Big Data problem and one that can be easily addressed using AWS services!

While I was on vacation earlier this month, we launched a new feature that allows you to upload your Cost and Usage reports to Amazon Redshift and Amazon QuickSight. Now that I am caught up, I’d like to tell you about this feature.

Upload to Redshift
I started by creating a new Redshift cluster (if you already have a running cluster, you need not create another one). Here’s my cluster:

Next, I verified that I had enabled the Billing Reports feature:

Then I hopped over to the Cost and Billing Reports and clicked on Create report:

Next, I named my report (MyReportRedshift), made it Hourly, and enabled support for both Redshift and QuickSight:

I wrapped things up by selecting my delivery options:

I confirmed my desire to create a report on the next page, and then clicked on Review and Complete. The report was created and I was informed that the first report would arrive in the bucket within 24 hours:

While I was waiting I installed PostgreSQL on my EC2 instance (sudo yum install postgresql94) and verified that I was signed up for the Amazon QuickSight preview. Also, following the directions in Create an IAM Role, I made a read-only IAM role and captured its ARN:

Back in the Redshift console, I clicked on Manage IAM Roles and associated the ARN with my Redshift cluster:

The next day, I verified that the files were arriving in my bucket as expected, and then returned to the console in order to retrieve a helper file so that I could access Redshift:

I clicked on Redshift file and then copied the SQL command:

I inserted the ARN and the S3 region name into the SQL (I had to add quotes around the region name in order to make the query work as expected):

And then I connected to Redshift using psql (I can use any visual or CLI-based SQL client):

$ psql -h jbcluster.XYZ.us-east-1.redshift.amazonaws.com \
  -U root -p 5439 -d dev

Then I ran the SQL command. It created a pair of tables and imported the billing data from S3.
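
If you are curious about what the helper file contains, it boils down to a CREATE TABLE statement followed by a Redshift COPY from the report’s location in S3. Here’s a rough sketch of the COPY portion (the table name, bucket, prefix, role ARN, and load options below are placeholders; use the values in the generated file):

$ psql -h jbcluster.XYZ.us-east-1.redshift.amazonaws.com -U root -p 5439 -d dev <<'SQL'
copy awsbilling201608
from 's3://my-billing-bucket/MyReportRedshift/20160801-20160901/'
iam_role 'arn:aws:iam::123456789012:role/MyRedshiftCopyRole'
region 'us-east-1' gzip csv ignoreheader 1;
SQL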

Querying Data in Redshift
Using some queries supplied by my colleagues as a starting point, I summed up my S3 usage for the month:

And then I looked at my costs on a per-AZ basis:

And on a per-AZ, per-service basis:
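
The column names below are illustrative (the actual names come from the tables created by the generated SQL), but the shape of a per-AZ, per-service query looks something like this:

$ psql -h jbcluster.XYZ.us-east-1.redshift.amazonaws.com -U root -p 5439 -d dev <<'SQL'
select lineitem_availabilityzone as az,
       lineitem_productcode      as service,
       sum(lineitem_unblendedcost) as cost
from awsbilling201608
group by 1, 2
order by cost desc;
SQL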

Just for fun, I spent some time examining the Redshift Console. I was able to see all of my queries:

Analyzing Data with QuickSight
I also spent some time analyzing the cost and billing data using Amazon QuickSight. I signed in and clicked on Connect to another data source or upload a file:

Then I dug in to my S3 bucket (jbarr-bcm) and captured the URL of the manifest file (MyReportRedshift-RedshiftManifest.json):

I selected S3 as my data source and entered the URL:

QuickSight imported the data in a few seconds and the new data source was available. I loaded it into SPICE (QuickSight’s in-memory calculation engine). With three or four more clicks I focused on the per-AZ data, and excluded the data that was not specific to an AZ:

Another click and I switched to a pie chart view:

I also examined the costs on a per-service basis:

As you can see, the new data and the analytical capabilities of QuickSight allow me (and you) to dive deep into your AWS costs in minutes.

Available Now
This new feature is available now and you can start using it today!

Jeff;


AWS Week in Review – Coming Back With Your Help!

by Jeff Barr | in Week in Review

Back in 2012 I realized that something interesting happened in AWS-land just about every day. In contrast to the periodic bursts of activity that were the norm back in the days of shrink-wrapped software, the cloud became a place where steady, continuous development took place.

In order to share all of this activity with my readers and to better illustrate the pace of innovation, I published the first AWS Week in Review in the spring of 2012. The original post took all of about 5 minutes to assemble, post and format. I got some great feedback on it and I continued to produce a steady stream of new posts every week for over 4 years. Over the years I added more and more content generated within AWS and from the ever-growing community of fans, developers, and partners.

Unfortunately, finding, saving, and filtering links, and then generating these posts grew to take a substantial amount of time. I reluctantly stopped writing new posts early this year after spending about 4 hours on the post for the week of April 25th.

After receiving dozens of emails and tweets asking about the posts, I gave some thought to a new model that would be open and more scalable.

Going Open
The AWS Week in Review is now a GitHub project (https://github.com/aws/aws-week-in-review). I am inviting contributors (AWS fans, users, bloggers, and partners) to contribute.

Every Monday morning I will review and accept pull requests for the previous week, aiming to publish the Week in Review by 10 AM PT. In order to keep the posts focused and highly valuable, I will approve pull requests only if they meet our guidelines for style and content.

At that time I will also create a file for the week to come, so that you can populate it as you discover new and relevant content.

Content & Style Guidelines
Here are the guidelines for making contributions:

  • Relevance – All contributions must be directly related to AWS.
  • Ownership – All contributions remain the property of the contributor.
  • Validity – All links must be to publicly available content (links to free, gated content are fine).
  • Timeliness – All contributions must refer to content that was created on the associated date.
  • Neutrality – This is not the place for editorializing. Just the facts / links.

I generally stay away from generic news about the cloud business, and I post benchmarks only with the approval of my colleagues.

And now a word or two about style:

  • Content from this blog is generally prefixed with “I wrote about POST_TITLE” or “We announced that TOPIC.”
  • Content from other AWS blogs is styled as “The BLOG_NAME wrote about POST_TITLE.”
  • Content from individuals is styled as “PERSON wrote about POST_TITLE.”
  • Content from partners and ISVs is styled as “The BLOG_NAME wrote about POST_TITLE.”

There’s room for some innovation and variation to keep things interesting, but keep it clean and concise. Please feel free to review some of my older posts to get a sense for what works.

Over time we might want to create a more compelling visual design for the posts. Your ideas (and contributions) are welcome.

Sections
Over the years I created the following sections:

  • Daily Summaries – content from this blog, other AWS blogs, and everywhere else.
  • New & Notable Open Source.
  • New SlideShare Presentations.
  • New YouTube Videos including APN Success Stories.
  • New AWS Marketplace products.
  • New Customer Success Stories.
  • Upcoming Events.
  • Help Wanted.

Some of this content comes to my attention via RSS feeds. I will post the OPML file that I use in the GitHub repo and you can use it as a starting point. The New & Notable Open Source section is derived from a GitHub search for aws. I scroll through the results and pick the 10 or 15 items that catch my eye. I also watch /r/aws and Hacker News for interesting and relevant links and discussions.

Over time, it is possible that groups or individuals may become the primary contributor for a section. That’s fine, and I would be thrilled to see this happen. I am also open to the addition of new sections, as long as they are highly relevant to AWS.

Adding Content / Creating a Pull Request
It is very easy to participate in this process. You don’t need to use any shell commands or text editors. Start by creating a GitHub account and logging in. I set up two-factor authentication for my account and you might want to do the same.

Now, find a piece of relevant content. As an example, I’ll use the presentation Amazon Aurora for Enterprise Database Applications. I visit the current aws-week-in-review file and click on the Edit button (the pencil icon):

Then I insert the new content (line 81):

I could have inserted several pieces of new content if desired.

Next, I enter a simple commit message, indicate that the commit should go to a branch (this is important), and click on Propose file change.

And that’s it! In my role as owner of the file, I’ll see the pull request, review it, and then merge it into the master branch.

Automation
Earlier this year I tried to automate the process, but I did not like the results. You are welcome to give this a shot on your own. I do want to make sure that we continue to exercise human judgement in order to keep the posts as valuable as possible.

Let’s Do It
I am super excited about this project and I cannot wait to see those pull requests coming in. Please let me know (via a blog comment) if you have any suggestions or concerns.

I should note up front that I am very new to Git-based collaboration and that this is going to be a learning exercise for me. Do not hesitate to let me know if there’s a better way to do things!

Jeff;


Amazon WorkSpaces Update – Hourly Usage and Expanded Root Volume

by Jeff Barr | in Amazon WorkSpaces, Launch

In my recent post, I Love My Amazon WorkSpace, I shared the story of how I became a full-time user and big fan of Amazon WorkSpaces. Since writing the post I have heard similar sentiments from several other AWS customers.

Today I would like to tell you about some new and recent developments that will make WorkSpaces more economical, more flexible, and more useful:

  • Hourly WorkSpaces – You can now pay for your WorkSpace by the hour.
  • Expanded Root Volume – Newly launched WorkSpaces now have an 80 GB root volume.

Let’s take a closer look at these new features.

Hourly WorkSpaces
If you only need part-time access to your WorkSpace, you (or your organization, to be more precise) will benefit from this feature. In addition to the existing monthly billing, you can now use and pay for a WorkSpace on an hourly basis, allowing you to save money on your AWS bill. If you are a part-time employee, a road warrior, share your job with another part-timer, or work on multiple short-term projects, this feature is for you. It is also a great fit for corporate training, education, and remote administration.

There are now two running modes – AlwaysOn and AutoStop:

  • AlwaysOn – This is the existing mode. You have instant access to a WorkSpace that is always running, billed by the month.
  • AutoStop – This is new. Your WorkSpace starts running and billing when you log in, and stops automatically when you remain disconnected for a specified period of time.

A WorkSpace that is running in AutoStop mode will automatically stop a predetermined amount of time after you disconnect (1 to 48 hours). Your WorkSpaces Administrator can also force a running WorkSpace to stop. When you next connect, the WorkSpace will resume, with all open documents and running programs intact. Resuming a stopped WorkSpace generally takes less than 90 seconds.

Your WorkSpaces Administrator has the ability to choose your running mode when launching your WorkSpace:

WorkSpaces Configuration

The Administrator can change the AutoStop time and the running mode at any point during the month. They can also track the number of working hours that your WorkSpace accumulates during the month using the new UserConnected CloudWatch metric, and switch from AutoStop to AlwaysOn when this becomes more economical. Switching from hourly to monthly billing takes place upon request; however, switching the other way takes place at the start of the following month.
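
Administrators who prefer to script these changes can do so from the AWS CLI. Here’s a minimal sketch that switches a WorkSpace to AutoStop with a one-hour timeout and then pulls the UserConnected metric for the month (the WorkSpace ID and dates are placeholders):

$ aws workspaces modify-workspace-properties \
  --workspace-id ws-1a2b3c4d5 \
  --workspace-properties RunningMode=AUTO_STOP,RunningModeAutoStopTimeoutInMinutes=60

$ aws cloudwatch get-metric-statistics \
  --namespace AWS/WorkSpaces --metric-name UserConnected \
  --dimensions Name=WorkspaceId,Value=ws-1a2b3c4d5 \
  --statistics Maximum --period 3600 \
  --start-time 2016-08-01T00:00:00Z --end-time 2016-08-31T00:00:00Z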

All new Amazon WorkSpaces can take advantage of hourly billing today. If you’re using a custom image for your WorkSpaces, you’ll need to refresh your custom images from the latest Amazon WorkSpaces bundles. The ability for existing WorkSpaces to switch to hourly billing will be added in the future.

To learn more about pricing for hourly WorkSpaces, visit the WorkSpaces Pricing page.

Expanded Root Volume
By popular demand we have expanded the size of the root volume for newly launched WorkSpaces to 80 GB, allowing you to run more applications and store more data at no additional cost. Your WorkSpaces Administrator can rebuild existing WorkSpaces in order to upgrade them to the larger root volumes (read Rebuild a WorkSpace to learn more). Rebuilding a WorkSpace will restore the root volume (C:) to the most recent image of the bundle that was used to create the WorkSpace. It will also restore the data volume (D:) from the last automatic snapshot.

Some WorkSpaces Resources
While I have your attention, I would like to let you know about a couple of other important WorkSpaces resources:

Available Now
The features that I described above are available now and you can start using them today!

Jeff;

AWS Webinars – August, 2016

by Jeff Barr | in Webinars

Everyone on the AWS team understands the value of educating our customers on the best ways to use our services. We work hard to create documentation, training materials, and blog posts for you! We run live events such as our Global AWS Summits and AWS re:Invent where the focus is on education. Last but not least, we put our heads together and create a fresh lineup of webinars for you each and every month.

We have a great selection of webinars on the schedule for August. As always they are free, but they do fill up and I strongly suggest that you register ahead of time. All times are PT, and each webinar runs for one hour:

August 23

August 24

August 25

August 30

August 31

Jeff;

PS – Check out the AWS Webinar Archive for more great content!


AWS Solution – Transit VPC

by Jeff Barr | in Amazon VPC, AWS Marketplace, Quick Start

Today I would like to tell you about a new AWS Solution. This one is cool because of what it does and how it works! Like the AWS Quick Starts, this one was built by AWS Solutions Architects and incorporates best practices for security and high availability.

The new Transit VPC Solution shows you how to implement a very useful networking construct that we call a transit VPC. You can use this to connect multiple Virtual Private Clouds (VPCs) that might be geographically disparate and/or running in separate AWS accounts, to a common VPC that serves as a global network transit center. This network topology simplifies network management and minimizes the number of connections that you need to set up and manage. Even better, it is implemented virtually and does not require any physical network gear or a physical presence in a colocation transit hub. Here’s what this looks like:

In this diagram, the transit VPC is central, surrounded by additional “spoke” VPCs, corporate data centers, and other networks.

The transit VPC supports several important use cases:

  • Private Networking – You can build a private network that spans two or more AWS Regions.
  • Shared Connectivity – Multiple VPCs can share connections to data centers, partner networks, and other clouds.
  • Cross-Account AWS Usage – The VPCs and the AWS resources within them can reside in multiple AWS accounts.

The solution uses an AWS CloudFormation stack to launch and configure all of the AWS resources. It provides you with three throughput options ranging from 500 Mbps to 2 Gbps, each implemented over a pair of connections for high availability. The stack makes use of the Cisco Cloud Services Router (CSR), which is now available in AWS Marketplace. You can use your existing CSR licenses (the BYOL model) or you can pay for your CSR usage on an hourly basis. The cost to run a transit VPC is based on the throughput option and licensing model that you choose, and ranges from $0.21 to $8.40 per hour, with an additional cost (for AWS resources) of $0.10 per hour for each spoke VPC. There’s an additional cost of $1 per month for an AWS Key Management Service (KMS) customer master key that is specific to the solution. All of these prices are exclusive of network transit costs.

The template installs and uses a pair of AWS Lambda functions in a creative way!

The VGW Poller function runs every minute. It scans all of the AWS Regions in the account, looking for appropriately tagged Virtual Private Gateways in spoke VPCs that do not have a VPN connection. When it finds one, it creates (if necessary) the corresponding customer gateway and the VPN connections to the CSR, and then saves the information in an S3 bucket.
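
In CLI terms, the poller’s search in each region is roughly equivalent to a tag-filtered query like the one below (the tag key is an assumption about how the solution marks spoke VPCs; the implementation guide documents the exact tag to use):

$ aws ec2 describe-vpn-gateways --region us-west-2 \
  --filters Name=tag-key,Values=transitvpc:spoke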

The Cisco Configurator function is triggered by the Put event on the bucket. It parses the VPN connection information and generates the necessary config files, then pushes them to the CSR instances using SSH. This allows the VPN tunnels to come up and, via the magic of BGP, neighbor relationships to be established with the spoke VPCs.

By using Lambda in this way, new spoke VPCs can be brought online quickly without the overhead of keeping an underutilized EC2 instance up and running.

The solution’s implementation guide, as always, contains step-by-step directions and security recommendations.

Jeff;

PS – Check out additional network best practice guidance to find answers to common network questions!

AWS Snowball Update – Job Management API & S3 Adapter

by Jeff Barr | in AWS Import/Export, Launch

We introduced AWS Import/Export Snowball last fall from the re:Invent stage. The Snowball appliance is designed for customers who need to transfer large amounts of data into or out of AWS on a one-time or recurring basis (read AWS Import/Export Snowball – Transfer 1 Petabyte Per Week Using Amazon-Owned Storage Appliances to learn more).

Today we are launching two important additions to Snowball. Here’s the scoop:

  • Snowball Job Management API – The new Snowball API lets you build applications that create and manage Snowball jobs.
  • S3 Adapter – The new Snowball S3 Adapter lets you access a Snowball appliance as if it were an S3 endpoint.

Time to dive in!

Snowball Job Management API
The original Snowball model was interactive and console-driven. You could create a job (basically “Send me a Snowball”) and then monitor its progress, tracking the shipment, transit, delivery, and return to AWS visually. This was great for one-off jobs, but did not meet the needs of customers who wanted to integrate Snowball into their existing backup or data transfer model. Based on the requests that we received from these customers and from our Storage Partners, we are introducing a Snowball Job Management API today.

The Snowball Job Management API gives our customers and partners the power to make Snowball an intrinsic, integrated part of their data management solutions. Here are the primary functions:

  • CreateJob – Creates an import or export job and initiates shipment of an appliance.
  • ListJobs – Fetches a list of jobs and associated job states.
  • DescribeJob – Fetches information about a specific job.

Read the API Reference to learn more!
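
The same functions are exposed through the AWS CLI. Here’s a minimal sketch of an import job (the bucket, address ID, role ARN, and job ID are placeholders):

$ aws snowball create-job --job-type IMPORT \
  --resources 'S3Resources=[{BucketArn=arn:aws:s3:::my-import-bucket}]' \
  --address-id ADID1234abcd-1234-abcd-1234-123456789012 \
  --role-arn arn:aws:iam::123456789012:role/SnowballImportRole \
  --shipping-option SECOND_DAY \
  --description "August archive import"

$ aws snowball list-jobs
$ aws snowball describe-job --job-id JID8700b522-4e9b-4deb-a0c4-29fdcb712345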

I’m looking forward to reading about creative and innovative applications that make use of this new API! Leave me a comment and let me know what you come up with.

S3 Adapter
The new Snowball S3 Adapter allows you to access a Snowball as if it were an Amazon S3 endpoint running on-premises. This allows you to use your existing, S3-centric tools to move data to or from a Snowball.

The adapter is available for multiple Linux distributions and Windows releases, and is easy to install:

  1. Download the appropriate file from the Snowball Tools page and extract its contents to a local directory.
  2. Verify that the adapter’s configuration is appropriate for your environment (the adapter listens on port 8080 by default).
  3. Connect your Snowball to your network and get its IP address from the built-in display on the appliance.
  4. Visit the Snowball Console to obtain the unlock code and the job manifest.
  5. Launch the adapter, providing it with the IP address, unlock code, and manifest file.

With the adapter up and running, you can use your existing S3-centric tools by simply configuring them to use the local endpoint (the IP address of the on-premises host and the listener port). For example, here’s how you would run the s3 ls command on the on-premises host:

$ aws s3 ls --endpoint http://localhost:8080

After you copy your files to the Snowball, you can easily verify that the expected number of files were copied:

$ snowball validate

The initial release of the adapter supports a subset of the S3 API including GET on buckets and on the service, HEAD on a bucket and on objects, PUT and DELETE on objects, and all of the multipart upload operations. If you plan to access the adapter using your own code or third party tools, some testing is advisable.
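
For example, a recursive copy through the adapter looks just like a normal S3 copy (the local path and bucket name are hypothetical):

$ aws s3 cp /data/archive s3://my-import-bucket/archive \
  --recursive --endpoint http://localhost:8080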

To learn more, read about the Snowball Transfer Adapter.

Available Now
These new features are available now and you can start using them today!

Jeff;


New – Bring Your Own Keys with AWS Key Management Service

by Jeff Barr | in Key Management Service, Launch

AWS Key Management Service (KMS) provides you with seamless, centralized control over your encryption keys. Our customers have told us that they love this fully managed service because it automatically handles all of the availability, scalability, physical security, and hardware maintenance for the underlying Key Management Infrastructure (KMI). It also centralizes key management, with one dashboard that offers creation, rotation, and lifecycle management functions. With no up-front cost and usage-based pricing that starts at $1 per Customer Master Key (CMK) per month, KMS makes it easy for you to encrypt data stored in S3, EBS, RDS, Redshift, and any other AWS service that’s integrated with KMS.

Many AWS customers use KMS to create and manage their keys. A few, however, would like to maintain local control over their keys while still taking advantage of the other features offered by KMS. Our customers tell us that local control over the generation and storage of keys would help them meet their security and compliance requirements in order to run their most sensitive workloads in the cloud.

Bring Your Own Keys
In order to support this important use case, I am happy to announce that you can now bring your own keys to KMS. This allows you to protect extremely sensitive workloads and to maintain a secure copy of the keys outside of AWS. This new feature allows you to import keys from any key management and HSM (Hardware Security Module) solution that supports the RSA PKCS #1 standard, and use them with AWS services and your own applications. It also works in concert with AWS CloudTrail to provide you with detailed auditing information. Putting it all together, you get greater control over the lifecycle and durability of your keys while you use AWS to provide high availability. Most key management solutions in use today use an HSM in the back end, but not all HSMs provide a key management solution.

The import process can be initiated from the AWS Management Console, AWS Command Line Interface (CLI), or by making calls to the KMS API. Because you never want to transmit secret keys in the open, the import process requires you to wrap the key in your KMI beforehand with a public key provided by KMS that is unique to your account. You can use the PKCS #1 scheme of your choice to wrap the key.

Following the directions (Importing Key Material in AWS Key Management Service), I started out by clicking on Create key in the KMS Console:

I entered an Alias and a Description, selected External, and checked the “I understand…” checkbox:

Then I picked the set of IAM users that have permission to use the KMS APIs to administer the key (this step applies to both KMS and External keys, as does the next one):

Then I picked the set of IAM users that can use the key to encrypt and decrypt data:

I verified the key policy, and then I downloaded my wrapping key and my import token. The wrapping key is the 2048-bit RSA public key that I’ll use to encrypt the 256-bit secret key I want to import into KMS. The import token contains metadata to ensure that my exported key can be imported into KMS correctly.

I opened up the ZIP file and put the wrapping key into a directory on my EC2 instance. Then I used the openssl command twice: once to generate my secret key and a second time to wrap the secret key with the wrapping key. Note that I used openssl as a convenient way to generate a 256-bit key and prepare it for import. For production data, you should use a more secure method (preferably a commercial key management or HSM solution) of generating and storing the local copy of your keys.

$ openssl rand -out plain_text_aes_key.bin 32
$ openssl rsautl -encrypt -in plain_text_aes_key.bin -oaep \
  -inkey wrappingKey_fcb572d3-6680-449c-91ab-ac3a5c07dc09_0804104355 \
  -pubin -keyform DER -out enc.aes.key

Finally, I brought it all together by checking “I am ready to upload…” and clicking on Next, then specifying my key materials along with an expiration time for the key. Since the key will be unusable by AWS after the expiration date, you may want to choose the option where the key doesn’t expire until you better understand your requirements. You can always re-import the same key and reset the expiration time later.

I clicked on Finish and the key was Enabled and ready for me to use:

And that’s all I had to do!
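
The same flow can be scripted with the AWS CLI. Here’s a minimal sketch (the key ID, file names, and expiration date are placeholders; the encrypted key material follows the openssl example above):

$ aws kms create-key --origin EXTERNAL --description "Imported master key"
$ aws kms get-parameters-for-import \
  --key-id 1234abcd-12ab-34cd-56ef-1234567890ab \
  --wrapping-algorithm RSAES_OAEP_SHA_1 --wrapping-key-spec RSA_2048
$ aws kms import-key-material \
  --key-id 1234abcd-12ab-34cd-56ef-1234567890ab \
  --encrypted-key-material fileb://enc.aes.key \
  --import-token fileb://import_token.bin \
  --expiration-model KEY_MATERIAL_EXPIRES \
  --valid-to 2016-12-31T00:00:00Z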

Because I set an expiration date for the key, KMS automatically created a CloudWatch metric to track the remaining time until the key expires. I can create a CloudWatch Alarm for this metric as a reminder to re-import the key when it is about to expire. When the key expires, a CloudWatch Event will be generated; I can use this to take an action programmatically.

Available Now
This new feature is now available in AWS GovCloud (US) and all commercial AWS regions except for China (Beijing) and you can start using it today.

Jeff;

Now Available – IPv6 Support for Amazon S3

by Jeff Barr | in Amazon S3, Launch

As you probably know, every server and device that is connected to the Internet must have a unique IP address. Way back in 1981, RFC 791 (“Internet Protocol”) defined an IP address as a 32-bit entity, with three distinct network and subnet sizes (Classes A, B, and C – essentially large, medium, and small) designed for organizations with requirements for different numbers of IP addresses. In time, this format came to be seen as wasteful and the more flexible CIDR (Classless Inter-Domain Routing) format was standardized and put into use. The 32-bit entity (commonly known as an IPv4 address) has served the world well, but the continued growth of the Internet means that all available IPv4 addresses will ultimately be assigned and put to use.

In order to accommodate this growth and to pave the way for future developments, networks, devices, and service providers are now in the process of moving to IPv6. With 128 bits per IP address, IPv6 has plenty of address space (according to my rough calculation, 128 bits is enough to give 3.5 billion IP addresses to every one of the 100 octillion or so stars in the universe). While the huge address space is the most obvious benefit of IPv6, there are other more subtle benefits as well. These include extensibility, better support for dynamic address allocation, and additional built-in support for security.

Today I am happy to announce that objects in Amazon S3 buckets are now accessible via IPv6 addresses via new “dual-stack” endpoints. When a DNS lookup is performed on an endpoint of this type, it returns an “A” record with an IPv4 address and an “AAAA” record with an IPv6 address. In most cases the network stack in the client environment will automatically prefer the AAAA record and make a connection using the IPv6 address.

Accessing S3 Content via IPv6
In order to start accessing your content via IPv6, you need to switch to new dual-stack endpoints that look like this:

http://BUCKET.s3.dualstack.REGION.amazonaws.com

or this:

http://s3.dualstack.REGION.amazonaws.com/BUCKET

If you are using the AWS Command Line Interface (CLI) or the AWS Tools for Windows PowerShell you can use the --enabledualstack flag to switch to the dual-stack endpoints.

We are currently updating the AWS SDKs to support the use_dualstack_endpoint setting and expect to push them out to production by the middle of next week. Until then, refer to the developer guide for your SDK to learn how to enable this feature.
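
If your CLI version already supports it, you can also opt in through a configuration setting rather than a per-command flag (the bucket name below is a placeholder):

$ aws configure set default.s3.use_dualstack_endpoint true
$ aws s3 ls s3://my-bucket --region us-west-2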

Things to Know
Here are some things that you need to know in order to make a smooth transition to IPv6:

Bucket and IAM Policies – If you use policies to grant or restrict access via IP address, update them to include the desired IPv6 ranges before you switch to the new endpoints. If you don’t do this, clients may incorrectly gain or lose access to the AWS resources. Update any policies that exclude access from certain IPv4 addresses by adding the corresponding IPv6 addresses.
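
For example, a policy condition that currently matches only IPv4 ranges needs the corresponding IPv6 CIDR blocks added. Here’s a hedged sketch (the bucket name and CIDR blocks are illustrative):

$ cat > policy.json <<'JSON'
{
  "Version": "2012-10-17",
  "Statement": [{
    "Sid": "AllowFromCorpNetwork",
    "Effect": "Allow",
    "Principal": "*",
    "Action": "s3:GetObject",
    "Resource": "arn:aws:s3:::my-bucket/*",
    "Condition": {
      "IpAddress": {
        "aws:SourceIp": ["192.0.2.0/24", "2001:DB8:1234:5678::/64"]
      }
    }
  }]
}
JSON
$ aws s3api put-bucket-policy --bucket my-bucket --policy file://policy.json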

IPv6 Connectivity – Because the network stack will prefer an IPv6 address to an IPv4 address, an unusual situation can arise under certain circumstances. The client system can be configured for IPv6 but connected to a network that is not configured to route IPv6 packets to the Internet. Be sure to test for end-to-end connectivity before you switch to the dual-stack endpoints.

Log Entries – Log entries will include the IPv4 or IPv6 address, as appropriate. If you analyze your log files using internal or third-party applications, you should ensure that they are able to recognize and process entries that include an IPv6 address.

S3 Feature Support – IPv6 support is available for all S3 features with the exception of Website Hosting, S3 Transfer Acceleration, and access via BitTorrent.

Region Support – IPv6 support is available in all commercial AWS Regions and in AWS GovCloud (US). It is not available in the China (Beijing) Region.

Jeff;