AWS Official Blog
Let’s take a quick look at what happened in AWS-land last week:
- Thursday, May 7 – Webinar – NOAA Big Data Project: AWS Data Alliance Information Session.
- Tuesday, May 19 – Webinar – Industry Trends and Best Practices for Cloud Adoption.
- Tuesday, May 19 – Webinar – Getting Started: AWS Services Overview.
- Tuesday, May 19 – Webinar – Getting Started: Storage with Amazon S3 and Amazon Glacier.
- Wednesday, May 20 – Webinar – Getting Started with Amazon Web Services.
- Wednesday, May 20 – Webinar – Deep Dive: Infrastructure as Code.
- Wednesday, May 20 – Webinar – Getting Started with Amazon EMR.
- Thursday, May 21 – Webinar – Deep Dive: Scaling Up to Your First 10 Million Users.
- Thursday, May 21 – Webinar – Streaming Data Processing with Amazon Kinesis and AWS Lambda.
- Thursday, May 21 – Webinar – Deep Dive: Amazon Virtual Private Cloud.
- Friday, May 22 – Webinar – Best Practices: Control Access Authentication and Authorization with AWS IAM.
- Tuesday, May 26 – Live Event – AWS User Group Norway – Designing for Elasticity on AWS.
- AWS Summits.
Upcoming Events at the AWS Loft
- Monday, May 4 – May the Fourth Be With You – Loft Movie Day: The Empire Strikes Back.
- Friday, May 8 – AWS Pop-up Loft Ping-Pong Tournament.
- Monday, May 11 – Advanced AWS Bootcamp – Taking AWS to the Next Level.
- Tuesday, May 12 – Chef Bootcamp – A Taste of Chef on AWS.
- Tuesday, May 12 – Behind the Scenes with Offerletter.io – How Not to Lose a Million Dollars – An Engineer's Guide to Negotiating Salary and Equity.
- Wednesday, May 13 – A Taste of Chef on AWS.
- Thursday, May 14 – Chef Ask an Architect.
- Thursday, May 14 – Internet of Things Loft Talks.
- Friday, May 15 – Chef Bootcamp – A Taste of Chef on AWS.
- Monday, May 18 – AWS Bootcamp – Architecting Highly Available Applications on AWS.
- Tuesday, May 19 – AWS Lambda Bootcamp.
- Friday, May 22 – AWS Pop-up Loft Hack Series.
- Thursday, May 28 – Build Your Own Website: Making Web Development Fun and Easy.
We have added another AWS Quick Start Reference Deployment. The new SAP Business One, Version for SAP HANA document will show you how to get on the fast track to plan, deploy, and configure this enterprise resource planning (ERP) solution. It is powered by SAP HANA, SAP’s in-memory database.
This deployment builds on our existing SAP HANA on AWS Quick Start. It makes use of Amazon Elastic Compute Cloud (EC2) and Amazon Virtual Private Cloud, and is launched via an AWS CloudFormation template.
The CloudFormation template creates the following resources, all within a new or existing VPC:
- A NAT instance in the public subnet to support inbound SSH access and outbound Internet access.
- A Microsoft Windows Server instance in the public subnet for downloading SAP HANA media and to provide a remote desktop connection to the SAP Business One client instance.
- Security groups and IAM roles.
- An SAP HANA system installed with Amazon Elastic Block Store (EBS) volumes configured to meet HANA’s performance requirements.
- SAP Business One, version for SAP HANA, client and server components.
The document will help you to choose the appropriate EC2 instance types for both production and non-production scenarios. It also includes a comprehensive, step-by-step walk-through of the entire setup process. During the process, you will need to log in to the Windows instance using an RDP client in order to download and stage the SAP media.
After you make your choices, you simply launch the template, fill in the blanks, and sit back while the resources are created and configured. Exclusive of the media download (a manual step), this process will take about 90 minutes.
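If you prefer the command line to the console, the launch step looks roughly like this. This is just a sketch: the stack name, template URL, and parameter keys below are placeholders, and the real values are listed in the Quick Start reference guide.
# Hypothetical sketch only -- substitute the template URL and parameters from the guide.
$ aws cloudformation create-stack \
    --stack-name sap-business-one-hana \
    --template-url https://s3.amazonaws.com/example-quickstart-bucket/sap-b1-hana.template \
    --parameters ParameterKey=KeyPairName,ParameterValue=my-key-pair \
    --capabilities CAPABILITY_IAM
# The template creates IAM roles, hence CAPABILITY_IAM. Watch progress with:
$ aws cloudformation describe-stacks --stack-name sap-business-one-hana --query "Stacks[0].StackStatus"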
The quick start reference guide is available now and you can read it today!
The guest blog post below was written by Jaehyun Wie while he was a developer intern on the AWS Elastic Beanstalk Team. It shows you how to run your Docker apps locally using the Elastic Beanstalk Command Line Interface (CLI).
The Elastic Beanstalk command line interface (EB CLI) makes it easier for developers who work with command line tools to get started with Elastic Beanstalk. Last November, we released a revamped version of the EB CLI that added a number of new commands and made it even simpler to get started. Today, we’ve added new commands to run your app locally.
In this post, we will walk through a simple example of using the new local commands. The remainder of this post assumes that you have the EB CLI v3.3 and Docker 1.6.0 installed. If you do not have the EB CLI installed, see Install the EB CLI using pip (Windows, Linux, OS X, or Unix). To install Docker, see the Docker installation instructions. Before going any further, make sure that the docker command is on your PATH. If you are using boot2docker, make sure that the boot2docker VM is up and running.
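A quick sanity check for both prerequisites might look like this (the boot2docker steps apply only if you are using boot2docker on OS X or Windows):
# Verify that the docker client is on your PATH and can talk to a daemon.
$ docker version
# boot2docker only: start the VM and point the docker client at it.
$ boot2docker up
$ eval "$(boot2docker shellinit)"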
Creating the App
To begin, we will create an app that can be run on an Elastic Beanstalk platform that is preconfigured for Docker:
$ git clone https://github.com/awslabs/eb-python-flask.git
$ cd eb-python-flask
$ eb init -p "python-3.4-(preconfigured-docker)"
Running the App Locally
In order to run our app locally, all we need to do is use the local run command:
$ eb local run
This command will do everything required to run the Flask app in a Docker container. The terminal stays attached while your app is running, and you can stop it at any time with CTRL+C. By default, the container will listen on port 8080. To run your app on a different port, use the --port option:
$ eb local run --port 5000
You can also pass in environment variables at runtime if your app depends on them:
$ eb local run --envvars FLASK_DEBUG=true,APP_VERSION=v1.2.0
Opening Your App in a Browser
Now that your app is running, you can open it in a browser. Open a new terminal and run the following command:
$ eb local open
Viewing Status and Getting Application Logs
You can view the status of your local application like this:
$ eb local status
This will display output that looks like this:
Platform: 64bit Debian jessie v1.2.1 running Python 3.4 (Preconfigured - Docker)
Container name: fdc00101ed4ebf79a5119bb67bf59f56618ce1da
Container ip: 127.0.0.1
Container running: True
Exposed host port(s): 8080
Full local URL(s): 127.0.0.1:8080
The local command also maps logs to your current directory so that you can access your application logs. To see where the logs are stored:
$ eb local logs
Each invocation of local run creates a new sub-directory for logs, so your logs will never be overwritten.
To learn more, read Running a Docker Environment Locally with the EB CLI and eb local command documentation.
— Jaehyun Wie, SDE Intern
I have become a devoted user of Amazon WorkDocs. I draft my blog posts (including this one) and then use WorkDocs to route them to the appropriate people and teams for review and translation. On an average day I probably upload new versions of 4 or 5 draft blog posts and review and respond to feedback on a similar number.
Today we are making WorkDocs even more useful by introducing additional sharing and ownership options for folders and documents. Let’s take a look at these new features!
Folder Link Sharing
You can now share a WorkDocs folder by creating and then sending a link. You can share read only or read & write access to the folder. To share a folder using a link, select the folder and then click on Share Link:
Then choose the type of access that you would like to share:
Copy the resulting link and send it to the lucky recipients. If you share read only access, the link recipient can only read the contents of the folder. If you share read & write access, the recipient can read the contents of the folder, provide feedback on the contents, and upload new versions of any of the documents or folders within. To learn more, read Creating a Shared Link in the WorkDocs Web Client Help.
Sharing with Groups
You can now share individual documents and entire folders with Active Directory (AD) groups. This will share the items with all of the members of the group. You can do this by entering the name of the group when you share the item:
Co-Ownership
You can now make your colleagues and collaborators into co-owners of your documents and files. A co-owner can rename and delete documents and folders, and can also re-share them. Here’s how I would share my draft posts with Werner:
These features are available now and you can start using them today!
The skills and techniques needed to create, store, process, and manage data sets that start in the hundreds of gigabytes and grow to multiple terabytes are all too rare. It is time to change that!
We have teamed up with the Square Kilometre Array (SKA) to create the new AstroCompute in the Cloud grant program to address these “to infinity and beyond” sorts of problems and to ensure that mature, high-quality data management and processing solutions are in place by the time the SKA starts to pump out data in 2020 or so.
What’s the SKA?
I should start with a quick introduction to the SKA. It is funded by 11 world governments, with more planning to join, and will have a physical footprint in South Africa and Australia. The site in South Africa will be home to a set of high- and mid-frequency dish antennas (200 to start, with plans to grow to 2,000 over time, spread across eight other African countries). The site in Australia will host 125,000 low-frequency dipole antennas at the start, growing to a total of one million by the late 2020s. These antennas will allow astronomers to monitor the sky in unprecedented detail and to run whole-sky surveys far faster than any system currently in existence. The goal is to tackle some of the most fundamental questions about our universe.
All of this raw data (exabytes per day at full scale) will be distilled down to a far more manageable level (exabytes per year) for storage and analysis. The distillation (filtering, calibration, geometric transformation, and more; see the SKA’s Software and Computing page for more information) is the big challenge that we want to address with these grants.
The next step is the target of the grants program that I am sharing with you today. The AWS Scientific Computing group, along with our friends at the SKA, want to make sure that the rest of the world is ready to process this astronomical (sorry) amount of data when it starts to become available in 2020.
To this end, we are providing grants (AWS credits) and up to one petabyte of storage for an AWS Public Data Set. The data set will be initially provided by several of the SKA’s precursor telescopes including CSIRO’s ASKAP, the MWA in Australia, and KAT-7 (pathfinder to the SKA precursor telescope Meerkat) in South Africa. These telescopes have already seen first light and are now producing data. Over time, the data set will grow to the full petabyte using data provided by the other SKA partners. The grants are open to anyone who is making use of radio astronomical telescopes or radio astronomical data resources around the world.
The grants will be administered by the SKA. They will be looking for innovative, cloud-based algorithms and tools that can handle and process this never-ending data stream. You can also read their post on Seeing Stars Through the Cloud.
If you meet the basic qualifications listed above, have an interest in working on this problem, and would like to apply for a grant, please visit the Call for Proposals page.
Amazon Glacier provides secure and durable data storage at extremely low cost (as little as $0.01 per gigabyte per month). Each item stored in Glacier is known as an archive, and can be as large as 40 terabytes. Archives are stored in vaults, each of which can store as many archives as desired.
Today we are giving you a new way to manage access to individual vaults within your AWS account. You can now define a vault access policy and use the policy to grant access to individual users, business groups, and to external business partners. Using a single access policy to control access to a vault can be simpler than using individual user and group IAM policies in many cases. For instance, you can easily write a vault access policy that denies all delete requests on your vault to protect critical data from accidental deletion. Using the vault access policy in this scenario is simpler than configuring multiple IAM policies for users and groups.
You can set up vault access policies from the AWS Management Console, AWS Command Line Interface (CLI), AWS Tools for Windows PowerShell, or by making calls to the Glacier API. You can create one access policy for each vault; it can allow or deny access to individual API functions made by particular users or groups. It can also enable cross-account access, allowing you to share a vault with other AWS accounts.
From the AWS Management Console
Here’s how you can set up a policy using the console. Start by opening the console and selecting the desired vault. You will see the new Permissions tab at the bottom:
Click on Edit Policy Document and Add Permission. Set up a policy that denies all delete requests like this:
Click on Add Permission, Save the policy, and close the window:
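For reference, here is a minimal sketch of what such a deny-all-deletes policy document might look like; the region, account ID, and vault name are placeholders. The same JSON can be pasted into the console’s policy editor or attached programmatically with the SetVaultAccessPolicy API.
# Placeholder values throughout -- substitute your own region, account ID, and vault name.
$ cat > deny-archive-deletes.json <<'EOF'
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Sid": "deny-all-archive-deletes",
      "Effect": "Deny",
      "Principal": "*",
      "Action": "glacier:DeleteArchive",
      "Resource": "arn:aws:glacier:us-east-1:111122223333:vaults/examplevault"
    }
  ]
}
EOF
# Paste the contents of deny-archive-deletes.json into the console's policy editor,
# or attach it with the SetVaultAccessPolicy API.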
Thousands of customers use Amazon DynamoDB to build popular applications for Gaming (Battle Camp), Mobile (The Simpsons Tapped Out), Ad-tech (AdRoll), Internet-of-Things (Earth Networks) and Modern Web applications (SmugMug).
We have made some improvements to DynamoDB in order to make it more powerful and easier to use. Here’s what’s new:
- You can now add, edit, and retrieve native JSON documents in the AWS Management Console.
- You can now use a friendly key condition expression to filter the data returned from a query by specifying a logical condition on the table’s hash key and, optionally, its range key.
Let’s take a closer look!
Native JSON Editing
As you may know, DynamoDB already has support for storage, display, and editing of JSON documents (see my previous post, Amazon DynamoDB Update – JSON, Expanded Free Tier, Flexible Scaling, Larger Items if this is news to you). You can store entire JSON-formatted documents (each up to 400 KB) as single DynamoDB items. This support is implemented within the AWS SDKs and lets you use DynamoDB as a full-fledged document store (a very common use case).
You already have the ability to add, edit, and display JSON documents in the console in DynamoDB’s internal format. Here’s what this looks like:
Today we are adding support for adding, editing, and displaying documents in native JSON format. Here’s what the data from the example above looks like in this format:
You can work with the data in DynamoDB format by clicking on DynamoDB JSON. You can enter (or paste) JSON directly when you are creating a new item:
You can also view and edit the same information in structured form:
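To make the difference between the two formats concrete, here is a small, hypothetical item expressed both ways. The native JSON form is what you type into the new editor; the DynamoDB JSON form, with its type descriptors (S, N, M, L, and so on), is what the low-level API and the CLI expect. The Customers table and its attributes are placeholders.
# Native JSON, as entered in the console's new editor (hypothetical item):
#   {"zip_code": "98074", "last_name": "Baker", "orders": [{"sku": "B-42", "qty": 2}]}
# The same item in DynamoDB JSON, as used by the low-level API and the CLI:
$ aws dynamodb put-item --table-name Customers --item '{
    "zip_code": {"S": "98074"},
    "last_name": {"S": "Baker"},
    "orders": {"L": [{"M": {"sku": {"S": "B-42"}, "qty": {"N": "2"}}}]}
  }'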
Key Condition Expressions
You already have the ability to specify a key condition when you call DynamoDB’s Query function. If you do not specify a condition, all of the items that match the given hash key will be returned. If you specify a condition, only items that meet the criteria that it specifies will be returned. For example, you could choose to retrieve all customers in Zip Code 98074 (the hash key) that have a last name that begins with “Ba.”
With today’s release, we are adding support for a new, easier-to-use expression-style syntax for key conditions. You can now use the following expression to specify the query that I described in the preceding paragraph:
zip_code = "98074" and begins_with(last_name, "Ba")
The expression can include comparison operators (=, <, <=, >, >=), range tests (BETWEEN ... AND), and prefix tests (begins_with). You can specify a key condition (the KeyCondition parameter) or a key condition expression (the KeyConditionExpression parameter) on a given call to the Query function, but you cannot specify both. We recommend the use of expressions for new applications. To learn more, read about Key Condition Expressions in the DynamoDB API Reference.
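Here is a rough sketch of the same query issued from the AWS CLI, against a hypothetical Customers table with zip_code as the hash key and last_name as the range key. When you call the Query API, literal values in the expression are supplied separately as expression attribute values:
# Hypothetical Customers table: zip_code is the hash key, last_name is the range key.
$ aws dynamodb query \
    --table-name Customers \
    --key-condition-expression 'zip_code = :zip AND begins_with(last_name, :prefix)' \
    --expression-attribute-values '{":zip": {"S": "98074"}, ":prefix": {"S": "Ba"}}'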
These features are available now and you can start using them today!
Let’s take a quick look at what happened in AWS-land last week:
- Tuesday, April 28 – Webinar – Getting Started with Amazon EC2 Container Service.
- Tuesday, April 28 – Webinar – Security Best Practice: Compliance beyond the Checkbox.
- Tuesday, April 28 – Webinar – Deploying and Managing Applications in Amazon WorkSpaces.
- Wednesday, April 29 – Webinar – Amazon Elastic File System (EFS): Scalable, Shared File Storage for EC2.
- Wednesday, April 29 – Webinar – Getting Started with AWS CodeDeploy.
- Wednesday, April 29 – Webinar – Securely Deliver High Resolution Content with AWS and Wowza.
- Wednesday, April 29 – Webinar – The AdRoll Monitoring Evolution: From Flying Blind to Flying by Instrument – with APN Partner Datadog and customer AdRoll.
- Thursday, April 30 – Webinar – Introduction to Amazon Machine Learning.
- Thursday, April 30 – Webinar – AWS Lambda: Event-driven Code for Devices and the Cloud.
- Thursday, April 30 – Webinar – Easily Build and Scale Mobile Apps with AWS Mobile Services.
- AWS Summits.
- AWS re:Invent.
Enterprise IT architects and system administrators often ask me how to go about moving their existing compute infrastructure to AWS. Invariably, they have spent a long time creating and polishing their existing system configurations and are hoping to take advantage of this work when they migrate to the cloud.
We introduced VM Import quite some time ago in order to address this aspect of the migration process. Since then, many AWS customers have used it as part of their migration, backup, and disaster recovery workflows.
Today we are improving VM Import by adding new ImportImage and ImportSnapshot functions to the API. These new functions are faster and more flexible than the existing ImportInstance function and should be used for all new applications. Here’s a quick comparison of the benefits of ImportImage with respect to ImportInstance:
- Source: ImportInstance requires an S3 manifest plus objects (usually uploaded from an on-premises image file); ImportImage takes an image file in S3 or an EBS snapshot.
- Destination: ImportInstance produces a stopped EC2 instance; ImportImage produces an Amazon Machine Image (AMI).
- VM Complexity: ImportInstance handles a single volume and a single disk; ImportImage handles multiple volumes and multiple disks.
- Concurrent Imports: 5 for ImportInstance; 20 for ImportImage.
- Operating Systems: Windows Server 2003, Windows Server 2008, Windows Server 2012, Red Hat Enterprise Linux (RHEL), CentOS, Ubuntu, and Debian.
- VM Formats: ImportInstance accepts VMDK, VHD, and RAW; ImportImage accepts VMDK, VHD, RAW, and OVA.
Because ImportImage and ImportSnapshot use an image file in S3 as a starting point, you now have several choices when it comes to moving your images to the cloud. You can use the AWS Management Console, AWS Command Line Interface (CLI), AWS Tools for Windows PowerShell, or custom tools built around the S3 API (be sure to take advantage of multipart uploads if you do this). You can also use AWS Import/Export to transfer your images using physical devices.
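For example, the high-level aws s3 commands in the CLI handle multipart uploads automatically for large files, so copying a multi-gigabyte image up to S3 can be as simple as this (the bucket and key names are placeholders):
# The high-level s3 commands split large files into multipart uploads automatically.
$ aws s3 cp my-server-vm.ova s3://my-import-bucket/images/my-server-vm.ova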
The image file that you provide to ImportImage will typically be an OVA package, but other formats are also supported. The file contains images of one or more disks, a manifest file, certificates, and other data associated with the image.
As noted in the comparison above, ImportImage accepts image files that contain multiple disks and/or multiple disk volumes. This makes it a better match for the complex storage configurations that are often a necessity within an enterprise-scale environment.
ImportImage generates an AMI that can be launched as many times as needed. This is simpler, more flexible, and easier to work with than the stopped instance built by ImportInstance. ImportSnapshot generates an EBS Snapshot that can be used to create an EBS volume.
Behind the scenes, ImportImage and ImportSnapshot are able to distribute the processing and storage operations of each import operation across multiple EC2 instances. This optimization speeds up the import process and also makes it easier for you to predict how long it will take to import an image of a given size.
In addition to building your own import programs that make use of ImportImage and ImportSnapshot (by way of the AWS SDK for Java and the AWS SDK for .NET), you can also access this new functionality from the AWS Command Line Interface (CLI).
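As a rough sketch, importing an OVA that you have already uploaded to S3 looks something like this from the CLI; the description, bucket, and key are placeholders:
# Start the import; the task produces an AMI when it completes.
$ aws ec2 import-image \
    --description "My server VM" \
    --disk-containers "Format=ova,UserBucket={S3Bucket=my-import-bucket,S3Key=images/my-server-vm.ova}"
# Check on the task using the ImportTaskId returned above.
$ aws ec2 describe-import-image-tasks
# ImportSnapshot works the same way for a single disk image and produces an EBS snapshot.
$ aws ec2 import-snapshot \
    --description "My data volume" \
    --disk-container "Format=vmdk,UserBucket={S3Bucket=my-import-bucket,S3Key=images/my-data-volume.vmdk}"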
These new functions are available now in all AWS regions except Beijing (China) and AWS GovCloud (US).
My colleagues are already hard at work on the services, presentations, sessions, signage, clothing, and entertainment for AWS re:Invent 2015. As I announced last month, re:Invent will be taking place from October 6 to 9 this year at The Venetian in Las Vegas.
Registration will open at 9:00 AM Pacific Time on May 12th, just a couple of weeks from now! The registration fee will be $1299.
Here’s what you can do to prepare for re:Invent today:
- Flag the May 12th registration day on your calendar.
- Block off October 6-9, 2015 on your calendar.
- Reserve your hotel room.
- Make your travel plans.
- Sign up for email updates.
I am looking forward to this event, and have already blocked off a ton of time on my calendar for launch blogging.
See you in Vegas!