AWS Blog

AWS Organizations – Policy-Based Management for Multiple AWS Accounts

by Jeff Barr | in AWS Organizations, Launch

Over the years I have found that many of our customers are managing multiple AWS accounts. This situation can arise for several reasons. Sometimes they adopt AWS incrementally and organically, with individual teams and divisions making the move to cloud computing on a decentralized basis. Other companies grow through mergers and acquisitions and take on responsibility for existing accounts. Still others routinely create multiple accounts in order to meet strict guidelines for compliance or to create a very strong isolation barrier between applications, sometimes going so far as to use distinct accounts for development, testing, and production.

As these accounts proliferate, our customers find that they would like to manage them in a scalable fashion. Instead of dealing with a multitude of per-team, per-division, or per-application accounts, they have asked for a way to define access control policies that can be easily applied to all, some, or individual accounts. In many cases, these customers are also interested in additional billing and cost management, and would like to be able to control how AWS pricing benefits such as volume discounts and Reserved Instances are applied to their accounts.

AWS Organizations Emerges from Preview
To support this increasingly important use case, we are moving AWS Organizations from Preview to General Availability today. You can use Organizations to centrally manage multiple AWS accounts, with the ability to create a hierarchy of Organizational Units (OUs), assign each account to an OU, define policies, and then apply them to the entire hierarchy, to select OUs, or to specific accounts. You can invite existing AWS accounts to join your organization and you can also create new accounts. All of these functions are available from the AWS Management Console, the AWS Command Line Interface (CLI), and through the AWS Organizations API.

Here are some important terms and concepts that will help you to understand Organizations (this assumes that you are the all-powerful, overall administrator of your organization’s AWS accounts, and that you are responsible for the Master account):

An Organization is a consolidated set of AWS accounts that you manage. Newly-created Organizations offer the ability to implement sophisticated, account-level controls such as Service Control Policies. This allows Organization administrators to manage lists of allowed and blocked AWS API functions and resources that place guard rails on individual accounts. For example, you could give your advanced R&D team access to a wide range of AWS services, and then be a bit more cautious with your mainstream development and test accounts. Or, on the production side, you could allow access only to AWS services that are eligible for HIPAA compliance.

Some of our existing customers use a feature of AWS called Consolidated Billing. This allows them to select a Payer Account which rolls up account activity from multiple AWS Accounts into a single invoice and provides a centralized way of tracking costs. With this launch, current Consolidated Billing customers now have an Organization that provides all the capabilities of Consolidated Billing, but by default does not have the new features (like Service Control Policies) we’re making available today. These customers can easily enable the full features of AWS Organizations. This is accomplished by first enabling the use of all AWS Organization features from the Organization’s master account and then having each member account authorize this change to the Organization. Finally, we will continue to support creating new Organizations that support only the Consolidated Billing capabilities. Customers that wish to only use the centralized billing features can continue to do so, without allowing the master account administrators to enforce the advanced policy controls on member accounts in the Organization.

An AWS account is a container for AWS resources.

The Master account is the management hub for the Organization and is also the payer account for all of the AWS accounts in the Organization. The Master account can invite existing accounts to join the Organization, and can also create new accounts.

Member accounts are the non-Master accounts in the Organization.

An Organizational Unit (OU) is a container for a set of AWS accounts. OUs can be arranged into a hierarchy that can be up to five levels deep. The top of the hierarchy of OUs is also known as the Administrative Root.

A Service Control Policy (SCP) is a set of controls that the Organization’s Master account can apply to the Organization, selected OUs, or to selected accounts. When applied to an OU, the SCP applies to the OU and to any other OUs beneath it in the hierarchy. The SCP or SCPs in effect for a member account specify the permissions that are granted to the root user for the account. Within the account, IAM users and roles can be used as usual. However, regardless of how permissive the user or the role might be, the effective set of permissions will never extend beyond what is defined in the SCP. You can use this to exercise fine-grained control over access to AWS services and API functions at the account level.
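To make this concrete, here is a minimal sketch (using the AWS SDK for Python, boto3) of creating an SCP; the policy content, description, and name are illustrative and not taken from this post:

import json
import boto3

orgs = boto3.client('organizations')

# Illustrative SCP: allow only EC2 and S3 actions; everything else is implicitly denied.
scp_document = {
    "Version": "2012-10-17",
    "Statement": [
        {
            "Effect": "Allow",
            "Action": ["ec2:*", "s3:*"],
            "Resource": "*"
        }
    ]
}

response = orgs.create_policy(
    Content=json.dumps(scp_document),
    Description='Allow only EC2 and S3',
    Name='ExampleRestrictedPolicy',          # hypothetical policy name
    Type='SERVICE_CONTROL_POLICY'
)
print(response['Policy']['PolicySummary']['Id'])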

An Invitation is used to ask an AWS account to join an Organization. It must be accepted within 15 days, and can be extended via email address or account ID. Up to 20 Invitations can be outstanding at any given time. The invitation-based model allows you to start from a Master account and then bring existing accounts into the fold. When an Invitation is accepted, the account joins the Organization and all applicable policies become effective. Once the account has joined the Organization, you can move it to the proper OU.

AWS Organizations is appropriate when you want to create strong isolation boundaries between the AWS accounts that you manage. However, keep in mind that AWS resources (EC2 instances, S3 buckets, and so forth) exist within a particular AWS account and cannot be moved from one account to another. You do have access to many different cross-account AWS features including VPC peering, AMI sharing, EBS snapshot sharing, RDS snapshot sharing, cross-account email sending, delegated access via IAM roles, cross-account S3 bucket permissions, and cross-account access in the AWS Management Console.

Like consolidated billing, AWS Organizations also provides several benefits when it comes to the use of EC2 and RDS Reserved Instances. For billing purposes, all of the accounts in the Organization are treated as if they are one account and can receive the hourly cost benefit of an RI purchased by any other account in the same Organization (in order for this benefit to be applied as expected, the Availability Zone and other attributes of the RI must match the attributes of the EC2 or RDS instance).

Creating an Organization
Let’s create an Organization from the Console, create some Organizational Units, and then create some accounts. I start by clicking on Create organization:

Then I choose ENABLE ALL FEATURES and click on Create organization:

My Organization is ready in seconds:

I can create a new account by clicking on Add account, and then selecting Create account:

Then I supply the details (the IAM role is created in the new account and grants enough permissions for the account to be customized after creation):

Here’s what the console looks like after I have created Dev, Test, and Prod accounts:

At this point all of the accounts are at the top of the hierarchy:

In order to add some structure, I click on Organize accounts, select Create organizational unit (OU), and enter a name:

I do the same for a second OU:

Then I select the Prod account, click on Move accounts, and choose the Operations OU:

Next, I move the Dev and Test accounts into the Development OU:

At this point I have four accounts (my original one plus the three that I just created) and two OUs. The next step is to create one or more Service Control Policies by clicking on Policies and selecting Create policy. I can use the Policy Generator or I can copy an existing SCP and then customize it. I’ll use the Policy Generator. I give my policy a name and make it an Allow policy:

Then I use the Policy Generator to construct a policy that allows full access to EC2 and S3, and the ability to run (invoke) Lambda functions:

Remember that this policy defines the full set of allowable actions within the account. In order to allow IAM users within the account to be able to use these actions, I would still need to create suitable IAM policies and attach them to the users (all within the member account). I click on Create policy and my policy is ready:

Then I create a second policy for development and testing. This one also allows access to AWS CodeCommit, AWS CodeBuild, AWS CodeDeploy, and AWS CodePipeline:

Let’s recap. I have created my accounts and placed them into OUs. I have created a policy for the OUs. Now I need to enable the use of policies, and attach the policy to the OUs. To enable the use of policies, I click on Organize accounts and select Home (this is not the same as the root because Organizations was designed to support multiple, independent hierarchies), and then click on the checkbox in the Root OU. Then I look to the right, expand the Details section, and click on Enable:

Ok, now I can put all of the pieces together! I click on the Root OU to descend in to the hierarchy, and then click on the checkbox in the Operations OU. Then I expand the Control Policies on the right and click on Attach policy:

Then I locate the OperationsPolicy and click on Attach:

Finally, I remove the FullAWSAccess policy:

I can also attach the DevTestPolicy to the Development OU.

All of the operations that I described above could have been initiated from the AWS Command Line Interface (CLI) or by making calls to functions such as CreateOrganization, CreateAccount, CreateOrganizationalUnit, MoveAccount, CreatePolicy, AttachPolicy, and InviteAccountToOrganization. To see the CLI in action, read Announcing AWS Organizations: Centrally Manage Multiple AWS Accounts.
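As a rough sketch of how those API calls fit together in boto3 (the account email, IDs, and names below are placeholders, not values from this post):

import boto3

orgs = boto3.client('organizations')

# Create the Organization with all features (including SCPs) enabled.
orgs.create_organization(FeatureSet='ALL')

# Find the Administrative Root and create an OU beneath it.
root_id = orgs.list_roots()['Roots'][0]['Id']
ou = orgs.create_organizational_unit(ParentId=root_id, Name='Operations')

# Create a new member account (asynchronous) and invite an existing one.
orgs.create_account(Email='prod@example.com', AccountName='Prod')                   # placeholder email
orgs.invite_account_to_organization(Target={'Id': '111122223333', 'Type': 'ACCOUNT'})  # placeholder account ID

# Move an existing member account into the OU and attach an SCP to the OU.
orgs.move_account(AccountId='111122223333', SourceParentId=root_id,
                  DestinationParentId=ou['OrganizationalUnit']['Id'])
orgs.attach_policy(PolicyId='p-examplepolicyid',                                     # placeholder policy ID
                   TargetId=ou['OrganizationalUnit']['Id'])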

Best Practices for Use of AWS Organizations
Before I wrap up, I would like to share some best practices for the use of AWS Organizations:

Master Account – We recommend that you keep the Master Account free of any operational AWS resources (with one exception). In addition to making it easier for you to make high-quality control decisions, this practice will make it easier for you to understand the charges on your AWS bill.

CloudTrail – Use AWS CloudTrail (this is the exception) in the Master Account to centrally track all AWS usage in the Member accounts.

Least Privilege – When setting up policies for your OUs, assign as few privileges as possible.

Organizational Units – Assign policies to OUs rather than to accounts. This will allow you to maintain a better mapping between your organizational structure and the level of AWS access needed.

Testing – Test new and modified policies on a single account before scaling up.

Automation – Use the APIs and an AWS CloudFormation template to ensure that every newly created account is configured to your liking. The template can create IAM users, roles, and policies. It can also set up logging, create and configure VPCs, and so forth.

Learning More
Here are some resources that will help you to get started with AWS Organizations:

Things to Know
AWS Organizations is available today in all AWS regions except China (Beijing) and AWS GovCloud (US) and is available to you at no charge (to be a bit more precise, the service endpoint is located in US East (Northern Virginia) and the SCPs apply across all relevant regions). All of the accounts must be from the same seller; you cannot mix AWS and AISPL (the local legal Indian entity that acts as a reseller for AWS services in India) accounts in the same Organization.

We have big plans for Organizations, and are currently thinking about adding support for multiple payers, control over allocation of Reserved Instance discounts, multiple hierarchies, and other control policies. As always, your feedback and suggestions are welcome.

Jeff;

New – Manage DynamoDB Items Using Time to Live (TTL)

by Jeff Barr | in Amazon DynamoDB

AWS customers are making great use of Amazon DynamoDB. They love the speed and flexibility and build Ad Tech (reference architecture), Gaming (reference architecture), IoT (reference architecture), and other applications that take advantage of the consistent, single-digit millisecond latency. They also love the fact that DynamoDB is a managed, serverless database that scales to handle millions of requests per second to tables that are many terabytes in size.

Many DynamoDB users store data that has a limited useful life or is accessed less frequently over time. Some of them track recent logins, trial subscriptions, or application metrics. Others store data that is subject to regulatory or contractual limitations on how long it can be stored. Until now, these customers implemented their own time-based data management. At scale, this sometimes meant that they ran a couple of Amazon Elastic Compute Cloud (EC2) instances that did nothing more than scan DynamoDB items, check date attributes, and issue delete requests for items that were no longer needed. This added cost and complexity to their application.

New Time to Live (TTL) Management
In order to streamline this popular and important use case, we are launching a new Time to Live (TTL) feature today. You can enable this feature on a table-by-table basis, specifying an item attribute that contains the expiration time for the item.

Once the attribute has been specified and TTL management has been enabled (a single API call takes care of both operations), DynamoDB will find and delete items that have expired. This processing takes place automatically and in the background and does not affect read or write traffic to the table.

You can use DynamoDB streams (see Sneak Preview – DynamoDB Streams for more info) to process or archive the actual deletions. Like other update records in a stream, the deletions are available on a rolling 24-hour basis. You can move the expired items to cold storage, log them, or update other tables using AWS Lambda and DynamoDB Triggers.
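As a hedged sketch of that pattern, a Lambda function subscribed to the table's stream (with a view type that includes old images) could archive the expired items to S3; the bucket name and key layout here are assumptions:

import json
import boto3

s3 = boto3.client('s3')
ARCHIVE_BUCKET = 'my-archive-bucket'   # hypothetical bucket name

def lambda_handler(event, context):
    for record in event['Records']:
        # TTL expirations (like any deletion) show up as REMOVE records in the stream.
        if record['eventName'] != 'REMOVE':
            continue
        old_image = record['dynamodb'].get('OldImage', {})
        key = 'expired/{}.json'.format(record['eventID'])
        # Write the deleted item to S3 as cold storage.
        s3.put_object(Bucket=ARCHIVE_BUCKET, Key=key, Body=json.dumps(old_image))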

Here’s how you enable TTL for a table and specify the desired attribute:

The attribute must be in DynamoDB’s Number data type, and is interpreted as seconds per the Unix Epoch time system.

As you can see from the screen shot above, you can also enable DynamoDB Streams, and you can look at a preview of the items that will be deleted when you enable TTL.

You can also call the UpdateTimeToLive function from your code, or you can use the update-time-to-live command from the AWS Command Line Interface (CLI).
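For example, a boto3 version of both steps might look like this; the table name, attribute name, and item are assumptions used only for illustration:

import time
import boto3

dynamodb = boto3.client('dynamodb')

# Enable TTL on the table, naming the attribute that holds the expiration time.
dynamodb.update_time_to_live(
    TableName='SessionData',                      # hypothetical table
    TimeToLiveSpecification={'Enabled': True, 'AttributeName': 'expires_at'}
)

# Write an item that expires 30 days from now (seconds since the Unix Epoch).
expires_at = int(time.time()) + 30 * 24 * 60 * 60
dynamodb.put_item(
    TableName='SessionData',
    Item={
        'session_id': {'S': 'abc123'},
        'expires_at': {'N': str(expires_at)}
    }
)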

TTL at TUNE
AWS customer TUNE is already making good use of this feature as part of their HasOffers product.

HasOffers helps customers to analyze the effectiveness of their marketing campaigns, storing massive amounts of ad engagement data in the process. Once the customer-defined time window for the campaign has passed, the data is no longer needed and can be deleted. Before we made the TTL feature available to TUNE, they manually identified and then deleted the stale data. This was labor and compute-intensive, and also consumed some of the provisioned throughput for the table.

Now, they simply set an expiration time for each item and leave the rest to DynamoDB. The stale data disappears automatically, with no impact on the available throughput. As a result, TUNE has been able to purge 85 terabytes of stale data and has reduced their costs by over $200K per year, while also simplifying their application logic.

Things to Know
Here are a couple of things to keep in mind as you are thinking about putting TTL to use in your application.

TTL Attribute – The TTL attribute can be indexed or projected, but it cannot be an element of a JSON document. As I indicated earlier, it must have the Number data type. You can use IAM to regulate access to this attribute, just as you can do for any other one. Items that do not have the designated TTL attribute will not be considered for deletion. In order to avoid a possible accidental deletion due to a malformed TTL value, items that appear to be older than 5 years will not be deleted.

Tables – You can apply a TTL to a new or an existing table. The process of enabling TTL for a table can take up to an hour, and you can only make one change per table at a time.

Background Processing – The scans and the deletions take place in the background and do not count against the provisioned throughput. Deletion times will vary based on the number and nature of the expired items. After the expiration but before the actual deletion, the items remain in the table and will appear in reads and scans.
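If your application cannot tolerate seeing expired-but-not-yet-deleted items, you can filter them out at read time. Here is one hedged sketch using boto3 (the table and attribute names are assumptions):

import time
import boto3
from boto3.dynamodb.conditions import Attr

table = boto3.resource('dynamodb').Table('SessionData')   # hypothetical table

# Exclude items whose TTL attribute is already in the past.
now = int(time.time())
response = table.scan(FilterExpression=Attr('expires_at').gt(now))
live_items = response['Items']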

Indexes – Items are removed from any Local Secondary Indexes immediately, and from Global Secondary Indexes in the usual eventually consistent fashion.

Pricing – There is no charge for the internal scan operation or for the deletion. You will pay for storage until the item is actually deleted.

Available Now
This feature is available now and you can start using it today! To learn more, read about Time to Live in the DynamoDB Developer Guide.

Jeff;

 

Now Available – I3 Instances for Demanding, I/O Intensive Applications

by Jeff Barr | in Amazon EC2, Launch

On the first day of AWS re:Invent I published an EC2 Instance Update and promised to share additional information with you as soon as I had it.

Today I am happy to be able to let you know that we are making six sizes of our new I3 instances available in fifteen AWS regions! Designed for I/O intensive workloads and equipped with super-efficient NVMe SSD storage, these instances can deliver up to 3.3 million IOPS at a 4 KB block size and up to 16 GB/second of sequential disk throughput. This makes them a great fit for any workload that requires high throughput and low latency including relational databases, NoSQL databases, search engines, data warehouses, real-time analytics, and disk-based caches. When compared to the I2 instances, I3 instances deliver storage that is less expensive and more dense, with the ability to deliver substantially more IOPS and more network bandwidth per CPU core.

The Specs
Here are the instance sizes and the associated specs:

Instance Name   vCPU Count   Memory      Instance Storage (NVMe SSD)   Price/Hour
i3.large        2            15.25 GiB   0.475 TB                      $0.15
i3.xlarge       4            30.5 GiB    0.950 TB                      $0.31
i3.2xlarge      8            61 GiB      1.9 TB                        $0.62
i3.4xlarge      16           122 GiB     3.8 TB (2 disks)              $1.25
i3.8xlarge      32           244 GiB     7.6 TB (4 disks)              $2.50
i3.16xlarge     64           488 GiB     15.2 TB (8 disks)             $4.99

The prices shown are for On-Demand instances in the US East (Northern Virginia) Region; see the EC2 pricing page for more information.

I3 instances are available in On-Demand, Reserved, and Spot form in the US East (Northern Virginia), US West (Oregon), US West (Northern California), US East (Ohio), Canada (Central), South America (São Paulo), EU (Ireland), EU (London), EU (Frankfurt), Asia Pacific (Singapore), Asia Pacific (Tokyo), Asia Pacific (Seoul), Asia Pacific (Mumbai), Asia Pacific (Sydney), and AWS GovCloud (US) Regions. You can also use them as Dedicated Hosts and as Dedicated Instances.

These instances support Hardware Virtualization (HVM) AMIs only, and must be run within a Virtual Private Cloud. In order to benefit from the performance made possible by the NVMe storage, you must run one of the following operating systems:

  • Amazon Linux AMI
  • RHEL – 6.5 or better
  • CentOS – 7.0 or better
  • Ubuntu – 16.04 or 16.10
  • SUSE 12
  • SUSE 11 with SP3
  • Windows Server 2008 R2, 2012 R2, and 2016

The I3 instances offer up to 8 NVMe SSDs. In order to achieve the best possible throughput and to get as many IOPS as possible, you can stripe multiple volumes together, or spread the I/O workload across them in another way.

Each vCPU (Virtual CPU) is a hardware hyperthread on an Intel E5-2686 v4 (Broadwell) processor running at 2.3 GHz. The processor supports the AVX2 instructions, along with Turbo Boost and NUMA.

Go For Launch
The I3 instances are available today in fifteen AWS regions and you can start to use them right now.

Jeff;

 

Launch: AWS Elastic Beanstalk launches support for Custom Platforms

by Tara Walker | in AWS Elastic Beanstalk, Launch

There is excitement in the air! I am thrilled to announce that customers can now create custom platforms in AWS Elastic Beanstalk. With this latest release of the AWS Elastic Beanstalk service, developers and systems admins can now create and manage their own custom Elastic Beanstalk platform images allowing complete control over the instance configuration. As you know, AWS Elastic Beanstalk is a service for deploying and scaling web applications and services on common web platforms. With the service, you upload your code and it automatically handles the deployment, capacity provisioning, load balancing, and auto-scaling.

Previously, AWS Elastic Beanstalk provided a set of pre-configured platforms built around various programming languages, Docker containers, and web containers. Elastic Beanstalk would take the selected configuration and provision the software stack and resources needed to run the targeted application on one or more Amazon EC2 instances. With this latest release, you can now create a platform from your own customized Amazon Machine Image (AMI). The custom image can be built from one of the supported operating systems: Ubuntu, RHEL, or Amazon Linux. To simplify the creation of these specialized Elastic Beanstalk platforms, machine images are now created using the Packer tool. Packer is an open source tool that runs on all major operating systems and is used for creating machine and container images for multiple platforms from a single configuration.

Custom platforms allow you to manage and enforce standardization and best practices across your Elastic Beanstalk environments. For example, you can now create your own platforms on Ubuntu or Red Hat Enterprise Linux and customize your instances with languages or frameworks that Elastic Beanstalk does not currently support, such as Rust or Sinatra.

Creating a Custom Platform

In order to create your custom platform, you start with a Packer template. After the Packer template is created, you create a platform definition file (a platform.yaml file, which defines the builder type for the platform), platform hooks, and script files. With these files in hand, you create a zip archive file, called a platform definition archive, to package the files, associated scripts, and any additional items needed to build your Amazon Machine Image (AMI). A sample of a basic folder structure for building a platform definition archive looks as follows:

|-- builder                 Contains files used by Packer to create the custom platform
|-- custom_platform.json    Packer template
|-- platform.yaml           Platform definition file
|-- ReadMe.txt              Describes the sample

The best way to take a deeper look into the new custom platform feature of Elastic Beanstalk is to put the feature to the test and try to build a custom AMI and platform using Packer. To start the journey, I am going to build a custom Packer template. I go to the Packer site, download the Packer tool, and ensure that the binary is in my environment path.

Now let’s build the template. The Packer template is a configuration file in JSON format, used to define the image we want to build. I will open up Visual Studio and use it as the IDE to create a new JSON file to build my Packer template.

The Packer template format has a set of keys designed for the configuration of various components of the image. The keys are:

  • variables (optional): one or more key/value strings defining user variables
  • builders (required): array that defines the builders used to create machine images and configuration of each
  • provisioners (optional): array defining provisioners to be used to install and configure software for the machine image
  • description (optional): string providing a description of template
  • min_packer_version (optional): string of minimum Packer version that is required to parse the template.
  • post-processors (optional): array defining post-processing steps to take once image build is completed

If you want a great example of the Packer template that can be used to create a custom image used for a custom Elastic Beanstalk platform, the Elastic Beanstalk documentation has samples of valid Packer templates for your review.

In the template, I will add a provisioner to run a build script to install Node, along with information about the script location and the command(s) needed to execute the script. My completed JSON file, tara-ebcustom-platform.json, looks as follows:

Now that I have my template built, I will validate the template with Packer on the command line.

 

What is cool is that my Packer template validation fails because, in the template, I specify a script, eb_builder.sh, that is located in a builder folder. However, I have not yet created the builder folder or the shell script noted in my Packer template. A little confused as to why I am happy that my file failed? I believe that this is great news because I can catch errors in my template, and missing files needed to build my machine image, before uploading it to the Elastic Beanstalk service. Now I will fix these errors by creating the folder and file for the builder script.

Using the sample scripts provided in the Elastic Beanstalk documentation, I build my Dev folder with the structure noted above. Within the context of Elastic Beanstalk custom platform creation, the scripts used from the sample are called platform hooks. Platform hooks are run during lifecycle events and in response to management operations.

An example of the builder script used in my custom platform implementation is shown below:

My builder folder structure holds the builder script, platform hooks, and other scripts, referred to as platform scripts, used to build the custom platform. Platform scripts are the shell scripts that you can use to get environment variables and other information in platform hooks. The platform hooks are located in a subfolder of my builder folder and follow the structure shown below:

All of these items (Packer template, platform.yaml, builder script, platform hooks, setup and config files, and platform scripts) make up the platform definition contained in the builder folder you see below.

I will leverage the platform.yaml provided in the sample .yaml file and change it as appropriate for my Elastic Beanstalk custom platform implementation. The result is the following completed platform.yaml file:

version: "1.0"

provisioner:
  type: packer
  template: tara-ebcustom-platform.json
  flavor: amazon

metadata:
  maintainer: TaraW
  description: Tara Sample NodeJs Container.
  operating_system_name: Amazon linux
  operating_system_version: 2016.09.1
  programming_language_name: ECMAScript
  programming_language_version: ECMA-262
  framework_name: NodeJs
  framework_version: 4.4.4
  app_server_name: "none"
  app_server_version: "none"

option_definitions:
  - namespace: "aws:elasticbeanstalk:container:custom:application"
    option_name: "NPM_START"
    description: "Default application startup command"
    default_value: "node application.js"

Now, I will validate my Packer template again on the command line.

 

All that is left for me is to create the platform using the EB CLI. This functionality is available with EB CLI version 3.10.0 or later. You can install the EB CLI from here and follow the instructions for installation in the Elastic Beanstalk developer guide.

To use the EB CLI to create a custom platform, I would select the folder containing the files extracted from the platform definition archive. Within the context of that folder, I need to perform the following steps:

  1. Use the EB CLI to initialize the platform repository and follow the prompts
    • eb platform init or ebp init
  2. Launch the Packer environment with the template and scripts
    • eb platform create or ebp create
  3. Validate an IAM role was successfully created for the instance. This instance profile role will be automatically created via the EB create process.
    • aws-elasticbeanstalk-custom-platform-ec2-role
  4. Verify status of platform creation
    • eb platform status or ebp status

I will now go to the command line and use the EB CLI to initialize the platform by running the eb platform init command.

Next step is to create the custom platform using the EB CLI, so I’ll run the shortened command, ebp create, in my platform folder.

Success! A custom Elastic Beanstalk platform has been created and we can deploy this platform for our web solution. It is important to remember that when you create a custom platform, you launch a single-instance environment without an EIP that runs Packer. You can reuse this environment for multiple platforms and for multiple versions of each platform. Custom platforms are also region-specific, so you must create your platforms separately in each region if you use Elastic Beanstalk in multiple regions.

Deploying Custom Platforms

With the custom platform now created, you can deploy an application either via the AWS CLI or via the AWS Elastic Beanstalk Console. The ability to create an environment with a custom platform is only available in the new environment wizard.

On the Create a new environment page, select the Custom Platform radio option under Platform, and then choose the custom platform you previously created from the list of available custom platforms.

Additionally, the EB CLI can be used to deploy the latest version of your custom platform. Using the command line to deploy the previously created custom platform would look as follows:

  • eb deploy -p tara-ebcustom-platform

Summary

You can get started building your own custom platforms for Elastic Beanstalk today. To learn more about Elastic Beanstalk or custom platforms, visit the AWS Elastic Beanstalk product page or the Elastic Beanstalk developer guide.

 

Tara

 

 

AWS Marketplace Adds Healthcare & Life Sciences Category

by Ana Visneski | in AWS Marketplace, Launch

Wilson To and Luis Daniel Soto are our guest bloggers today, telling you about a new industry vertical category that is being added to the AWS Marketplace. Check it out!

-Ana


AWS Marketplace is a managed and curated software catalog that helps customers innovate faster and reduce costs by making it easy to discover, evaluate, procure, immediately deploy, and manage third-party software solutions. To continue supporting our customers, we’re now adding a new industry vertical category: Healthcare & Life Sciences.

This new category brings together best-of-breed software tools and solutions from our growing vendor ecosystem that have been adapted to, or built from the ground up, to serve the healthcare and life sciences industry.

Healthcare
Within the AWS Marketplace HCLS category, you can find solutions for clinical information systems, population health and analytics, and health administration and compliance services. Some offerings include:

  1. Allgress GetCompliant HIPAA Edition – Reduce the cost of compliance management and adherence by providing compliance professionals improved efficiency by automating the management of their compliance processes around HIPAA.
  2. ZH Healthcare BlueEHS – Deploy a customizable, ONC-certified EHR that empowers doctors to define their clinical workflows and treatment plans to enhance patient outcomes.
  3. Dicom Systems DCMSYS CloudVNA – DCMSYS Vendor Neutral Archive offers a cost-effective means of consolidating disparate imaging systems into a single repository, while providing enterprise-wide access and archiving of all medical images and other medical records.

Life Sciences

  1. National Instruments LabVIEW – Graphical system design software that provides scientists and engineers with the tools needed to create and deploy measurement and control systems through simple yet powerful networks.
  2. NCBI Blast – Analysis tools and datasets that allow users to perform flexible sequence similarity searches.
  3. Acellera AceCloud – Innovative tools and technologies for the study of biophysical phenomena. Acellera leverages the power of AWS Cloud to enable molecular dynamics simulations.

Healthcare and life sciences companies deal with huge amounts of data, and many of their data sets are some of the most complex in the world. From physicians and nurses to researchers and analysts, these users are typically hampered by their current systems. Their legacy software does not let them efficiently store or effectively make use of the immense amounts of data they work with. And protracted and complex software purchasing cycles keep them from innovating at speed to stay ahead of market and industry trends. Data analytics and business intelligence solutions in AWS Marketplace offer specialized support for these industries, including:

  • Tableau Server – Enable teams to visualize across costs, needs, and outcomes at once to make the most of resources. The solution helps hospitals identify the impact of evidence-based medicine, wellness programs, and patient engagement.
  • TIBCO Spotfire and Jaspersoft – TIBCO provides technical teams powerful data visualization, data analytics, and predictive analytics for Amazon Redshift, Amazon RDS, and popular database sources via AWS Marketplace.
  • Qlik Sense Enterprise – Qlik enables healthcare organizations to explore clinical, financial, and operational data through visual analytics to discover insights which lead to improvements in care, reduced costs, and higher value delivered to patients.

With more than 5,000 listings across more than 35 categories, AWS Marketplace simplifies software licensing and procurement by enabling customers to accept user agreements, choose pricing options, and automate the deployment of software and associated AWS resources with just a few clicks. AWS Marketplace also simplifies billing for customers by delivering a single invoice detailing business software and AWS resource usage on a monthly basis.

With AWS Marketplace, we can help drive operational efficiencies and reduce costs in these ways:

  • Easily bring in new solutions to solve increasingly complex issues and gain quick insight into the huge amounts of data users handle.
  • Healthcare data will be more actionable. We offer pay-as-you-go solutions that make it considerably easier and more cost-effective to ingest, store, analyze, and disseminate data.
  • Deploy healthcare and life sciences software with 1-Click ease — then evaluate and deploy it in minutes. Users can now speed up their historically slow cycles in software procurement and implementation.
  • Pay only for what’s consumed — and manage software costs on your AWS bill.
  • In addition to the already secure AWS Cloud, AWS Marketplace offers industry-leading solutions to help you secure operating systems, platforms, applications and data that can integrate with existing controls in your AWS Cloud and hybrid environment.

Click here to see the current list of vendors in our new Healthcare & Life Sciences category.

Come on In
If you are a healthcare ISV and would like to list and sell your products on AWS, visit our Sell in AWS Marketplace page.

– Wilson To and Luis Daniel Soto

Introducing Allgress Regulatory Product Mapping

by Ana Visneski | in AWS Marketplace

This guest post is brought to you by Benjamin Andrew  and Tim Sandage.

-Ana


It’s increasingly difficult for organizations within regulated industries (such as government, financial, and healthcare) to demonstrate compliance with security requirements. The burden to comply is compounded by the use of legacy security frameworks and a lack of understanding of which services enable appropriate threat mitigations. It is further complicated by security responsibilities in relation to cloud computing, Internet of Things (IoT), and mobile applications.

Allgress helps minimize this burden by helping enterprise security and risk professionals assess, understand, and manage corporate risk. Allgress and AWS are working to offer a way to establish clear mappings from AWS services and 3rd party software solutions in AWS Marketplace to common security frameworks. The result for regulated customers within the AWS Cloud will be minimized business impact, increased security effectiveness, and reduced risk.

The name of this new solution is the Allgress Marketplace Regulatory Product Mapping Tool (RPM). Allgress designed this tool specifically for customers deployed within AWS who want to reduce the complexity, increase the speed, and shorten the time frame of achieving compliance, including compliance with legislation such as Sarbanes-Oxley, HIPAA, and FISMA. Allgress RPM is designed to achieve these results by letting customers quickly map their regulatory security frameworks (such as ISO, NIST, and PCI-DSS controls) to AWS services, solutions in AWS Marketplace, and APN technology partner solutions. The tool even guides customers through the compliance process, providing focused content every step of the way.

Here are the four simple steps to get a regulatory assessment:

  1. If you’re a new user, you can log in to the tool as a guest; registration is not required. If you’re an existing user, you can log in using your Username and Password to return to a saved assessment:

  2. Once you’ve logged in, you can select your Regulatory Security Framework (e.g. FedRAMP or PCI). After you’ve selected your framework, you have two explorer options: Coverage Overview and Product Explorer (explained in detail below).

The Coverage Overview includes three use cases: AWS customer controls review, regulatory requirement mapping, and gap-assessment planning. The Product Explorer tool provides detailed control coverage for the AWS services selected and/or all available AWS Marketplace vendor solutions.

  3. You can select Coverage Overview to review AWS Inherited, Shared, Operational, and AWS Marketplace Control mappings.

Coverage overview – This view breaks down security frameworks into four categories:

  1. AWS Inherited Controls — Controls that you fully inherit from AWS.
  2. AWS Shared Controls — AWS provides the control implementation for the infrastructure, and you provide your own control implementation within your use of AWS services (e.g. Fault Tolerance).
  3. Operational Controls – These are procedural controls that AWS or an AWS consulting partner can help you implement within your AWS environment.
  4. AWS Marketplace Controls — These are technical controls that can be implemented (partially or fully) with an AWS technology partner and vendors from AWS Marketplace.

Note: Features in this tool include the ability to zoom into the controls using your mouse. With point-and-click ease, you can zoom in at the domain (Control Family) level, or into individual controls:

  4. An additional capability within RPM is Product Explorer, which identifies solutions in AWS Marketplace that can partially or fully implement the requirements of a security control. The screen below illustrates the 327 controls for FedRAMP Moderate, as well as several solutions available from software vendors on AWS Marketplace that can help remediate the control requirements.

The Product Explorer page has several capabilities to highlight both service and control association:

  1. At the top of the page you can remove controls that do not currently have associated mapping.
  2. You can also zoom into Domains, Sub-domains, and Controls.
  3. You can select single products or multiple products with quick view options.
  4. You can select single or multiple products, and then select Product Cart to review detailed control implementations.

Product Explorer Note: Non-associated controls have been removed in order to clearly see potential product mappings.

Product Explorer — Zoom function for a specific control (e.g. AU-11) identifies all potential AWS services and associated products which can be leveraged for control implementation.

Product Explorer – Single product control coverage view. For a detailed view, you can click on the Product Cart and view detailed implementation notes.

Product Explorer – You can also add multiple services and solutions into a product cart and then connect to Marketplace for each software vendor solution available through our public managed software catalog.

More about Allgress RPM
The AWS services and the consulting and technology vendors that Allgress RPM is designed to map have all demonstrated technical proficiency as security solutions and can treat security controls across multiple regulated industries. At launch, RPM includes 10 vendors, all of whom have deep experience working with regulated customers to deliver mission-critical workloads and applications on AWS. You can reach Allgress here.

View more Security solutions in AWS Marketplace. Please note that many of the products available in AWS Marketplace offer free trials. You can request free credits here: AWS Marketplace – Get Infrastructure Credits.

We wish to thank our launch partners, who worked with AWS and the Allgress team to map their products and services: Allgress, Alert Logic, Barracuda, Trend Micro, Splunk, Palo Alto Networks, OKTA, CloudCheckr, Evident.io and CIS (Center for Internet Security).


-Benjamin Andrew and Tim Sandage.

Amazon Chime – Unified Communications Service

by Jeff Barr | in Amazon Chime, Announcements, Launch

If your working day is anything like mine, you probably spend a lot of time communicating with your colleagues. Every day, I connect with and collaborate with people all over the world. Some of them are sitting in their office in front of their PCs; others are on the go and using their phones to connect and to communicate. We chat informally, we meet on regular schedules, we exchange documents and images, and we share our screens.

For many years, most “business productivity” tools have been anything but. Many of these tools support just one or two modes of communication or styles of collaboration and can end up getting in the way. Licensing and training costs and a lack of support for collaboration that crosses organizational boundaries don’t make things any better.

Time to change that…

Introducing Amazon Chime
Today I would like to tell you about Amazon Chime. This is a new unified communication service that is designed to make meetings easier and more efficient than ever before. Amazon Chime lets you start high-quality audio and video meetings with a click. Once you are in the meeting you can chat, share content, and share screens in a smooth experience that spans PC and Mac desktops, iOS devices, and Android devices.

Because Amazon Chime is a fully managed service, there’s no upfront investment, software deployment, or ongoing maintenance. Users simply download the Amazon Chime app and start using it within minutes.

Let’s take a quick look at some of the most important features of Amazon Chime:

On-Time Meetings – You no longer need to dial in to meetings. There’s no need to enter long meeting identifiers or equally long passwords. Instead, Amazon Chime will alert you when the meeting starts, and allow you to join (or to indicate that you are running behind) with a single click or tap.

Meeting Roster – Instead of endless “who just joined” queries, Amazon Chime provides a visual roster of attendees, late-comers, and those who skipped out entirely. It also provides broadly accessible mute controls in case another participant is typing or their dog is barking.

Broad Access – Amazon Chime was built for mobile use, with apps that run on PCs and mobile devices. Even better, Amazon Chime allows you to join a meeting from one device and then seamlessly switch to another.

Easy Sharing – Collaborating is a core competency for Amazon Chime. Meeting participants can share their screens as desired, with no need to ask for permission. Within Amazon Chime‘s chat rooms, participants can work together and create a shared history that is stored in encrypted fashion.

Clear Calls – Amazon Chime delivers high quality noise-cancelled audio and crisp, clear HD video that works across all user devices and with most conference room video systems.

Amazon Chime in Action
Let’s run through the most important aspects of Amazon Chime, starting with the main screen:

I can click on Meetings and then schedule a meeting in my Outlook calendar or my Google calendar:

Outlook scheduling makes use of the Amazon Chime add-in; I was prompted to install it when I clicked on Schedule with Outlook. I simply set up an invite as usual:

Amazon Chime lets me know when the meeting is starting:

I simply click on Answer and choose my audio option:

And my meeting is under way. I can invite others, share my screen or any desired window, use my webcam, and so forth:

I have many options that I can change while the meeting is underway:

Amazon Chime also includes persistent, 1 to 1 chat and chat rooms. Here’s how I create a new chat room:

After I create it I can invite my fellow bloggers and we can have a long-term, ongoing conversation.

As usual, I have only shown you a few of the features! To get started, visit the Amazon Chime site and try it out for yourself.

Amazon Chime Editions
Amazon Chime is available in three editions:

  • Basic Edition is available at no charge. It allows you to attend meetings, make 1 to 1 video calls, and to use all Amazon Chime chat features.
  • Plus Edition costs $2.50 per user per month. It allows user management of entire email domains, supports 1 GB of message retention per user, and connects to Active Directory.
  • Pro Edition costs $15.00 per user per month. It allows hosting of meetings of up to 100 people.

Amazon Chime Pro is free to try for 30 days, with no credit card required. After 30 days, you can continue to use Amazon Chime Basic for free, for as long as you’d like, or you can purchase Amazon Chime Pro for $15.00 per user per month. There is no upfront commitment, and you can change or cancel your subscription at any time.

Available Now
Amazon Chime is available now and you can sign up to start using it today!

Jeff;

 

Amazon EBS Update – New Elastic Volumes Change Everything

by Jeff Barr | in Amazon EC2, Amazon Elastic Block Store, Launch

It is always interesting to speak with our customers and to learn how the dynamic nature of their business and their applications drives their block storage requirements. These needs change over time, creating the need to modify existing volumes to add capacity or to change performance characteristics. Today’s 24×7 operating models leave no room for downtime; as a result, customers want to make changes without going offline or otherwise impacting operations.

Over the years, we have introduced new EBS offerings that support an ever-widening set of use cases. For example, we introduced two new volume types in 2016 – Throughput Optimized HDD (st1) and Cold HDD (sc1). Our customers want to use these volume types as storage tiers, modifying the volume type to save money or to change the performance characteristics, without impacting operations.

In other words, our customers want their EBS volumes to be even more elastic!

New Elastic Volumes
Today we are launching a new EBS feature we call Elastic Volumes and making it available for all current-generation EBS volumes attached to current-generation EC2 instances. You can now increase volume size, adjust performance, or change the volume type while the volume is in use. You can continue to use your application while the change takes effect.

This new feature will greatly simplify (or even eliminate) many of your planning, tuning, and space management chores. Instead of a traditional provisioning cycle that can take weeks or months, you can make changes to your storage infrastructure instantaneously, with a simple API call.

You can address the following scenarios (and many more that you can come up with on your own) using Elastic Volumes:

Changing Workloads – You set up your infrastructure in a rush and used the General Purpose SSD volumes for your block storage. After gaining some experience you figure out that the Throughput Optimized volumes are a better fit, and simply change the type of the volume.

Spiking Demand – You are running a relational database on a Provisioned IOPS volume that is set to handle a moderate amount of traffic during the month, with a 10x spike in traffic  during the final three days of each month due to month-end processing.  You can use Elastic Volumes to dial up the provisioning in order to handle the spike, and then dial it down afterward.

Increasing Storage – You provisioned a volume for 100 GiB and an alarm goes off indicating that it is now at 90% of capacity. You increase the size of the volume and expand the file system to match, with no downtime, and in a fully automated fashion.

Using Elastic Volumes
You can manage all of this from the AWS Management Console, via API calls, or from the AWS Command Line Interface (CLI).

To make a change from the Console, simply select the volume and choose Modify Volume from the Action menu:

Then make any desired changes to the volume type, size, and Provisioned IOPS (if appropriate). Here I am changing my 75 GiB General Purpose (gp2) volume into a 400 GiB Provisioned IOPS volume, with 20,000 IOPS:

When I click on Modify I confirm my intent, and click on Yes:

The volume’s state reflects the progress of the operation (modifying, optimizing, or complete):

The next step is to expand the file system so that it can take advantage of the additional storage space. To learn how to do that, read Expanding the Storage Space of an EBS Volume on Linux or Expanding the Storage Space of an EBS Volume on Windows. You can expand the file system as soon as the state transitions to optimizing (typically a few seconds after you start the operation). The new configuration is in effect at this point, although optimization may continue for up to 24 hours. Billing for the new configuration begins as soon as the state turns to optimizing (there’s no charge for the modification itself).
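If you would rather script the change than use the console, here is a hedged boto3 sketch (the volume ID and target values are placeholders):

import boto3

ec2 = boto3.client('ec2')
VOLUME_ID = 'vol-0123456789abcdef0'   # placeholder volume ID

# Request the modification: grow to 400 GiB, switch to Provisioned IOPS (io1) with 20,000 IOPS.
ec2.modify_volume(VolumeId=VOLUME_ID, Size=400, VolumeType='io1', Iops=20000)

# Poll the modification state (modifying -> optimizing -> completed).
mods = ec2.describe_volumes_modifications(VolumeIds=[VOLUME_ID])
print(mods['VolumesModifications'][0]['ModificationState'])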

Automatic Elastic Volume Operations
While manual changes are fine, there’s plenty of potential for automation. Here are a couple of ideas:

Right-Sizing – Use a CloudWatch alarm to watch for a volume that is running at or near its IOPS limit. Initiate a workflow and approval process that could provision additional IOPS or change the type of the volume. Or, publish a “free space” metric to CloudWatch and use a similar approval process to resize the volume and the filesystem.

Cost Reduction – Use metrics or schedules to reduce IOPS or to change the type of a volume. Last week I spoke with a security auditor at a university. He collects tens of gigabytes of log files from all over campus each day and retains them for 60 days. Most of the files are never read, and those that are can be scanned at a leisurely pace. They could address this use case by creating a fresh General Purpose volume each day, writing the logs to it at high speed, and then changing the type to Throughput Optimized.
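As one hedged sketch of the Right-Sizing idea above, a small script running on the instance could publish a free-space metric that a CloudWatch alarm watches; the namespace, metric name, dimensions, and mount point are assumptions:

import shutil
import boto3

cloudwatch = boto3.client('cloudwatch')

# Measure free space on the volume's mount point (assumed to be /data).
usage = shutil.disk_usage('/data')
free_gib = usage.free / (1024 ** 3)

# Publish a custom metric that a CloudWatch alarm can watch to trigger a resize workflow.
cloudwatch.put_metric_data(
    Namespace='Custom/EBS',                                   # assumed namespace
    MetricData=[{
        'MetricName': 'FreeSpaceGiB',
        'Dimensions': [{'Name': 'VolumeId', 'Value': 'vol-0123456789abcdef0'}],
        'Value': free_gib,
        'Unit': 'Gigabytes'
    }]
)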

As I mentioned earlier, you need to resize the file system in order to be able to access the newly provisioned space on the volume. In order to show you how to automate this process, my colleagues built a sample that makes use of CloudWatch Events, AWS Lambda, EC2 Systems Manager, and some PowerShell scripting. The rule matches the modifyVolume event emitted by EBS and invokes the logEvents Lambda function:

The function locates the volume, confirms that it is attached to an instance that is managed by EC2 Systems Manager, and then adds a “maintenance tag” to the instance:

from __future__ import print_function
import boto3

ec2 = boto3.client('ec2')
ssm = boto3.client('ssm')

# Tag applied to instances whose file systems need to be resized.
# (boto3 expects tags as a list of Key/Value dictionaries.)
tags = [{'Key': 'maintenance', 'Value': ''}]

def lambda_handler(event, context):
    # The modifyVolume event carries the volume ARN; extract the volume ID.
    volume = [event['resources'][0].split('/')[1]]
    attach = ec2.describe_volumes(VolumeIds=volume)['Volumes'][0]['Attachments']
    if attach:
        instance = attach[0]['InstanceId']
        # Confirm that the instance is managed by EC2 Systems Manager.
        filters = [{'key': 'InstanceIds', 'valueSet': [instance]}]
        info = ssm.describe_instance_information(
            InstanceInformationFilterList=filters)['InstanceInformationList']
        if info:
            # Mark the instance for the maintenance run that resizes its file systems.
            ec2.create_tags(Resources=[instance], Tags=tags)
            print('{} Instance {} has been tagged for maintenance'.format(
                info[0]['PlatformName'], instance))

Later (either manually or on a schedule), EC2 Systems Manager is used to run a PowerShell script on all of the instances that are tagged for maintenance. The script looks at the instance’s disks and partitions, and resizes all of the drives (filesystems) to the maximum allowable size. Here’s an excerpt:

foreach ($DriveLetter in $DriveLetters) {
    $Error.Clear()
    # Find the largest size the partition can grow to, then expand it to that size.
    $SizeMax = (Get-PartitionSupportedSize -DriveLetter $DriveLetter).SizeMax
    Resize-Partition -DriveLetter $DriveLetter -Size $SizeMax
}

Available Today
The Elastic Volumes feature is available today and you can start using it right now!

To learn about some important special cases and a few limitations on instance types, read Considerations When Modifying EBS Volumes.

Jeff;

PS – If you would like to design and build cool, game-changing storage services like EBS, take a look at our EBS Jobs page!

 

AWS Direct Connect Update – Link Aggregation Groups, Bundles, and re:Invent Recap

by Jeff Barr | in Amazon VPC, AWS Direct Connect

AWS Direct Connect helps our large-scale customers to create private, dedicated network connections to their office, data center, or colocation facility. Our customers create 1 Gbps and 10 Gbps connections in order to reduce their network costs, increase data transfer throughput, and to get a more consistent network experience than is possible with an Internet-based connection.

Today I would like to tell you about a new Link Aggregation feature for Direct Connect. I’d also like to tell you about our new Direct Connect Bundles and to tell you more about how we used Direct Connect to provide a first-class customer experience at AWS re:Invent 2016.

Link Aggregation Groups
Some of our customers would like to set up multiple connections (generally known as ports) between their location and one of the 46 Direct Connect locations. Some of them would like to create a highly available link that is resilient in the face of network issues outside of AWS; others simply need more data transfer throughput.

In order to support this important customer use case, you can now purchase up to 4 ports and treat them as a single managed connection, which we call a Link Aggregation Group or LAG. After you have set this up, traffic is load-balanced across the ports at the level of individual packet flows. All of the ports are active simultaneously, and are represented by a single BGP session. Traffic across the group is managed via Dynamic LACP (Link Aggregation Control Protocol – or ISO/IEC/IEEE 8802-1AX:2016). When you create your group, you also specify the minimum number of ports that must be active in order for the connection to be activated.

You can order a new group with multiple ports and you can aggregate existing ports into a new group. Either way, all of the ports must have the same speed (1 Gbps or 10 Gbps).

All of the ports in a group will connect to the same device on the AWS side. You can add additional ports to an existing group as long as there’s room on the device (this information is now available in the Direct Connect Console). If you need to expand an existing group and the device has no open ports, you can simply order a new group and migrate your connections.

Here’s how you can make use of link aggregation from the Console. First, creating a new LAG from scratch:

And second, creating a LAG from existing connections:


Link Aggregation Groups are now available in the US East (Northern Virginia), US West (Northern California), US East (Ohio), US West (Oregon), Canada (Central), South America (São Paulo), Asia Pacific (Mumbai), and Asia Pacific (Seoul) Regions and you can create them today. We expect to make them available in the remaining regions by the end of this month.
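If you prefer to automate this, here is a hedged boto3 sketch of creating a LAG; the location code, bandwidth, and name are placeholders:

import boto3

dx = boto3.client('directconnect')

# Create a LAG with two 10 Gbps ports at a hypothetical Direct Connect location.
lag = dx.create_lag(
    numberOfConnections=2,
    location='EqDC2',                 # placeholder Direct Connect location code
    connectionsBandwidth='10Gbps',
    lagName='example-lag'
)
print(lag['lagId'], lag['lagState'])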

Direct Connect Bundles
We announced some powerful new Direct Connect Bundles at re:Invent 2016. Each bundle is an advanced, hybrid reference architecture designed to reduce complexity and to increase performance. Here are the new bundles:

Level 3 Communications Powers Amazon WorkSpaces – Connects enterprise applications, data, user workspaces, and end-point devices to offer reliable performance and a better end-user experience:

SaaS Architecture enhanced by AT&T NetBond – Enhances quality and user experience for applications migrated to the AWS Cloud:

Aviatrix User Access Integrated with Megaport DX – Supports encrypted connectivity between AWS Cloud Regions, between enterprise data centers and AWS, and on VPN access to AWS:

Riverbed Hybrid SDN/NFV Architecture over Verizon Secure Cloud Interconnect – Allows enterprise customers to provide secure, optimized access to AWS services in a hybrid network environment:

Direct Connect at re:Invent 2016
In order to provide a top-notch experience for attendees and partners at re:Invent, we worked with Level 3 to set up a highly available and fully redundant set of connections. This network was used to support breakout sessions, certification exams, the hands-on labs, the keynotes (including the live stream to over 25,000 viewers in 122 countries), the hackathon, bootcamps, and workshops. The re:Invent network used four 10 Gbps connections, two each to US West (Oregon) and US East (Northern Virginia):

It supported all of the re:Invent venues:

Here are some video resources that will help you to learn more about how we did this, and how you can do it yourself:

Jeff;

Amazon Rekognition Update – Estimated Age Range for Faces

by Jeff Barr | in Amazon Rekognition, Launch

Amazon Rekognition is one of our artificial intelligence services. In addition to detecting objects, scenes, and faces in images, Rekognition can also search and compare faces. Behind the scenes, Rekognition uses deep neural network models to analyze billions of images daily (read Amazon Rekognition – Image Detection and Recognition Powered by Deep Learning to learn more).

Amazon Rekognition returns an array of attributes for each face that it locates in an image. Today we are adding a new attribute, an estimated age range. This value is expressed in years, and is returned as a pair of integers. The age ranges can overlap; the face of a 5 year old might have an estimated range of 4 to 6 but the face of a 6 year old might have an estimated range of 4 to 8. You can use this new attribute to power public safety applications, collect demographics, or to assemble a set of photos that span a desired time frame.
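Here is a hedged boto3 sketch of reading the new attribute; the bucket and image names are placeholders:

import boto3

rekognition = boto3.client('rekognition')

# Ask for all facial attributes so the response includes the estimated age range.
response = rekognition.detect_faces(
    Image={'S3Object': {'Bucket': 'my-photos', 'Name': 'jeff.jpg'}},   # placeholder bucket and key
    Attributes=['ALL']
)

for face in response['FaceDetails']:
    age = face['AgeRange']
    print('Estimated age: {} to {} years'.format(age['Low'], age['High']))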

In order to have some fun with this new feature (I am writing this post on a Friday afternoon), I dug into my photo archives and asked Rekognition to estimate my age. Here are the results.

Let’s start at the beginning! I was probably about 2 years old here:

This picture was taken at my grandmother’s house in the spring of 1966:

I was 6 years old; Rekognition estimated that I was between 6 and 13:

My first official Amazon PR photo from 2003 when I was 43:

That’s a range of 17 years and my actual age was right in the middle.

And my most recent (late 2015) PR photo, age 55:

Again a fairly wide range, and I’m right in the middle of it! In general, the actual age for each face will fall somewhere within the range that Rekognition returns, but you should not count on it falling precisely in the middle.

This feature is available now and you can start using it today.

Jeff;