Category: Security


Amazon EC2 Systems Manager Patch Manager now supports Linux

Hot on the heels of some other great Amazon EC2 Systems Manager (SSM) updates is another vital enhancement: the ability to use Patch Manager on Linux instances!

We launched Patch Manager with SSM at re:Invent in 2016 and Linux support was a commonly requested feature. Starting today, Patch Manager is supported on:

  • Amazon Linux 2014.03 and later (2015.03 and later for 64-bit)
  • Ubuntu Server 16.04 LTS, 14.04 LTS, and 12.04 LTS
  • RHEL 6.5 and later (7.x and later for 64-bit)

When I think about patching a big group of heterogeneous systems I get a little anxious. Years ago, I administered my school’s computer lab. This involved a modest group of machines running a small number of VMs with an immodest number of distinct Linux distros. When there was a critical security patch it was a lot of work to remember the constraints of each system. I remember having to switch back and forth between arcane invocations of various package managers – pinning and unpinning packages: sudo yum update -y, rpm -Uvh ..., apt-get, or even emerge (one of our professors loved Gentoo).

Even now, when I use configuration management systems like Chef or Puppet I still have to specify the package manager and remember a portion of the invocation – and I don’t always want to roll out a patch without some manual approval process. Based on these experiences I decided it was time for me to update my skillset and learn to use Patch Manager.

Patch Manager is a fully-managed service (provided at no additional cost) that helps you simplify your operating system patching process, including defining the patches you want to approve for deployment, the method of patch deployment, the timing for patch roll-outs, and determining patch compliance status across your entire fleet of instances. It’s extremely configurable with some sensible defaults and helps you easily deal with patching heterogeneous clusters.

Since I’m not running that school computer lab anymore my fleet is a bit smaller these days:

a list of instances with amusing names

As you can see above I only have a few instances in this region but if you look at the launch times they range from 2014 to a few minutes ago. I’d be willing to bet I’ve missed a patch or two somewhere (luckily most of these have strict security groups). To get started I installed the SSM agent on all of my machines by following the documentation here. I also made sure I had the appropriate role and IAM profile attached to the instances to talk to SSM – I just used this managed policy: AmazonEC2RoleforSSM.

Now I need to define a Patch Baseline. I’ll make security updates critical and all other updates informational and subject to my approval.
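A baseline like this can also be defined through the SSM API. Here’s a minimal boto3 sketch; the baseline name and rule values are illustrative assumptions, not the console defaults:

```python
# Illustrative baseline: auto-approve Security-classified patches
# immediately; everything else stays subject to manual approval.
baseline_kwargs = {
    "Name": "my-linux-baseline",  # hypothetical name
    "OperatingSystem": "AMAZON_LINUX",
    "ApprovalRules": {
        "PatchRules": [
            {
                "PatchFilterGroup": {
                    "PatchFilters": [
                        {"Key": "CLASSIFICATION", "Values": ["Security"]}
                    ]
                },
                "ApproveAfterDays": 0,  # approve as soon as released
            }
        ]
    },
}

def create_baseline(**kwargs):
    """Create the patch baseline and return its ID (needs AWS credentials)."""
    import boto3
    return boto3.client("ssm").create_patch_baseline(**kwargs)["BaselineId"]
```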

 

Next, I can run the AWS-RunPatchBaseline SSM Run Command in “Scan” mode to generate my patch baseline data.
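A scan like this can also be kicked off programmatically. The boto3 sketch below is only a sketch; the instance IDs would be your own:

```python
def patch_baseline_params(operation="Scan"):
    """Build the Parameters for the AWS-RunPatchBaseline document."""
    if operation not in ("Scan", "Install"):
        raise ValueError("Operation must be 'Scan' or 'Install'")
    return {"Operation": [operation]}

def run_patch_baseline(instance_ids, operation="Scan"):
    """Send the Run Command (needs AWS credentials); returns the command ID.

    "Scan" only reports compliance; "Install" applies approved patches
    and reboots instances that received updates.
    """
    import boto3
    response = boto3.client("ssm").send_command(
        InstanceIds=instance_ids,
        DocumentName="AWS-RunPatchBaseline",
        Parameters=patch_baseline_params(operation),
    )
    return response["Command"]["CommandId"]
```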

Then, we can go to the Patch Compliance page in the EC2 console and check out how I’m doing.

Yikes, looks like I need some security updates! Now, I can use Maintenance Windows, Run Command, or State Manager in SSM to actually manage this patching process. One thing to note: when patching completes, your machine reboots – so managing that rollout with Maintenance Windows or State Manager is a best practice. If I had a larger set of instances I could group them by creating a tag named “Patch Group”.

For now, I’ll just run the same AWS-RunPatchBaseline command from above with the “Install” operation to update these machines.

As always, the CLIs and APIs have been updated to support these new options. The documentation is here. I hope you’re all able to spend less time patching and more time coding!

Randall

Prepare for the OWASP Top 10 Web Application Vulnerabilities Using AWS WAF and Our New White Paper

Are you aware of the Open Web Application Security Project (OWASP) and the work that they do to improve the security of web applications? Among many other things, they publish a list of the 10 most critical application security flaws, known as the OWASP Top 10. The release candidate for the 2017 version contains a consensus view of common vulnerabilities often found in web sites and web applications.

AWS WAF, as I described in my blog post, New – AWS WAF, helps to protect your application from application-layer attacks such as SQL injection and cross-site scripting. You can create custom rules to define the types of traffic that are accepted or rejected.

Our new white paper, Use AWS WAF to Mitigate OWASP’s Top 10 Web Application Vulnerabilities, shows you how to put AWS WAF to use. Going far beyond a simple recommendation to “use WAF,” it includes detailed, concrete mitigation strategies and implementation details for the most important items in the OWASP Top 10 (formally known as A1 through A10).

Download Today
The white paper provides background and context for each vulnerability, and then shows you how to create WAF rules to identify and block them. It also provides some defense-in-depth recommendations, including a very cool suggestion to use Lambda@Edge to prevalidate the parameters supplied to HTTP requests.
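As an illustration of that Lambda@Edge prevalidation idea, a viewer-request function could reject any request whose query-string parameters fail an allow-list check. This is a minimal sketch in Python for readability (the parameter names and rules here are assumptions, and Lambda@Edge’s supported runtimes may differ):

```python
import re
from urllib.parse import parse_qs

# Hypothetical allow-list: parameter name -> regex its value must match.
ALLOWED_PARAMS = {
    "id": re.compile(r"^[0-9]{1,10}$"),
    "lang": re.compile(r"^[a-z]{2}$"),
}

# Response returned to the viewer without ever reaching the origin.
FORBIDDEN = {
    "status": "403",
    "statusDescription": "Forbidden",
    "body": "Invalid request parameters",
}

def handler(event, context):
    """CloudFront viewer-request handler: pass only validated requests through."""
    request = event["Records"][0]["cf"]["request"]
    params = parse_qs(request.get("querystring", ""))
    for name, values in params.items():
        pattern = ALLOWED_PARAMS.get(name)
        if pattern is None or not all(pattern.match(v) for v in values):
            return FORBIDDEN  # short-circuit with a 403 response
    return request  # unchanged request continues to the origin
```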

The white paper links to a companion AWS CloudFormation template that creates a Web ACL, along with the recommended condition types and rules. You can use this template as a starting point for your own work, adding more condition types and rules as desired.

AWSTemplateFormatVersion: '2010-09-09'
Description: AWS WAF Basic OWASP Example Rule Set

## ::PARAMETERS::
## Template parameters to be configured by user
Parameters:
  stackPrefix:
    Type: String
    Description: The prefix to use when naming resources in this stack. Normally we would use the stack name, but since this template can be used as a resource in other stacks we want to keep the naming consistent. No symbols allowed.
    ConstraintDescription: Alphanumeric characters only, maximum 10 characters
    AllowedPattern: ^[a-zA-Z0-9]+$
    MaxLength: 10
    Default: generic
  stackScope:
    Type: String
    Description: You can deploy this stack at a regional level, for regional WAF targets like Application Load Balancers, or for global targets, such as Amazon CloudFront distributions.
    AllowedValues:
      - Global
      - Regional
    Default: Regional
...

Attend our Webinar
If you would like to learn more about the topics discussed in this new white paper, please plan to attend our upcoming webinar, Secure Your Applications with AWS Web Application Firewall (WAF) and AWS Shield. On July 12, 2017, my colleagues Jeffrey Lyon and Sundar Jayashekar will show you how to secure your web applications and how to defend against the most common Layer 7 attacks.

Jeff;

Scale Your Security Vulnerability Testing with Amazon Inspector

My colleague Eric Fitzgerald wrote the guest post below in order to show you how to use an AWS Lambda function to forward Amazon Inspector findings to your ticketing and workflow systems.

Jeff;


At AWS re:Invent 2015 we announced Amazon Inspector, our security vulnerability assessment service that helps customers test for security vulnerabilities early and often.  Using Amazon Inspector, customers can automate security testing across development, test, and production environments, identifying security vulnerabilities as part of the entire software development, deployment, and operations lifecycle.

Customer feedback on the Amazon Inspector approach to automated security testing has been overwhelmingly positive.  Customers have told us that with Amazon Inspector, they are able to run security assessments more frequently and are catching security vulnerabilities earlier than they have in the past.  However, identifying the security vulnerabilities is only half the battle; the vulnerabilities that are found need to be remediated. Many of our customers have started to integrate Amazon Inspector with their workflow and ticketing systems in order to automate and accelerate the remediation workflow for Amazon Inspector findings.  We designed Amazon Inspector with this in mind and thought we would share more detail on one method for integrating Amazon Inspector findings with email, workflow, and ticketing systems.

Using AWS Lambda to push Amazon Inspector Findings to a Ticketing System
In this example, we are using an AWS Lambda function to connect Amazon Inspector to systems that can handle incident creation via email. Here’s the chain of events:

  1. Amazon Inspector runs and performs a security assessment. It sends a message to an Amazon Simple Notification Service (SNS) topic at the end of the run.
  2. The Lambda function is invoked by the SNS message.
  3. The function fetches the findings from the security assessment.
  4. The function formats and emails the findings using another SNS topic.

Along the way, the function creates the destination topic and the email subscription if necessary.

Setting up the Function
You will need to set up the function in the AWS Region where you run your Amazon Inspector assessments. If you run Amazon Inspector in more than one region, you’ll need to repeat the steps for each one. Here are the steps:

  1. Create the SNS topic for Amazon Inspector.
  2. Configure Amazon Inspector to send findings to the newly created topic.
  3. Set up the Lambda function to fetch, format, and email the findings.

Configure an SNS Topic
The first major step is to configure an Amazon SNS topic that Amazon Inspector will notify when there are new findings, and an Amazon SNS topic that will format and send findings as email to other systems.

Navigate to the Amazon SNS Console and create a new Amazon SNS topic.  This is the topic to which Amazon Inspector will deliver notifications.  It does not matter what you name the topic.

Next, assign the following policy to the topic.  You can do this in the Amazon SNS Console by selecting the topic, clicking on Other topic actions, and selecting Edit topic policy.  In the advanced view, replace the existing policy text with this policy:

{
  "Version": "2008-10-17",
  "Id": "inspector-sns-publish-policy",
  "Statement": [
    {
      "Sid": "inspector-sns-publish-statement",
      "Effect": "Allow",
      "Principal": {
        "Service": "inspector.amazonaws.com"
      },
      "Action": "SNS:Publish",
      "Resource": "arn:aws:sns:*"
    }
  ]
}

If you are familiar with AWS Identity and Access Management (IAM) policies, then a security best practice is to change the value of the Resource field of the policy to exactly match the Amazon SNS topic ARN, in order to restrict Amazon Inspector so that it can only publish to this topic.
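The topic creation and policy assignment can also be scripted. Here’s a hedged boto3 sketch that follows that best practice and scopes the policy’s Resource to the exact topic ARN (the topic name is a placeholder):

```python
import json

def inspector_topic_policy(topic_arn):
    """Build the publish policy from above, scoped to the exact topic ARN."""
    return {
        "Version": "2008-10-17",
        "Id": "inspector-sns-publish-policy",
        "Statement": [
            {
                "Sid": "inspector-sns-publish-statement",
                "Effect": "Allow",
                "Principal": {"Service": "inspector.amazonaws.com"},
                "Action": "SNS:Publish",
                "Resource": topic_arn,
            }
        ],
    }

def create_inspector_topic(name="inspector-findings"):
    """Create the topic and attach the policy (needs AWS credentials)."""
    import boto3
    sns = boto3.client("sns")
    arn = sns.create_topic(Name=name)["TopicArn"]
    sns.set_topic_attributes(
        TopicArn=arn,
        AttributeName="Policy",
        AttributeValue=json.dumps(inspector_topic_policy(arn)),
    )
    return arn
```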

Configure Amazon Inspector
Navigate to the Amazon Inspector Console, visit the Assessment templates page, and select the assessment template whose findings you want sent to the external system.  Expand the row, and you’ll see a section called SNS topics.  Click the pencil icon to the left of the Amazon SNS topics section and you’ll be able to pick the Amazon SNS topic you just created from a drop-down list.  Once you’ve selected the topic, click on Save.

Set up the Lambda Function
Navigate to the Lambda Console and create a new function using the SNS-message-python blueprint:

Select SNS for the event source and then select the SNS topic that you created in the first step:

To finish configuring the function, click Next.  Type a name and description for the function, choose the Python 2.7 runtime, and replace the sample function code with this code:

from __future__ import print_function
import boto3
import json
import sys
import datetime

sns = boto3.client('sns')
inspector = boto3.client('inspector')

# SNS topic - will be created if it does not already exist
SNS_TOPIC = "Inspector-Finding-Delivery"

# Destination email - will be subscribed to the SNS topic if not already
DEST_EMAIL_ADDR = "eric@example.com"

# quick function to handle datetime serialization problems
enco = lambda obj: (
    obj.isoformat()
    if isinstance(obj, datetime.datetime)
    or isinstance(obj, datetime.date)
    else None
)

def lambda_handler(event, context):

    # extract the message that Inspector sent via SNS
    message = event['Records'][0]['Sns']['Message']

    # get inspector notification type
    notificationType = json.loads(message)['event']

    # skip everything except report_finding notifications
    if notificationType != "FINDING_REPORTED":
        print('Skipping notification that is not a new finding: ' + notificationType)
        return 1
    
    # extract finding ARN
    findingArn = json.loads(message)['finding']

    # get finding and extract detail
    response = inspector.describe_findings(findingArns = [ findingArn ], locale='EN_US')
    print(response)
    try:
        finding = response['findings'][0]
    except OSError as err:
        print("OS error: {0}".format(err))
    except:
        print("Unexpected error:", sys.exc_info()[0])
        raise
        
    # skip uninteresting findings
    title = finding['title']
    if title == "Unsupported Operating System or Version":
        print('Skipping finding: ', title)
        return 1
        
    if title == "No potential security issues found":
        print('Skipping finding: ', title)
        return 1
    
    # get the information to send via email
    subject = title[:100] # truncate @ 100 chars, SNS subject limit
    messageBody = "Title:\n" + title + "\n\nDescription:\n" + finding['description'] + "\n\nRecommendation:\n" + finding['recommendation']
    
    # un-comment the following line to dump the entire finding as raw json
    # messageBody = json.dumps(finding, default=enco, indent=2)

    # create SNS topic if necessary
    response = sns.create_topic(Name = SNS_TOPIC)
    snsTopicArn = response['TopicArn']

    # check to see if the subscription already exists
    subscribed = False
    response = sns.list_subscriptions_by_topic( TopicArn = snsTopicArn )

    # iterate through subscriptions array in paginated list API call
    while True:
        for subscription in response['Subscriptions']:
            if ( subscription['Endpoint'] == DEST_EMAIL_ADDR ):
                subscribed = True
                break
        
        if 'NextToken' not in response:
            break
        
        response = sns.list_subscriptions_by_topic(
            TopicArn = snsTopicArn,
            NextToken = response['NextToken']
            )
        
    # create subscription if necessary
    if ( subscribed == False ):
        response = sns.subscribe(
            TopicArn = snsTopicArn,
            Protocol = 'email',
            Endpoint = DEST_EMAIL_ADDR
            )

    # publish notification to topic
    response = sns.publish(
        TopicArn = snsTopicArn,
        Message = messageBody,
        Subject = subject
        )

    return 0

Be sure to edit the DEST_EMAIL_ADDR value, and put in the actual email address that is used to send incidents to your incident management system. Optionally, you can change the name of the SNS topic that Amazon Inspector will use to send findings.

Leave the function handler (lambda_function.lambda_handler) as-is, and give the function a name:

Choose Basic execution role from the Role drop-down. After Lambda navigates to a new page, view the policy document, and use this one instead:

{
    "Version": "2012-10-17",
    "Statement": [
        {
            "Effect": "Allow",
            "Action": [
                "logs:CreateLogGroup",
                "logs:CreateLogStream",
                "logs:PutLogEvents"
            ],
            "Resource": "arn:aws:logs:*:*:*"
        },
        {
            "Effect": "Allow",
            "Action": [
                "inspector:DescribeFindings",
                "SNS:CreateTopic",
                "SNS:Subscribe",
                "SNS:ListSubscriptionsByTopic",
                "SNS:Publish"
            ],
            "Resource": "*"
        }
    ]
}

Click the Allow button to create the role and return to AWS Lambda, then leave the advanced settings as-is.

Be sure to click on Enable event source on the Review page:

Click Create function to save the function.

And that’s it!

Ready to Roll
For any assessments where you want the findings sent to another system, just add the first Amazon SNS topic (the one you created with these instructions) to the assessment template, and ensure that new finding reports are selected for publication to that topic.

The first time you run an assessment, Amazon Inspector will notify Lambda that you have new findings, and the Lambda function that you just created will create the SNS topic (if it doesn’t already exist), subscribe the destination email address to the topic (if not already subscribed), and send the findings as email to that address.  If Lambda had to subscribe the email address to the topic, then you’ll only get one email requiring you to click a link to confirm that you want to subscribe.  After confirmation, Amazon Inspector will deliver findings to that email address.

If you want to connect to Atlassian’s Jira Service Desk, it’s super easy from here on out.  In Jira ServiceDesk, navigate to Customer Channels.  This will display the email address that can receive email and create new issues.  Put that email address into the Lambda function’s Python script and that’s where Inspector will deliver its findings.  ServiceDesk will automatically turn them into ServiceDesk issues, and you can manage your workflow there.

Stay Tuned
Thank you for using Amazon Inspector, and look for more from us soon!

Eric Fitzgerald, Principal Security Engineer

New – Cross-Account Copying of Encrypted EBS Snapshots

AWS already supports the use of encrypted Amazon Elastic Block Store (EBS) volumes and snapshots, with keys stored in and managed by AWS Key Management Service (KMS). It also supports copying of EBS snapshots with other AWS accounts so that they can be used to create new volumes. Today we are joining these features to give you the ability to copy encrypted EBS snapshots between accounts, with the flexibility to move between AWS regions as you do so.

This announcement builds on three important AWS best practices:

  1. Take regular backups of your EBS volumes.
  2. Use multiple AWS accounts, one per environment (dev, test, staging, and prod).
  3. Encrypt stored data (data at rest), including backups.

Encrypted EBS Volumes & Snapshots
As a review, you can create an encryption key using the IAM Console:

And you can create an encrypted EBS volume by specifying an encryption key (you must use a custom key if you want to copy a snapshot to another account):

Then you can create an encrypted snapshot from the volume:

As you can see, I have already enabled the longer volume and snapshot IDs for my AWS account (read They’re Here – Longer EBS and Storage Gateway Resource IDs Now Available for more information).

Cross-Account Copying
None of what I have shown you so far is new. Let’s move on to the new part! To create a copy of the encrypted EBS snapshot in another account you need to complete four simple steps:

  1. Share the custom key associated with the snapshot with the target account.
  2. Share the encrypted EBS snapshot with the target account.
  3. In the context of the target account, locate the shared snapshot and make a copy of it.
  4. Use the newly created copy to create a new volume.
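Steps 2 and 3 can also be scripted against the EC2 API. This is a hedged boto3 sketch (the key sharing in step 1 is handled separately, for example via the key policy or a KMS grant):

```python
def validate_account_id(account_id):
    """AWS account IDs are 12 decimal digits."""
    account_id = str(account_id)
    if not (account_id.isdigit() and len(account_id) == 12):
        raise ValueError("expected a 12-digit AWS account ID")
    return account_id

def share_snapshot(snapshot_id, target_account):
    """Source account: share the encrypted snapshot (step 2); needs credentials."""
    import boto3
    boto3.client("ec2").modify_snapshot_attribute(
        SnapshotId=snapshot_id,
        Attribute="createVolumePermission",
        OperationType="add",
        UserIds=[validate_account_id(target_account)],
    )

def copy_shared_snapshot(snapshot_id, source_region, kms_key_id):
    """Target account: copy the shared snapshot, re-encrypting it (step 3)."""
    import boto3
    response = boto3.client("ec2").copy_snapshot(
        SourceRegion=source_region,
        SourceSnapshotId=snapshot_id,
        Encrypted=True,
        KmsKeyId=kms_key_id,  # a key owned by the target account
    )
    return response["SnapshotId"]
```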

You will need the target account number in order to perform the first two steps. Here’s how you share the custom key with the target account from within the IAM Console:

Then you share the encrypted EBS snapshot. Select it and click on Modify Permissions:

Enter the target account number again and click on Save:

Note that you cannot share the encrypted snapshots publicly.

Before going any further I should say a bit about permissions! Here’s what you need to know in order to set up your policies and/or roles:

Source Account – The IAM user or role in the source account needs to be able to call the ModifySnapshotAttribute function and to perform the DescribeKey and ReEncrypt operations on the key associated with the original snapshot.

Target Account – The IAM user or role in the target account needs to be able to perform the DescribeKey, CreateGrant, and Decrypt operations on the key associated with the original snapshot. The user or role must also be able to perform the CreateGrant, Encrypt, Decrypt, DescribeKey, and GenerateDataKeyWithoutPlaintext operations on the key associated with the call to CopySnapshot.
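One programmatic way for the source account to give the target account those permissions on the source key is a KMS grant. A hedged sketch (whether you use a grant or edit the key policy is up to you):

```python
# Operations the target account needs on the source key, per the text above.
TARGET_KEY_OPERATIONS = ["DescribeKey", "CreateGrant", "Decrypt"]

def root_principal(account_id):
    """ARN of an account's root principal, used as the grantee."""
    return "arn:aws:iam::{}:root".format(account_id)

def grant_key_to_account(key_id, target_account_id):
    """Source account: grant the target account use of the key (needs credentials)."""
    import boto3
    response = boto3.client("kms").create_grant(
        KeyId=key_id,
        GranteePrincipal=root_principal(target_account_id),
        Operations=TARGET_KEY_OPERATIONS,
    )
    return response["GrantId"]
```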

With that out of the way, let’s copy the snapshot…

Switch to the target account, visit the Snapshots tab, and click on Private Snapshots. Locate the shared snapshot via its Snapshot ID (the name is stored as a tag and is not copied), select it, and choose the Copy action:

Select an encryption key for the copy of the snapshot and create the copy (here I am copying my snapshot to the Asia Pacific (Tokyo) Region):

Using a new key for the copy provides an additional level of isolation between the two accounts. As part of the copy operation, the data will be re-encrypted using the new key.

Available Now
This feature is available in all AWS Regions where AWS Key Management Service (KMS) is available. It is designed for use with data & root volumes and works with all volume types, but cannot be used to share encrypted AMIs at this time. You can use the snapshot to create an encrypted boot volume by copying the snapshot and then registering it as a new image.

Jeff;

Amazon RDS Update – Share Encrypted Snapshots, Encrypt Existing Instances

We want to make it as easy as possible for you to secure your AWS environment. Some of our more recent announcements in this area include encrypted EBS boot volumes, encryption at rest for Amazon Aurora, and support for AWS Key Management Service (KMS) across several different services.

Today we are giving you some additional options for data stored in Amazon Relational Database Service (RDS). You can now share encrypted database snapshots with other AWS accounts. You can also add encryption to a previously unencrypted database instance.

Sharing Encrypted Snapshots
When you are using encryption at rest for a database instance, automatic and manual database snapshots of the instance are also encrypted. Up until now, encrypted snapshots were private to a single AWS account and could not be shared. Today we are giving you the ability to share encrypted snapshots with up to 20 other AWS accounts. You can do this from the AWS Management Console, AWS Command Line Interface (CLI), or via the RDS API. You can share encrypted snapshots within an AWS region, but you cannot share them publicly. As is the case with the existing sharing feature, today’s release applies to manual snapshots.

To share an encrypted snapshot, select it and click on Share Snapshot. This will open up the Manage Snapshot Permissions page. Enter one or more account IDs (click on Add after each one) and click on Save when you have entered them all:

The accounts could be owned by your organization (perhaps you have separate accounts for dev, test, staging, and production) or by your business partners. Backing up your mission-critical databases to a separate AWS account is a best practice, and one that you can implement using this new feature while also gaining the benefit of encryption at rest.

After you click on Save, the other accounts have access to the shared snapshots. The easiest way to locate them is to visit the RDS Console and filter the list using Shared with Me:

The snapshot can be used to create a new RDS database instance. To learn more, read about Sharing a Database Snapshot.
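For automation, the same sharing can be done through the RDS API. A hedged boto3 sketch, with the 20-account limit from above enforced up front:

```python
MAX_SHARED_ACCOUNTS = 20  # per the limit described above

def share_db_snapshot(snapshot_id, account_ids):
    """Share a manual (encrypted) DB snapshot with other accounts; needs credentials."""
    if len(account_ids) > MAX_SHARED_ACCOUNTS:
        raise ValueError("RDS allows sharing with at most 20 accounts")
    import boto3
    boto3.client("rds").modify_db_snapshot_attribute(
        DBSnapshotIdentifier=snapshot_id,
        AttributeName="restore",       # the "restore" attribute controls sharing
        ValuesToAdd=list(account_ids),
    )
```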

Adding Encryption to Existing Database Instances
You can now add encryption at rest using KMS keys to a previously unencrypted database instance. This is a simple, multi-step process:

  1. Create a snapshot of the unencrypted database instance.
  2. Copy the snapshot to a new, encrypted snapshot. Enable encryption and specify the desired KMS key as you do so:
  3. Restore the encrypted snapshot to a new database instance:
  4. Update your application to refer to the endpoint of the new database instance:

And that’s all you need to do! You can use a similar procedure to change encryption keys for existing database instances. To learn more, read about Copying a Database Snapshot.
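The four steps can be strung together with boto3. This is a sketch under the assumption that waiting for each snapshot to become available is acceptable in your workflow; the identifiers are illustrative:

```python
def snapshot_name(instance_id, stage):
    """Derive a snapshot identifier for a given stage of the procedure."""
    return "{}-{}".format(instance_id, stage)

def encrypt_existing_instance(instance_id, kms_key_id, new_instance_id):
    """Snapshot -> encrypted copy -> restore (needs credentials; can take a while)."""
    import boto3
    rds = boto3.client("rds")
    waiter = rds.get_waiter("db_snapshot_available")
    plain = snapshot_name(instance_id, "plain")
    encrypted = snapshot_name(instance_id, "encrypted")
    # 1. Snapshot the unencrypted instance.
    rds.create_db_snapshot(DBInstanceIdentifier=instance_id,
                           DBSnapshotIdentifier=plain)
    waiter.wait(DBSnapshotIdentifier=plain)
    # 2. Copy the snapshot with encryption enabled under the chosen KMS key.
    rds.copy_db_snapshot(SourceDBSnapshotIdentifier=plain,
                         TargetDBSnapshotIdentifier=encrypted,
                         KmsKeyId=kms_key_id)
    waiter.wait(DBSnapshotIdentifier=encrypted)
    # 3. Restore the encrypted snapshot to a new database instance.
    rds.restore_db_instance_from_db_snapshot(
        DBInstanceIdentifier=new_instance_id,
        DBSnapshotIdentifier=encrypted)
    # 4. Your application must then be repointed at the new endpoint.
```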

Jeff;

New AWS Enterprise Accelerator – Standardized Architecture for NIST 800-53 on the AWS Cloud

In the early days of AWS, customers were happy to simply learn about the cloud and its benefits. As they started to learn more, the conversation shifted. It went from “what is the cloud” to “what kinds of security does the cloud offer” to “how can I use the cloud” over the course of just 6 or 7 years. As the industry begins to mature, enterprise and government customers are now interested in putting the cloud to use in a form that complies with applicable standards and recommendations.

For example, National Institute of Standards and Technology (NIST) Special Publication 800-53 (Security and Privacy Controls for Federal Information Systems and Organizations) defines a set of information and security controls that are designed to make systems more resilient to many different types of threats. This document is accompanied by a set of certifications, accreditations, and compliance processes.

New Compliance Offerings
In order to simplify the task of building a system that is in accord with compliance standards of this type, we will be publishing a series of AWS Enterprise Accelerator – Compliance Quick Starts. These documents and CloudFormation templates are designed to help Managed Service Organizations, cloud provisioning teams, developers, integrators, and information system security officers.

The new AWS Enterprise Accelerator – Compliance: Standardized Architecture for NIST 800-53 on the AWS Cloud is our first offering in this series!

The accelerator contains a set of nested CloudFormation templates. Deploying the top-level template takes about 30 minutes and creates all of the necessary AWS resources. The resources include three Virtual Private Clouds (VPCs)—Management, Development, and Production—suitable for running a multi-tier Linux-based application.

The template also creates the necessary IAM roles and custom policies, VPC security groups, and the like. It launches EC2 instances and sets up an encrypted, Multi-AZ MySQL database (using Amazon Relational Database Service (RDS)) in the Development and Production VPCs.

The architecture defined by this template makes use of AWS best practices for security and availability, including the use of a Multi-AZ architecture, isolation of instances between public and private subnets, monitoring & logging, database backup, and encryption.

You also have direct access to the templates. You can download them, customize them, and extract interesting elements for use in other projects.

You can also add the templates for this Quick Start to the AWS Service Catalog as portfolios or as products. This will allow you to institute a centrally managed model, and will help you to support consistent governance, security, and compliance.

Jeff;

New – AWS Certificate Manager – Deploy SSL/TLS-Based Apps on AWS

I am fascinated by things that are simple on the surface and complex underneath! For example, consider the popular padlock icon that is used to signify that traffic to and from a web site is encrypted:

How does the browser know that it should display the green padlock? Well, that’s quite the story! It all starts with a digital file known as an SSL/TLS certificate.  This is an electronic document that is used to establish identity and trust between two parties. In this case, the two parties are the web site and the web browser.

SSL/TLS is a must-have whenever sensitive data is moved back and forth. For example, sites that need to meet compliance requirements such as PCI-DSS, FedRAMP, and HIPAA make extensive use of SSL/TLS.

Certificates are issued to specific domains by Certificate Authorities, also known as CAs. When you want to obtain a certificate for your site, the CA will confirm that you are responsible for the domain. Then it will issue a certificate that is valid for a specific amount of time, and only for the given domain (subdomains are allowed). Traditionally, you were also responsible for installing the certificate on your system, tracking expiration dates, and getting fresh certificates from time to time (typically, certificates are valid for a period of 12 months).

Each certificate is digitally signed; this allows the browser to verify that it was issued by a legitimate CA. To be a bit more specific, browsers start out with a small, predefined list of root certificates and use them to verify that the other certificates can be traced back to the root. You can access this information from your browser:

As you can probably see from what I have outlined above (even though I have hand-waved past a lot of interesting details), provisioning and managing SSL/TLS certificates can entail a lot of work, far too much of it manual and not easily automated. In many cases you also need to pay an annual fee for each certificate.

Time to change that!

New AWS Certificate Manager
The new AWS Certificate Manager (ACM) is designed to simplify and automate many of the tasks traditionally associated with management of SSL/TLS certificates. ACM takes care of the complexity surrounding the provisioning, deployment, and renewal of digital certificates! Certificates provided by ACM are verified by Amazon’s certificate authority (CA), Amazon Trust Services (ATS).

Even better, you can do all of this at no extra cost. SSL/TLS certificates provisioned through AWS Certificate Manager are free!

ACM will allow you to start using SSL in a matter of minutes. After you request a certificate, you can deploy it to your Elastic Load Balancers and your Amazon CloudFront distributions with a couple of clicks. After that, ACM can take care of the periodic renewals without any action on your part.
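If you prefer the API over the console, the request can be sketched with boto3. The wildcard subject alternative name mirrors the “naked domain plus first-level subdomains” choice; treat this as a sketch rather than the full workflow:

```python
def cert_request_kwargs(domain):
    """Cover the naked domain plus all first-level subdomains."""
    return {
        "DomainName": domain,
        "SubjectAlternativeNames": ["*." + domain],  # wildcard for subdomains
    }

def request_cert(domain):
    """Request the certificate (needs AWS credentials); returns its ARN."""
    import boto3
    acm = boto3.client("acm")
    return acm.request_certificate(**cert_request_kwargs(domain))["CertificateArn"]
```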

Provisioning and Deploying a Certificate
Let’s step through the process of provisioning and deploying a digital certificate using the console (APIs are also available). I’ll use one of my own domains (jeff-barr.com) for this exercise. I start by opening the AWS Certificate Manager Console and clicking on Get started.

Then I enter the domain name of the site that I want to secure. In this case I want to secure the “naked” domain and all of the first-level sub-domains within it:

Then I review my request and confirm my intent:

I flip over to my inbox, find the email or emails (one per domain) from Amazon (certificates.amazon.com), and click on Amazon Certificate Approvals:

I visit the site and click on I Approve:

And that’s all it takes! The certificate is now visible in the console:

Deploying the Certificate
After the certificate is issued, I can deploy it to my Elastic Load Balancers and/or CloudFront distributions.

Because ELB supports SSL offload, deploying a certificate to a load balancer (rather than to the EC2 instances behind it) will reduce the amount of encryption and decryption work that the instances need to handle.

And for a CloudFront distribution:

Available Now
AWS Certificate Manager (ACM) is available now in the US East (Northern Virginia) region, with additional regions in the works. You can provision, deploy, and renew certificates at no charge.

We plan to add support for other AWS services and for other types of domain validation. As always, your suggestions and feedback are more than welcome and will help us to prioritize our work.

If you are using AWS Elastic Beanstalk, take a look at Enabling SSL/TLS (for free) via AWS Certificate Manager.

Jeff;

New – GxP Compliance Resource for AWS

Ever since we launched AWS, customers have been curious about how they can use it to build and run applications that must meet many different types of regulatory requirements. For example, potential AWS users in the pharmaceutical, biotech, and medical device industries are subject to a set of guidelines and practices that are commonly known as GxP. In those industries, the x can represent Laboratory (GLP), Clinical (GCP), or Manufacturing (GMP).

These practices are intended to ensure that a product is safe and that it works as intended. Many of the practices are focused on traceability (the ability to reconstruct the development history of a drug or medical device) and accountability (the ability to learn who has contributed what to the development, and when they did it). For IT pros in regulated industries, GxP is important because it has requirements on how electronic records are stored, as well as how the systems that store these records are tested and maintained.

Because the practices became prominent at a time when static, on-premises infrastructure was the norm, companies have developed practices that made sense in that environment but not in the cloud. For example, many organizations perform point-in-time testing of their on-premises infrastructure and are not taking advantage of all that the cloud has to offer. With the cloud, practices such as dynamic verification of configuration changes, compliance-as-code, and the use of template-driven infrastructure are easy to implement and can have important compliance benefits.
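As a toy illustration of compliance-as-code (my own example, not taken from the resource), a check like the following can run on every configuration change instead of at audit time; the template and resource names are made up:

```python
def find_unencrypted_volumes(template):
    """Compliance-as-code sketch: scan a CloudFormation-style template
    (as a dict) and report EBS volumes without encryption enabled."""
    violations = []
    for name, resource in template.get("Resources", {}).items():
        if resource.get("Type") == "AWS::EC2::Volume":
            if not resource.get("Properties", {}).get("Encrypted", False):
                violations.append(name)
    return violations

template = {
    "Resources": {
        "DataVolume": {"Type": "AWS::EC2::Volume",
                       "Properties": {"Size": 100, "Encrypted": True}},
        "ScratchVolume": {"Type": "AWS::EC2::Volume",
                          "Properties": {"Size": 20}},
    }
}
print(find_unencrypted_volumes(template))  # ['ScratchVolume']
```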

New Resource
Customers are already running GxP workloads on AWS! In order to help speed adoption by other pharmaceutical and medical device manufacturers, we are publishing our new GxP compliance resource today.

The GxP position paper (Considerations for Using AWS Products in GxP Systems) provides interested parties with a brief overview of AWS and of the principal services, and then focuses on a discussion of how they can be used in a GxP system. The recommendations within the paper fit into three categories:

Quality Systems – This section addresses management, personnel, audits, purchasing controls, product assessment, supplier evaluation, supplier agreement, and records & logs.

System Development Life Cycle – This section addresses system development, validation, and operation. As I read this section of the document, it was interesting to learn how AWS’s software-defined, infrastructure-as-code model allows for better version control and is a great fit for GxP. The ability to use a common set of templates for development, test, and production environments that are all configured in the same way simplifies and streamlines several aspects of GxP compliance.

Regulatory Affairs – This section addresses regulatory submissions, inspections by health authorities, and personal data privacy controls.
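As a sketch of the template-driven approach described under System Development Life Cycle, a single template can be launched into each environment with only the parameters varying, so development, test, and production stay identically configured; all names, URLs, and instance sizes below are illustrative:

```python
def environment_stack(env):
    """One template, three environments: only the parameters change,
    so every environment is configured the same way."""
    sizes = {"dev": "t2.micro", "test": "t2.small", "prod": "m4.large"}
    return {
        "StackName": "gxp-app-" + env,
        "TemplateURL": "https://s3.amazonaws.com/my-bucket/app.template",  # placeholder
        "Parameters": [
            {"ParameterKey": "Environment", "ParameterValue": env},
            {"ParameterKey": "InstanceType", "ParameterValue": sizes[env]},
        ],
    }

# The same dict shape works for boto3's cloudformation.create_stack(**environment_stack("dev"))
```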

We hired Lachman Consultants (an internationally renowned compliance consulting firm), and had them contribute to and review an earlier draft of the position paper. The version that we are publishing today reflects their feedback.

Join our Webinar
If you are interested in building cloud-based systems that must adhere to GxP, please join our upcoming GxP Webinar. Scheduled for February 23, this webinar will give you an overview of the new GxP compliance resource and will show you how AWS can facilitate GxP compliance within your organization. You’ll learn about rules-based consistency, compliance-as-code, repeatable software-based testing, and much more.

Jeff;

PS – The AWS Life Sciences page is another great resource!

AWS Certification Update – ISO 27017

I am happy to announce that AWS has achieved ISO 27017 certification. This certification builds upon the ISO 27002 standard, with additional controls specifically applicable to cloud service providers. AWS is the first cloud provider to obtain this certification, which is available now for download on our compliance site. Additionally, we’ve posted a set of Frequently Asked Questions about ISO 27017 should you want to learn more about the regions and services included in the certification.

This certification is certainly good news for customers, providing additional transparency and independent assurance that we follow this internationally recognized cloud security code of practice. However, certifying that we follow yet another best practice won’t come as a surprise; we’ve already proven that information security is job #1 here at AWS. We have made massive investments in protecting customer data – investments that you, our customers, inherit when using our services. Global customers from a wide range of regulated industries (including healthcare, life sciences, federal and state governments, financial services, and public safety) continue to accelerate their use of AWS for their most critical and regulated workloads. Yes, our certifications and attestations are significant, but even more critical is the ability for you, on top of these assurances, to build your own advanced security and compliance capabilities.

With AWS services, our customers have access to innovative new cloud security features such as Amazon Inspector, AWS WAF (Web Application Firewall), and AWS Config Rules. These tools enhance the ability to manage security while establishing reliable and ubiquitous controls in AWS environments, allowing for compliance in a more comprehensive and transparent manner.

At AWS we routinely attain certifications, demonstrating we have a world-class security program, but more importantly we want you to have a world-class security program as well. To learn more about the innovative and industry-leading security capabilities we offer, view the links above and watch Steve Schmidt’s Keynote at re:Invent.

To learn more about how our customers are running sensitive workloads on AWS, take a look at some case studies:

- Healthcare and Life Sciences
- Financial Institutions
- Government / Public Sector
- Large Enterprise

Jeff;

In-Country Storage of Personal Data

My colleague Denis Batalov works out of the AWS Office in Luxembourg.  As a Solutions Architect, he is often asked about the in-country storage requirement that some countries impose on certain types of data. Although this requirement applies to a relatively small number of workloads, I am still happy that he took the time to write the guest post below to share some of his knowledge.

— Jeff;


AWS customers sometimes offer their services in countries where local requirements mandate that certain sensitive data be stored and processed within the applicable country, that is, in a datacenter physically located there. Examples of such sensitive data include financial transactions and personal data (also referred to in some countries as Personally Identifiable Information, or PII). Depending on the specific storage and processing requirements, one answer might be a hybrid architecture in which the component of the system responsible for collecting, storing, and processing the sensitive data is placed in-country, while the rest of the system resides in AWS. More information about hybrid architectures in general can be found on the Hybrid Architectures with AWS page.

The reference architecture diagram included below shows an example of a hypothetical web application hosted on AWS that collects personal data as part of its operation.  Since the collection of personal data may be required to occur in-country, the widget or form that is used to collect or display personal data (shown in red) is generated by a web server located in-country, while the rest of the web site (shown in green) is generated by the usual web server located in AWS. This way the authoritative copy of the personal data resides in-country and all updates to the data are also recorded in-country. Note that the data that is not required to be stored in-country can continue to be stored in the main database (or databases) residing in AWS.

This architecture still provides customers with the most important benefits of the cloud: it is flexible, scalable, and cost-effective.
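The split described above can be sketched as a simple routing rule: requests for pages that collect or display personal data go to the in-country web server, and everything else is served from AWS. The paths and hostnames are illustrative:

```python
IN_COUNTRY_PREFIXES = ("/profile", "/personal-data")  # illustrative paths

def origin_for(path):
    """Serve personal-data pages from the in-country web server;
    everything else comes from the web server in AWS."""
    if path.startswith(IN_COUNTRY_PREFIXES):
        return "https://in-country.example.com"
    return "https://aws.example.com"

print(origin_for("/profile/edit"))  # https://in-country.example.com
print(origin_for("/catalog"))       # https://aws.example.com
```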

There may be situations where a copy of personal data needs to be transferred across a national border, e.g. in order to fulfill contractual obligations, such as transferring the name, billing address, and payment method when a cross-border purchase is transacted. Where permitted by local legislation, a replica of the data (either complete or partial) can be transferred across the border via a secure channel. Data can be securely transferred over the public internet with the use of TLS, or using a VPN connection established between the Virtual Private Gateway of the VPC and the Customer Gateway residing in-country. Additionally, customers may establish private connectivity between AWS and an in-country datacenter by using AWS Direct Connect, which in many cases can reduce network costs, increase bandwidth throughput, and provide a more consistent network experience compared to Internet-based connections.

Alternatively, it may be possible to achieve certain processing outcomes in the AWS cloud by employing data anonymization. This is a type of information sanitization whose intent is privacy protection, commonly applied to highly sensitive personal information. It is the process of encrypting, tokenizing, or removing personally identifiable information from data sets, so that the people whom the data describe remain anonymous in a particular context. When the processed dataset is returned from the AWS cloud, it can be integrated into in-country databases to restore its personal context.
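As a minimal sketch of the tokenization approach — assuming a secret key that never leaves the country — PII fields can be replaced with HMAC tokens before the dataset crosses the border, with the token-to-value map retained in-country for later re-identification (the field names are illustrative):

```python
import hashlib
import hmac

SECRET_KEY = b"kept-in-country"  # placeholder; in practice this key stays in-country

def tokenize(record, pii_fields, token_map):
    """Replace PII fields with HMAC tokens; token_map (stored in-country)
    maps each token back to the original value."""
    out = dict(record)
    for field in pii_fields:
        if field in out:
            token = hmac.new(SECRET_KEY, out[field].encode(), hashlib.sha256).hexdigest()
            token_map[token] = out[field]
            out[field] = token
    return out

token_map = {}
safe = tokenize({"name": "Alice", "amount": 100}, ["name"], token_map)
# 'safe' can now be processed in AWS; re-identification happens in-country:
assert token_map[safe["name"]] == "Alice"
```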

— Denis

PS – Customers should, of course, seek advice from professionals who are familiar with details of the country-specific legislation to ensure compliance with any applicable local laws, as this example architecture is shown here for illustrative purposes only!