Category: Security


AWS Encryption SDK: How to Decide if Data Key Caching Is Right for Your Application

Today, the AWS Crypto Tools team introduced a new feature in the AWS Encryption SDK: data key caching. Data key caching lets you reuse the data keys that protect your data, instead of generating a new data key for each encryption operation.

Data key caching can reduce latency, improve throughput, reduce cost, and help you stay within service limits as your application scales. In particular, caching might help if your application is hitting the AWS Key Management Service (KMS) requests-per-second limit and raising the limit does not solve the problem.

However, these benefits come with some security tradeoffs. Encryption best practices generally discourage extensive reuse of data keys.

In this blog post, I explore those tradeoffs and provide information that can help you decide whether data key caching is a good strategy for your application. I also explain how data key caching is implemented in the AWS Encryption SDK and describe the security thresholds that you can set to limit the reuse of data keys. Finally, I provide some practical examples of using the security thresholds to meet cost, performance, and security goals.

Introducing data key caching

The AWS Encryption SDK is a client-side encryption library that makes it easier for you to implement cryptography best practices in your application. It includes secure default behavior for developers who are not encryption experts, while being flexible enough to work for the most experienced users.
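To make the thresholds concrete, here is a minimal sketch of data key caching with the aws-encryption-sdk library for Python. Exact class names vary across SDK versions, and the key ARN and threshold values below are placeholders, not recommendations:

```python
import aws_encryption_sdk

# Placeholder KMS CMK ARN; replace with your own key.
KEY_ARN = "arn:aws:kms:us-east-1:111122223333:key/example-key-id"

# A local cache that holds up to 100 data keys in memory.
cache = aws_encryption_sdk.LocalCryptoMaterialsCache(capacity=100)

master_key_provider = aws_encryption_sdk.KMSMasterKeyProvider(key_ids=[KEY_ARN])

# The caching CMM enforces the security thresholds discussed above:
# a maximum data key age and a cap on messages encrypted per key.
caching_cmm = aws_encryption_sdk.CachingCryptoMaterialsManager(
    master_key_provider=master_key_provider,
    cache=cache,
    max_age=300.0,               # reuse a data key for at most 5 minutes...
    max_messages_encrypted=10,   # ...and for at most 10 messages
)

ciphertext, header = aws_encryption_sdk.encrypt(
    source=b"example plaintext",
    materials_manager=caching_cmm,
)
```

In this sketch, a cached data key is discarded after five minutes or ten messages, whichever comes first, so you trade a bounded amount of key reuse for fewer calls to AWS KMS.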

Newly Updated: Example AWS IAM Policies for You to Use and Customize

To help you grant access to specific resources and conditions, the Example Policies page in the AWS Identity and Access Management (IAM) documentation now includes more than thirty policies for you to use or customize to meet your permissions requirements. The AWS Support team developed these policies from their experiences working with AWS customers over the years. The example policies cover common permissions use cases you might encounter across services such as Amazon DynamoDB, Amazon EC2, AWS Elastic Beanstalk, Amazon RDS, Amazon S3, and IAM.

In this blog post, I introduce the updated Example Policies page and explain how to use and customize these policies for your needs.

The new Example Policies page

The Example Policies page in the IAM User Guide now provides an overview of the example policies and includes a link to view each policy on a separate page. Note that each of these policies has been reviewed and approved by AWS Support. If you would like to submit a policy that you have found to be particularly useful, post it on the IAM forum.
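As a rough illustration of the customization step, the following sketch takes a common pattern (read-only access to a single S3 bucket) and registers a customized version of it as a managed policy with boto3. The bucket name, policy name, and statement below are placeholders rather than one of the reviewed policies verbatim:

```python
import json
import boto3

iam = boto3.client("iam")

# Placeholder policy modeled on the common "read-only access to one S3 bucket"
# pattern; swap in your own bucket name and tighten the actions as needed.
policy_document = {
    "Version": "2012-10-17",
    "Statement": [
        {
            "Effect": "Allow",
            "Action": ["s3:GetObject", "s3:ListBucket"],
            "Resource": [
                "arn:aws:s3:::example-bucket",
                "arn:aws:s3:::example-bucket/*",
            ],
        }
    ],
}

response = iam.create_policy(
    PolicyName="ExampleBucketReadOnly",
    PolicyDocument=json.dumps(policy_document),
)
print(response["Policy"]["Arn"])
```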

How to Monitor and Visualize Failed SSH Access Attempts to Amazon EC2 Linux Instances

As part of the AWS Shared Responsibility Model, you are responsible for monitoring and managing your resources at the operating system and application level. When you monitor your application servers, for example, you can measure, visualize, react to, and improve the security of those servers. You probably already do this on premises or in other environments, and you can adapt your existing processes, tools, and methodologies for use in the AWS Cloud. For more details about best practices for monitoring your AWS resources, see the “Manage Security Monitoring, Alerting, Audit Trail, and Incident Response” section in the AWS Security Best Practices whitepaper.

This blog post focuses on how to log and create alarms on invalid Secure Shell (SSH) access attempts. Implementing live monitoring and session recording facilitates the identification of unauthorized activity and can help confirm that remote users access only those systems they are authorized to use. With SSH log information in hand (such as invalid access type, bad private keys, and remote IP addresses), you can take proactive actions to protect your servers. For example, you can use an AWS Lambda function to adjust your server’s security rules when an alarm is triggered that indicates an invalid SSH access attempt.

In this post, I demonstrate how to use Amazon CloudWatch Logs to monitor SSH access to your application servers (Amazon EC2 Linux instances) so that you can detect rejected SSH connection requests and take action. I also show how to configure CloudWatch Logs to send SSH access logs from application servers that reside in a public subnet. Finally, I demonstrate how to visualize the number of attempts made to SSH into your application servers with bad private keys and invalid user names. Using these techniques and tools can help you improve the security of your application servers.

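To give a sense of the building blocks involved, here is a minimal sketch that turns failed SSH attempts already shipped to CloudWatch Logs into a metric and an alarm with boto3. The log group name, filter pattern, thresholds, and SNS topic below are placeholder assumptions, not the exact configuration from this post:

```python
import boto3

logs = boto3.client("logs")
cloudwatch = boto3.client("cloudwatch")

LOG_GROUP = "/var/log/secure"  # placeholder: the log group your CloudWatch Logs agents ship to
SNS_TOPIC_ARN = "arn:aws:sns:us-east-1:111122223333:ssh-alerts"  # placeholder

# Count log lines that sshd emits for unknown user names.
logs.put_metric_filter(
    logGroupName=LOG_GROUP,
    filterName="InvalidSSHUser",
    filterPattern='"Invalid user"',
    metricTransformations=[
        {
            "metricName": "InvalidSSHUserCount",
            "metricNamespace": "SSHMonitoring",
            "metricValue": "1",
        }
    ],
)

# Alarm when more than five invalid-user attempts occur within five minutes.
cloudwatch.put_metric_alarm(
    AlarmName="TooManyInvalidSSHUsers",
    Namespace="SSHMonitoring",
    MetricName="InvalidSSHUserCount",
    Statistic="Sum",
    Period=300,
    EvaluationPeriods=1,
    Threshold=5,
    ComparisonOperator="GreaterThanThreshold",
    AlarmActions=[SNS_TOPIC_ARN],
)
```

The alarm action could just as easily target an AWS Lambda function (via SNS) that tightens the instance's security group rules, as described above.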

How to Use AWS Organizations to Automate End-to-End Account Creation

AWS Organizations offers new capabilities for managing AWS accounts, including automated account creation via the Organizations API. For example, you can bring new development teams onboard by using the Organizations API to create an account, AWS CloudFormation templates to configure the account (such as for AWS Identity and Access Management [IAM] and networking), and service control policies (SCPs) to help enforce corporate policies.

In this blog post, I demonstrate the step-by-step process for end-to-end account creation in Organizations as well as how to automate the entire process. I also show how to move a new account into an organizational unit (OU).

Process overview

The following process flow diagram illustrates the steps required to create an account, configure the account, and then move it into an OU so that the account can take advantage of the centralized SCP functionality in Organizations. The tasks in the blue nodes occur in the organization's master account, and the task in the orange node occurs in the new member account I create. In this post, I provide a script (in both Bash/CLI and Python) that you can use to automate this account creation process, and I walk through each step shown in the diagram to explain the process in detail. For the purposes of this post, I use the AWS CLI in combination with CloudFormation to create and configure an account.
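For readers who prefer to see the moving parts before the full walkthrough, here is a minimal Python sketch of the create-and-move steps using the Organizations API. The email address, account name, and OU ID are placeholders, and the CloudFormation configuration step is omitted:

```python
import time
import boto3

org = boto3.client("organizations")

# Placeholders: use your own account email, account name, and target OU ID.
response = org.create_account(
    Email="dev-team-1@example.com",
    AccountName="dev-team-1",
)
request_id = response["CreateAccountStatus"]["Id"]

# Account creation is asynchronous; poll until the request finishes.
while True:
    status = org.describe_create_account_status(
        CreateAccountRequestId=request_id
    )["CreateAccountStatus"]
    if status["State"] != "IN_PROGRESS":
        break
    time.sleep(5)

if status["State"] == "SUCCEEDED":
    account_id = status["AccountId"]
    root_id = org.list_roots()["Roots"][0]["Id"]
    # Move the new member account from the organization root into the target OU.
    org.move_account(
        AccountId=account_id,
        SourceParentId=root_id,
        DestinationParentId="ou-examplerootid-exampleouid",  # placeholder OU ID
    )
```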

AWS Adds 12 More Services to Its PCI DSS Compliance Program

Twelve more AWS services have obtained Payment Card Industry Data Security Standard (PCI DSS) compliance, giving you more options, flexibility, and functionality to process and store sensitive payment card data in the AWS Cloud. The services were audited by Coalfire to ensure that they meet strict PCI DSS standards.

The newly compliant AWS services are:

AWS now offers 42 services that meet PCI DSS standards, giving administrators better control of their compliance frameworks and making workloads more efficient and cost-effective.

For more information about the AWS PCI DSS compliance program, see Compliance Resources, AWS Services in Scope by Compliance Program, and PCI DSS Compliance.

– Sara

How to Configure Even Stronger Password Policies to Help Meet Your Security Standards by Using AWS Directory Service for Microsoft Active Directory

With AWS Directory Service for Microsoft Active Directory (Enterprise Edition), also known as AWS Microsoft AD, you can now create and enforce custom password policies for your Microsoft Windows users. AWS Microsoft AD now includes five empty password policies that you can edit and apply with standard Microsoft password policy tools such as Active Directory Administrative Center (ADAC). With this capability, you are no longer limited to the default Windows password policy. Now, you can configure even stronger password policies and define lockout policies that specify when to lock out an account after failed login attempts.

In this blog post, I demonstrate how to edit these new password policies to help you meet your security standards by using AWS Microsoft AD. I also introduce the password attributes you can modify and demonstrate how to apply password policies to user groups in your domain.

How to Increase the Redundancy and Performance of Your AWS Directory Service for Microsoft AD Directory by Adding Domain Controllers

You can now increase the redundancy and performance of your AWS Directory Service for Microsoft Active Directory (Enterprise Edition), also known as AWS Microsoft AD, directory by deploying additional domain controllers. Adding domain controllers increases redundancy, resulting in even greater resilience and higher availability. This new capability enables you to have at least two domain controllers operating, even if an Availability Zone were to be temporarily unavailable. The additional domain controllers also improve the performance of your applications by enabling directory clients to load-balance their requests across a larger number of domain controllers. For example, AWS Microsoft AD enables you to use larger fleets of Amazon EC2 instances to run .NET applications that perform frequent user attribute lookups.

AWS Microsoft AD is a highly available, managed Active Directory built on actual Microsoft Windows Server 2012 R2 in the AWS Cloud. When you create your AWS Microsoft AD directory, AWS deploys two domain controllers that are exclusively yours in separate Availability Zones for high availability. Now, you can deploy additional domain controllers easily via the Directory Service console or API, by specifying the total number of domain controllers that you want.

AWS Microsoft AD distributes the additional domain controllers across the Availability Zones and subnets within the Amazon VPC where your directory is running. AWS deploys the domain controllers, configures them to replicate directory changes, monitors for and repairs any issues, performs daily snapshots, and updates the domain controllers with patches. This reduces the effort and complexity of creating and managing your own domain controllers in the AWS Cloud.

In this blog post, I create an AWS Microsoft AD directory with two domain controllers in each Availability Zone. This ensures that I always have at least two domain controllers operating, even if an entire Availability Zone were to be temporarily unavailable. To accomplish this, first I create an AWS Microsoft AD directory with one domain controller per Availability Zone, and then I deploy one additional domain controller per Availability Zone.

Solution architecture

The following diagram shows how AWS Microsoft AD deploys all the domain controllers in this solution after you complete Steps 1 and 2. In Step 1, AWS Microsoft AD deploys the two required domain controllers across multiple Availability Zones and subnets in an Amazon VPC. In Step 2, AWS Microsoft AD deploys one additional domain controller per Availability Zone and subnet.
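Under the hood, Step 2 amounts to raising the directory's desired total number of domain controllers. The following is a minimal sketch of that call with boto3; the directory ID and target count are placeholders:

```python
import boto3

ds = boto3.client("ds")

DIRECTORY_ID = "d-1234567890"  # placeholder directory ID

# Step 2: raise the total domain controller count from 2 to 4 so that
# each of the two Availability Zones ends up with two domain controllers.
ds.update_number_of_domain_controllers(
    DirectoryId=DIRECTORY_ID,
    DesiredNumber=4,
)

# Check the placement and status of the domain controllers as they deploy.
for dc in ds.describe_domain_controllers(DirectoryId=DIRECTORY_ID)["DomainControllers"]:
    print(dc["AvailabilityZone"], dc["DnsIpAddr"], dc["Status"])
```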

New Security Whitepaper Now Available: Use AWS WAF to Mitigate OWASP’s Top 10 Web Application Vulnerabilities

Today, we released a new security whitepaper: Use AWS WAF to Mitigate OWASP’s Top 10 Web Application Vulnerabilities. This whitepaper describes how you can use AWS WAF, a web application firewall, to address the top application security flaws as named by the Open Web Application Security Project (OWASP). Using AWS WAF, you can write rules to match patterns of exploitation attempts in HTTP requests and block requests from reaching your web servers. This whitepaper discusses manifestations of these security vulnerabilities, AWS WAF–based mitigation strategies, and other AWS services or solutions that can help address these threats.
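As a small taste of what such rules look like in code, the sketch below uses the classic AWS WAF API in boto3 to create a SQL injection match condition. The names are placeholders, and a complete configuration would also reference the condition from a rule and a web ACL with the action set to block:

```python
import boto3

waf = boto3.client("waf")  # classic AWS WAF; use "waf-regional" for ALB-based deployments

# Every classic AWS WAF mutation requires a fresh change token.
token = waf.get_change_token()["ChangeToken"]
match_set = waf.create_sql_injection_match_set(
    Name="example-sqli-match-set",
    ChangeToken=token,
)["SqlInjectionMatchSet"]

# Inspect the URL-decoded query string for SQL injection patterns.
token = waf.get_change_token()["ChangeToken"]
waf.update_sql_injection_match_set(
    SqlInjectionMatchSetId=match_set["SqlInjectionMatchSetId"],
    ChangeToken=token,
    Updates=[
        {
            "Action": "INSERT",
            "SqlInjectionMatchTuple": {
                "FieldToMatch": {"Type": "QUERY_STRING"},
                "TextTransformation": "URL_DECODE",
            },
        }
    ],
)
```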

– Vlad

New Information in the AWS IAM Console Helps You Follow IAM Best Practices

Today, we added new information to the Users section of the AWS Identity and Access Management (IAM) console to make it easier for you to follow IAM best practices. With this new information, you can more easily monitor users’ activity in your AWS account and identify access keys and passwords that you should rotate regularly. You can also better audit users’ MFA device usage and keep track of their group memberships. In this post, I show how you can use this new information to help you follow IAM best practices.

Monitor activity in your AWS account

The IAM best practice, monitor activity in your AWS account, encourages you to monitor user activity in your AWS account by using services such as AWS CloudTrail and AWS Config. In addition to monitoring usage in your AWS account, you should be aware of inactive users so that you can remove them from your account. By only retaining necessary users, you can help maintain the security of your AWS account.
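The key-age information the console now surfaces is also available programmatically. Here is a minimal sketch that flags access keys older than a placeholder 90-day rotation threshold using boto3:

```python
from datetime import datetime, timezone

import boto3

iam = boto3.client("iam")
MAX_KEY_AGE_DAYS = 90  # placeholder rotation threshold

# Flag access keys older than the rotation threshold, one user at a time.
for page in iam.get_paginator("list_users").paginate():
    for user in page["Users"]:
        keys = iam.list_access_keys(UserName=user["UserName"])["AccessKeyMetadata"]
        for key in keys:
            age = (datetime.now(timezone.utc) - key["CreateDate"]).days
            last_used = iam.get_access_key_last_used(
                AccessKeyId=key["AccessKeyId"]
            )["AccessKeyLastUsed"].get("LastUsedDate", "never")
            if age > MAX_KEY_AGE_DAYS:
                print(f"{user['UserName']}: {key['AccessKeyId']} is {age} days old, last used {last_used}")
```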

How to Facilitate Data Analysis and Fulfill Security Requirements by Using Centralized Flow Log Data

As an AWS Professional Services consultant, I work directly with AWS customers on a daily basis. One of my customers recently asked me to provide a solution that helps them fulfill their security requirements by sending flow log data from VPC Flow Logs to a central AWS account. This requirement is common at companies that want their logs available in one place for in-depth analysis. In addition, my customers regularly request a simple, scalable, and serverless solution that doesn’t require them to create and maintain custom code.

In this blog post, I demonstrate how to configure your AWS accounts to send flow log data from VPC Flow Logs to an Amazon S3 bucket located in a central AWS account by using only fully managed AWS services. The benefit of using fully managed services is that you can lower or even completely eliminate operational costs because AWS manages the resources and scales the resources automatically.

Solution overview

The solution in this post uses VPC Flow Logs, configured in a source account to send flow logs to an Amazon CloudWatch Logs log group. To receive the logs from multiple accounts, this solution uses a CloudWatch Logs destination in the central account. Finally, the solution uses fully managed Amazon Kinesis Firehose, which delivers streaming data to scalable and durable S3 object storage automatically, without the need to write custom applications or manage resources. After the logs are processed and stored in an S3 bucket, they can be transitioned automatically to a lower-cost, long-term storage solution (such as Amazon Glacier) to help meet any company-specific or industry-specific requirements for data retention.
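To sketch how the cross-account wiring might look, the following Python example creates the CloudWatch Logs destination in the central account and subscribes a source account's flow log group to it. The account IDs, ARNs, role, and log group name are placeholders, and the Kinesis Firehose delivery stream and S3 bucket are assumed to exist already:

```python
import json
import boto3

# --- Central (log destination) account ---------------------------------
central_logs = boto3.client("logs")  # use credentials for the central account

# Placeholders: a Firehose delivery stream that writes to the central S3
# bucket, and a role that lets CloudWatch Logs put records onto it.
destination = central_logs.put_destination(
    destinationName="CentralFlowLogDestination",
    targetArn="arn:aws:firehose:us-east-1:111111111111:deliverystream/flow-logs-to-s3",
    roleArn="arn:aws:iam::111111111111:role/CWLtoFirehoseRole",
)["destination"]

# Allow a source account to subscribe its log groups to this destination.
access_policy = {
    "Version": "2012-10-17",
    "Statement": [
        {
            "Effect": "Allow",
            "Principal": {"AWS": "222222222222"},  # placeholder source account ID
            "Action": "logs:PutSubscriptionFilter",
            "Resource": destination["arn"],
        }
    ],
}
central_logs.put_destination_policy(
    destinationName="CentralFlowLogDestination",
    accessPolicy=json.dumps(access_policy),
)

# --- Source account -----------------------------------------------------
source_logs = boto3.client("logs")  # use credentials for the source account

# Subscribe the VPC Flow Logs log group to the central destination.
source_logs.put_subscription_filter(
    logGroupName="vpc-flow-logs",      # placeholder log group name
    filterName="AllFlowLogsToCentralAccount",
    filterPattern="",                  # an empty pattern forwards every event
    destinationArn=destination["arn"],
)
```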