AWS Blog Official Blog of Amazon Web Services Wed, 16 Aug 2017 20:55:30 +0000 New – VPC Endpoints for DynamoDB Wed, 16 Aug 2017 20:48:05 +0000 <p>Starting today <a href="" title="">Amazon Virtual Private Cloud</a> (VPC) Endpoints for <a href="" title="">Amazon DynamoDB</a> are available in all public AWS regions. You can provision an endpoint right away using the <a href="" title="">AWS Management Console</a> or the <a href="" title="">AWS Command Line Interface (CLI)</a>. There are no additional costs for a VPC Endpoint for DynamoDB.</p> <p>Many AWS customers run their applications within an <a href="" title="">Amazon Virtual Private Cloud</a> (VPC) for security or isolation reasons. Previously, if you wanted your EC2 instances in your VPC to be able to access DynamoDB, you had two options. You could use an Internet Gateway (with a NAT Gateway or by assigning your instances public IPs), or you could route all of your traffic to your local infrastructure via VPN or <a href="" title="">AWS Direct Connect</a> and then back to DynamoDB. Both of these solutions had security and throughput implications, and it could be difficult to configure NACLs or security groups to restrict access to just DynamoDB. Here is a picture of the old infrastructure.<br /> <img class="aligncenter size-large wp-image-20514" style="border: 1px solid black" src="" alt="" width="640" height="351" /></p> <h3>Creating an Endpoint</h3> <p>Let’s create a VPC Endpoint for DynamoDB. 
We can make sure our region supports the endpoint with the <a title="" href="" target="_blank" rel="noopener noreferrer">DescribeVpcEndpointServices</a> API call.</p> <pre><code class="lang-bash"> <b>aws ec2 describe-vpc-endpoint-services --region us-east-1</b>
{
    &quot;ServiceNames&quot;: [
        &quot;com.amazonaws.us-east-1.dynamodb&quot;,
        &quot;com.amazonaws.us-east-1.s3&quot;
    ]
}
</code></pre> <p>Great, so I know my region supports these endpoints and I know which service name to use. I can grab one of my VPCs and provision an endpoint with a quick call to the CLI or through the console. Let me show you how to use the console.</p> <p>First I’ll navigate to the VPC console and select “Endpoints” in the sidebar. From there I’ll click “Create Endpoint”, which brings me to this handy console.</p> <p><img class="aligncenter size-large wp-image-20546" style="border: 1px solid black" src="" alt="" width="640" height="616" /></p> <p>You’ll notice the <a href="" title="">AWS Identity and Access Management (IAM)</a> policy section for the endpoint. This supports all of the <a href="">fine-grained access control</a> that DynamoDB supports in regular IAM policies, and you can restrict access based on IAM policy conditions.</p> <p>For now I’ll give full access to my instances within this VPC and click “Next Step”.</p> <p><img class="aligncenter size-large wp-image-20547" style="border: 1px solid black" src="" alt="" width="640" height="448" /></p> <p>This brings me to a list of route tables in my VPC and asks me which of these route tables I want to assign my endpoint to. 
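For reference, the same endpoint can be provisioned with a single CLI call. The VPC and route table IDs below are placeholders; substitute your own:

```shell
# Create a VPC Endpoint for DynamoDB and associate it with a route table.
# vpc-1a2b3c4d and rtb-1a2b3c4d are placeholder IDs.
aws ec2 create-vpc-endpoint \
    --region us-east-1 \
    --vpc-id vpc-1a2b3c4d \
    --service-name com.amazonaws.us-east-1.dynamodb \
    --route-table-ids rtb-1a2b3c4d
```

An IAM policy document can also be attached at creation time with the optional `--policy-document` flag; without it, the endpoint allows full access.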
I’ll select one of them and click “Create Endpoint”.</p> <p>Keep in mind the note of warning in the console: if you have source restrictions to DynamoDB based on public IP addresses, the source IP of your instances accessing DynamoDB will now be their private IP addresses.</p> <p>After adding the VPC Endpoint for DynamoDB to our VPC, our infrastructure looks like this.</p> <p><img class="aligncenter size-large wp-image-20515" style="border: 1px solid black" src="" alt="" width="640" height="353" /></p> <p>That’s it, folks! It’s that easy, and it’s provided at no cost. Go ahead and start using it today. If you need more details you can read the <a href="">docs here</a>.</p> AWS Partner Webinar Series – August 2017 Tue, 15 Aug 2017 21:59:00 +0000 <p>We love bringing our customers helpful information and we have another cool series we are excited to tell you about. The <a href="">AWS Partner Webinar Series</a> is a selection of live and recorded presentations covering a broad range of topics at varying technical levels and scale. A little different from our <a href="">AWS Online TechTalks</a>, each AWS Partner Webinar is hosted by an AWS solutions architect and an <a href="">AWS Competency Partner</a> who has successfully helped customers evaluate and implement the tools, techniques, and technologies of AWS.</p> <p><img class="aligncenter size-large wp-image-20520" src="" alt="" width="640" height="181" /></p> <p>Check out this month’s webinars and let us know which ones you found the most helpful! 
All schedule times are shown in the Pacific Time (PDT) time zone.</p> <p><span style="text-decoration: underline"><strong>Security Webinars</strong></span></p> <p><strong><span style="color: #000080">Sophos</span></strong><br /> <a href=";sc_campaign=AWS_Partner_namer_WEW_awspartner_ipc_20170623&amp;sc_publisher=aws&amp;sc_country=us&amp;sc_geo=namer&amp;sc_category=mult&amp;sc_outcome=awspartner&amp;source=em_70138000001HlnQ&amp;trk=aws_website_blog">Seeing More Clearly: ATLO Software Secures Online Training Solutions for Correctional Facilities with Sophos UTM on AWS</a><br /> August 17, 2017 | 10:00 AM PDT</p> <p><strong><span style="color: #000080">F5</span></strong><br /> <a href=";sc_campaign=AWS_Partner_namer_WEW_awspartner_ipc_20170623&amp;sc_publisher=aws&amp;sc_country=us&amp;sc_geo=namer&amp;sc_category=mult&amp;sc_outcome=awspartner&amp;source=em_70138000001Hlww&amp;trk=aws_website_blog">F5 on AWS: How MailControl Improved their Application Visibility and Security</a><br /> August 23, 2017 | 10:00 AM PDT</p> <p><span style="text-decoration: underline"><strong>Big Data Webinars</strong></span></p> <p><strong><span style="color: #000080">Tableau, Matillion, 47Lining, NorthBay</span></strong><br /> <a href=";sc_campaign=AWS_Partner_namer_WEW_awspartner_ipc_20170705&amp;sc_publisher=aws&amp;sc_content=awspartner_wew_partnermkt&amp;sc_country=us&amp;sc_geo=namer&amp;sc_category=mult&amp;sc_outcome=awspartner&amp;source=em_70138000001I8l7&amp;trk=aws_website_blog">Unlock Insights and Reduce Costs by Modernizing Your Data Warehouse on AWS</a><br /> August 22, 2017 | 10:00 AM PDT</p> <p><span style="text-decoration: underline"><strong>Storage Webinars</strong></span></p> <p><span style="color: #000080"><strong>StorReduce</strong></span><br /> <a 
href=";sc_campaign={{}}&amp;sc_publisher=aws&amp;sc_content=awspartner_wew_partnermkt&amp;sc_country=us&amp;sc_geo=namer&amp;sc_category=mult&amp;sc_outcome=awspartner&amp;source=em_70138000001IJcU&amp;trk=aws_website_blog">How Globe Telecom does Primary Backups via StorReduce to the AWS Cloud</a><br /> August 29, 2017 | 8:00 AM PDT</p> <p><span style="color: #000080"><strong>Commvault</strong></span><br /> <a href=";sc_campaign={{}}&amp;sc_publisher=aws&amp;sc_content=awspartner_wew_partnermkt&amp;sc_country=us&amp;sc_geo=namer&amp;sc_category=mult&amp;sc_outcome=awspartner&amp;source=em_70138000001IIXO&amp;trk=aws_website_blog">Moving Forward Faster: How Monash University Automated Data Movement for 3500 Virtual Machines to AWS with Commvault</a><br /> August 29, 2017 | 1:00 PM PDT</p> <p><span style="color: #000080"><strong>Dell EMC</strong></span><br /> <a href=";sc_campaign=AWS_Partner_namer_WEBINAR_awspartner_ipc_20170714&amp;sc_publisher=aws&amp;sc_country=us&amp;sc_geo=namer&amp;sc_category=mult&amp;sc_outcome=awspartner&amp;source=em_70138000001IFcw&amp;trk=aws_website_blog">Moving Forward Faster: Protect Your Workloads on AWS With Increased Scale and Performance</a><br /> August 30, 2017 | 11:00 AM PDT</p> <p><span style="color: #000080"><strong>Druva</strong></span><br /> <a href=";sc_campaign=AWS_Partner_namer_WEW_awspartner_ipc_20170714&amp;sc_publisher=aws&amp;sc_content=awspartner_wew_partnermkt&amp;sc_country=us&amp;sc_geo=namer&amp;sc_category=mult&amp;sc_outcome=awspartner&amp;source=em_70138000001IFch&amp;trk=aws_website_blog">How Hatco Protects Against Ransomware with Druva on AWS</a><br /> September 13, 2017 | 10:00 AM PDT</p> AWS Summit New York – Summary of Announcements Mon, 14 Aug 2017 22:16:07 +0000 
<p>Whew – what a week! <a href="">Tara</a>, <a href="">Randall</a>, <a href="">Ana</a>, and I have been working around the clock to create blog posts for the announcements that we made at the <a href="">AWS Summit in New York</a>. Here’s a summary to help you to get started:</p> <p><a href=""><strong>Amazon Macie</strong></a> – This new service helps you to discover, classify, and secure content at scale. Powered by machine learning and making use of Natural Language Processing (NLP), <a href="">Macie</a> looks for patterns and alerts you to suspicious behavior, and can help you with governance, compliance, and auditing. You can read <a href="">Tara’s post</a> to see how to put Macie to work; you select the buckets of interest, customize the classification settings, and review the results in the Macie Dashboard.</p> <p><a href=""><strong>AWS Glue</strong></a> – <a href="">Randall’s post</a> (with deluxe animated GIFs) introduces you to this new extract, transform, and load (ETL) service. <a href="">Glue</a> is serverless and fully managed. As you can see from the post, Glue crawls your data, infers schemas, and generates ETL scripts in Python. You define jobs that move data from place to place, with a wide selection of transforms, each expressed as code and stored in human-readable form. Glue uses Development Endpoints and notebooks to provide you with a testing environment for the scripts you build. We also announced that <a href="">Amazon Athena now integrates with AWS Glue</a>, as does <a href="">Apache Spark and Hive on Amazon EMR</a>.</p> <p><a href=""><strong>AWS Migration Hub</strong></a> – This new service will help you to migrate your application portfolio to AWS. <a href="">My post</a> outlines the major steps and shows you how the Migration Hub accelerates, tracks, and simplifies your migration effort. 
You can begin with a discovery step, or you can jump right in and migrate directly. Migration Hub integrates with tools from our migration partners and builds upon the <a href="">Server Migration Service</a> and the <a href="">Database Migration Service</a>.</p> <p><a href=""><strong>CloudHSM Update</strong></a> – We made a major upgrade to <a href="" title="">AWS CloudHSM</a>, making the benefits of hardware-based key management available to a wider audience. The service is offered on a pay-as-you-go basis, and is fully managed. It is open and standards compliant, with support for multiple APIs, programming languages, and cryptography extensions. CloudHSM is an integral part of AWS and can be accessed from the <a href="" title="">AWS Management Console</a>, <a href="" title="">AWS Command Line Interface (CLI)</a>, and through API calls. Read <a href="">my post</a> to learn more and to see how to set up a CloudHSM cluster.</p> <p><a href=""><strong>Managed Rules to Secure S3 Buckets</strong></a> – We added two new rules to AWS Config that will help you to secure your S3 buckets. The <strong>s3-bucket-public-write-prohibited</strong> rule identifies buckets that have public write access and the <strong>s3-bucket-public-read-prohibited</strong> rule identifies buckets that have global read access. As I noted in <a href="">my post</a>, you can run these rules in response to configuration changes or on a schedule. The rules make use of some leading-edge constraint solving techniques, as part of a larger effort to use automated formal reasoning about AWS.</p> <p><a href=""><strong>CloudTrail for All Customers</strong></a> – Tara’s <a href="">post</a> revealed that <a href="" title="">AWS CloudTrail</a> is now available and enabled by default for all AWS customers. As a bonus, Tara reviewed the principal benefits of CloudTrail and showed you how to review your event history and to deep-dive on a single event. 
She also showed you how to create a second trail, for use with CloudWatch Events.</p> <p><a href=""><strong>Encryption of Data at Rest for EFS</strong></a> – When you create a new file system, you now have the option to select a key that will be used to encrypt the contents of the files on the file system. The encryption is done using an industry-standard AES-256 algorithm. <a href="">My post</a> shows you how to select a key and to verify that it is being used.</p> <p><span style="text-decoration: underline"><strong>Watch the Keynote</strong></span><br /> My colleagues Adrian Cockcroft and Matt Wood talked about these services and others on the stage, and also invited some AWS customers to share their stories. Here’s the <a href=";">video</a>:</p> <p><a href=";"><img class="aligncenter size-medium" src="" width="676" height="385" /></a></p> <p>— <a href="">Jeff</a></p> <p>&nbsp;</p> Launch – Hello Amazon Macie: Automatically Discover, Classify, and Secure Content at Scale Mon, 14 Aug 2017 16:18:30 +0000 <p>When Jeff and I heard about this service, we both were curious about the meaning of the name Macie. Jeff, being a great researcher, looked up the name Macie and found that it has two meanings. It has both French and English (UK) origins, is typically a girl’s name, and has various meanings. The first meaning we found said that the name meant “weapon”. The second noted the name was representative of a person who is bold, sporty, and sweet. 
In a way, these definitions are appropriate, as today I am happy to announce that we are launching <strong>Amazon Macie</strong>, a new security service that uses machine learning to help identify and protect sensitive data stored in AWS from breaches, data leaks, and unauthorized access, with <a href="" title="">Amazon Simple Storage Service (S3)</a> being the initial data store. Therefore, I can imagine that <strong>Amazon Macie</strong> could be described as a bold weapon for AWS customers, providing a sweet service with a sporty user interface that helps to protect against malicious access of your data at rest. Whew, that was a mouthful, but I unbelievably got all the Macie descriptions out in a single sentence! Nevertheless, I am thrilled to share with you the power of the new <strong>Amazon Macie</strong> service.</p> <p><img class="aligncenter size-full" src=" Launch-01-Header-wordmark-sm.png" width="650" height="119" /></p> <p><strong>Amazon Macie</strong> is a service powered by machine learning that can automatically discover and classify your data stored in Amazon S3. But Macie doesn’t stop there: once your data has been classified by Macie, it assigns each data item a business value, and then continuously monitors the data in order to detect any suspicious activity based upon access patterns. 
Key features of the Macie service include:</p> <ul> <li><strong>Data Security Automation:</strong> analyzes, classifies, and processes data to understand historical patterns, user authentications to data, data access locations, and times of access.</li> <li><strong>Data Security &amp; Monitoring:</strong> actively monitors usage log data for detected anomalies, along with automatic resolution of reported issues through CloudWatch Events and Lambda.</li> <li><strong>Data Visibility for Proactive Loss Prevention:</strong> provides management visibility into details of stored data while providing immediate protection without the need for manual customer input.</li> <li><strong>Data Research and Reporting:</strong> allows administrative configuration for reporting and alert management requirements.</li> </ul> <p><strong>How does Amazon Macie accomplish this, you ask?&nbsp;</strong></p> <p>Using machine learning algorithms for natural language processing (NLP), Macie can automate the classification of data in your S3 buckets. In addition, Amazon Macie takes advantage of predictive analytics algorithms, enabling data access patterns to be dynamically analyzed. Learnings are then used to inform you and to alert you to possible suspicious behavior. Macie also runs an engine specifically to detect common sources of personally identifiable information (PII) or sensitive personal information (SP).&nbsp;Macie takes advantage of AWS CloudTrail and continuously checks CloudTrail events for PUT requests in S3 buckets, automatically classifying new objects in almost real time.</p> <p>While Macie is a powerful tool to use for security and data protection in the AWS cloud, it can also aid you with governance, compliance requirements, and/or audit standards. &nbsp;Many of you may already be aware of the EU’s most stringent privacy regulation to date – the General Data Protection Regulation (GDPR), which becomes enforceable on May 25, 2018. 
As Amazon Macie recognizes personally identifiable information (PII) and provides customers with dashboards and alerts, it will enable customers to comply with GDPR regulations around encryption and pseudonymization of data. When combined with Lambda queries, Macie becomes a powerful tool to help remediate GDPR concerns.</p> <p><strong><u>Tour of the Amazon Macie Service</u></strong></p> <p>Let’s take a tour of the service and look at <strong>Amazon Macie</strong> up close and personal.</p> <p>First, I will log onto the Macie console and start the process of setting up Macie so that I can start my data classification and protection by clicking the <strong>Get Started</strong> button.</p> <p><img class="aligncenter size-full" src=" Launch-02-MacieConsole.png" width="1399" height="573" /><br /> As you can see, to enable the <a href=""><strong>Amazon Macie</strong></a> service, I must have the appropriate <strong>IAM</strong> roles created for the service, and additionally I will need to have <a href=""><strong>AWS CloudTrail</strong></a> enabled in my account.</p> <p><img class="aligncenter size-full" src=" Launch-03-EnableMacieError.png" width="1394" height="650" /></p> <p>I will create these roles and turn on the <a href="">AWS CloudTrail</a> service in my account. To make it easier to set up Macie, you can take advantage of the sample CloudFormation template provided in the <a href="">Macie User Guide</a> that will set up the required IAM roles and policies for you; you would then only need to set up a trail as noted in the <a href="">CloudTrail documentation</a>.</p> <p>If you have multiple AWS accounts, you should note that the account you use to enable the Macie service will be noted as the master account. You can integrate other accounts with the Macie service, but they will have the member account designation. 
Users from member accounts will need to use an IAM role to federate access to the master account in order to access the Macie console.</p> <p>Now that my IAM roles are created and CloudTrail is enabled, I will click the <strong>Enable Macie</strong> button to start Macie’s data monitoring and protection.</p> <p><img class="aligncenter size-full" src=" Launch-04-EnableMacie.png" width="1393" height="658" /><img class="aligncenter size-full" src=" Launch-05-EnableMacieStandBy.png" width="1394" height="393" /><br /> Once Macie has finished starting the service in your account, you will be brought to the service main screen and any existing alerts in your account will be presented to you. Since I have just started the service, I currently have no alerts.</p> <p><img class="alignnone size-full" src=" Launch-06-MacieStart.png" width="1527" height="678" /><br /> Considering we are doing a tour of the Macie service, I will now integrate some of my S3 buckets with Macie. However, you <strong>do not have to specify</strong> any S3 buckets for Macie to start monitoring, since the service already uses the <strong><a href="">AWS CloudTrail</a></strong>&nbsp;Management API to analyze and process information. For this tour of Macie, I have decided to monitor some object level API events from certain buckets in CloudTrail.</p> <p>In order to integrate with S3, I will go to the <strong>Integrations</strong> tab of the Macie console.&nbsp; Once on the <strong>Integrations</strong> tab, I will see two options: <strong>Accounts</strong> and <strong>Services</strong>. The <strong>Accounts</strong> option is used to integrate member accounts with Macie and to set your data retention policy. 
Since I want to integrate specific S3 buckets with Macie, I’ll click the <strong>Services</strong> option to go to the <strong>Services</strong> tab.</p> <p><img class="aligncenter size-full" src=" Launch-07-Integrations.png" width="1544" height="726" /><br /> When I integrate Macie with the S3 service, a trail and an S3 bucket will be created to store logs about S3 data events. To get started, I will use the <strong>Select an account</strong> drop-down to choose an account. &nbsp;Once my account is selected, the services available for integration are presented. I’ll select the <strong>Amazon S3</strong> service by clicking the <strong>Add</strong> button.</p> <p><img class="aligncenter size-full" src=" Launch-08-Integrations-Services.png" width="1543" height="597" /><img class="aligncenter size-full" src=" Launch-09-Services-S3.png" width="1553" height="531" /></p> <p>Now I can select the buckets that I want Macie to analyze. Selecting the <strong>Review and Save</strong> button takes me to a screen where I confirm that I want object level logging by clicking the <strong>Save</strong> button.</p> <p><img class="aligncenter size-full" src=" Launch-10-Services-S3Buckets.png" width="1591" height="808" /></p> <p><img class="aligncenter size-full" src=" Launch-11-Services-S3Logging.png" width="1544" height="672" /></p> <p><img class="aligncenter size-full" src=" Launch-12-Services-S3Confirmation.png" width="1548" height="634" /><br /> Next, on our Macie tour, let’s look at how we can customize data classification with Macie.</p> <p>As we discussed, Macie will automatically monitor and classify your data. Once Macie identifies your data, it will classify your data objects by file and content type. Macie will also use a support vector machine (SVM) classifier to classify the content within S3 objects in addition to the metadata of the file. 
In machine learning, support vector machines are supervised learning models with associated learning algorithms used for classification and regression analysis of data. Macie trained the SVM classifier using a data set of varying content types, optimized to support accurate detection of data content, even including the source code you may write.</p> <p>Macie will assign only one content type per data object or file; however, you have the ability to enable or disable content types and file extensions in order to include or exclude them from classification. Once Macie classifies the data, it will assign the object a risk level between 1 and 10, with 10 being the highest risk and 1 being the lowest.</p> <p>To customize the classification of our data with Macie, I’ll go to the <strong>Settings</strong> tab. I am now presented with the choices available to enable or disable the Macie classification settings.</p> <p><img class="aligncenter size-full" src=" Launch-13-Settings-Classify.png" width="1458" height="728" /><br /> As an example during our tour of Macie, I will choose <strong>File extension</strong> and am presented with the list of file extensions that Macie tracks and uses for classification.</p> <p><img class="aligncenter size-full" src=" Launch-14-Settings-EditClassify.png" width="1550" height="771" /></p> <p>As a test, I’ll edit the <strong>apk</strong> file extension for <strong>Android application install file</strong> and disable monitoring of this file type by selecting <b>No – disabled</b> from the dropdown and clicking the Save button. 
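Conceptually, these per-extension settings act as a simple include/exclude filter in front of the classifier. Here is a minimal, purely illustrative sketch in Python; the names and defaults are hypothetical and are not part of any Macie API:

```python
# Illustrative sketch of Macie-style file extension settings.
# Maps extension -> enabled-for-classification flag, as edited in the Settings tab.
extension_settings = {
    "apk": False,  # disabled, as in the example above
    "py": True,
    "csv": True,
}

def should_classify(object_key: str) -> bool:
    """Return True if the object's extension is enabled for classification."""
    ext = object_key.rsplit(".", 1)[-1].lower() if "." in object_key else ""
    # Here, unknown extensions default to enabled; the real service may differ.
    return extension_settings.get(ext, True)

print(should_classify("app-release.apk"))  # False
print(should_classify("data/users.csv"))   # True
```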
Of course, later I will turn this back on since I want to keep my entire collection of data files safe, including my Android development binaries.</p> <p><img class="aligncenter size-full" src=" Launch-15-ChangeClassify.png" width="1519" height="740" /><br /> One last thing I want to note about data classification using Macie: the service provides visibility into how your data objects are being classified and highlights how critical or important your stored data assets are for compliance, for your personal data, and for your business.</p> <p>Now that we have explored the data that Macie classifies and monitors, the last stop on our service tour is the Macie dashboard.</p> <p><img class="aligncenter size-full" src=" Launch-16-MacieDashboard.png" width="1549" height="850" /></p> <p>&nbsp;</p> <p>The <strong>Macie Dashboard</strong> provides us with a complete picture of all of the data and activity that has been gathered as Macie monitors and classifies our data. The dashboard displays Metrics and Views grouped by categories to provide different visual&nbsp;perspectives of your data. Within these dashboard screens, you can also go from a metric perspective directly to the <strong>Research</strong> tab to build and run queries based on the metric. These queries can be used to set up customized alerts for notification of any possible security issues or problems. 
We won’t have an opportunity to tour the <strong>Research</strong> or <strong>Alerts</strong> tab, but you can find out more information about these features in the Macie <a href="">user guide</a>.</p> <p>Turning back to the <strong>Dashboard</strong>, there are so many great resources in the Macie Dashboard that we will not be able to stop at each view, metric, and feature during our tour, so let me give you an overview of the features of the dashboard that you can take advantage of.</p> <p><strong>Dashboard Metrics</strong> – monitored data grouped by the following categories:</p> <ul> <li><strong>High-risk S3 objects:</strong> data objects with risk levels of 8 through 10.</li> <li><strong>Total event occurrences:</strong> total count of all event occurrences since Macie was enabled.</li> <li><strong>Total user sessions:</strong> a 5-minute snapshot of CloudTrail data.</li> </ul> <p><img class="aligncenter size-full" src=" Launch-18-MetricViewDetail.png" width="1544" height="896" /></p> <p><strong>Dashboard Views</strong> – views to display various points of the monitored data and activity:</p> <ul> <li>S3 objects for a selected time range</li> <li>S3 objects</li> <li>S3 objects by personally identifiable information (PII)</li> <li>S3 objects by ACL</li> <li>CloudTrail events and associated users</li> <li>CloudTrail errors and associated users</li> <li>Activity location</li> <li>AWS CloudTrail events</li> <li>Activity ISPs</li> <li>AWS CloudTrail user identity types</li> </ul> <p><img class="aligncenter size-full" src=" Launch-19-S3ObjectView.png" width="1514" height="739" /></p> <p><img class="aligncenter size-full" src=" Launch-20-CloudTrailRiskView.png" width="1495" height="909" /></p> <p><strong><u>Summary</u></strong></p> <p>Well, that concludes our tour of the new and exciting <strong><a href="">Amazon Macie</a></strong> service. 
Amazon Macie is a sensational new service that uses the power of machine learning and deep learning to aid you in securing, identifying, and protecting your data stored in Amazon S3. Using natural language processing (NLP) to automate data classification, <strong><a href="">Amazon Macie</a></strong> enables you to easily get started with high accuracy classification and immediate protection of your data by simply enabling the service.&nbsp; The interactive dashboards give visibility to the where, what, who, and when of your information, allowing you to proactively analyze massive streams of data, data accesses, and API calls in your environment. Learn more about <strong>Amazon Macie</strong> by visiting the <a href="">product page</a> or the documentation in the <strong>Amazon Macie</strong> <a href="">user guide</a>.</p> <p>– <a href="">Tara</a></p> New – Encryption of Data at Rest for Amazon Elastic File System (EFS) Mon, 14 Aug 2017 16:04:01 +0000 <p>We launched <a href="" title="">Amazon Elastic File System</a> in production form a little over a year ago (see <a href="">Amazon Elastic File System – Production Ready in Three Regions</a> for more information). 
Later in the year we added <a href="">On-Premises access via Direct Connect</a> and made EFS available in the <span title="">US East (Ohio)</span> Region, following up this year with availability in the <span title="">EU (Frankfurt)</span> and <span title="">Asia Pacific (Sydney)</span> Regions.</p> <p><span style="text-decoration: underline"><strong>Encryption at Rest</strong></span><br /> Today we are adding support for encryption of data at rest. When you create a new file system, you can select a key that will be used to encrypt the contents of the files that you store on the file system. The key can be a built-in key that is managed by AWS or a key that you created yourself using <a href="" title="">AWS Key Management Service (KMS)</a>. File metadata (file names, directory names, and directory contents) will be encrypted using a key managed by AWS. Both forms of encryption are implemented using an industry-standard AES-256 algorithm.</p> <p>You can set this up in seconds when you create a new file system. You simply choose the built-in key (<strong>aws/elasticfilesystem</strong>) or one of your own:</p> <p><img class="aligncenter size-medium" src="" width="892" height="342" /></p> <p>EFS will take care of the rest! You can select the filesystem in the console to verify that it is encrypted as desired:</p> <p><img class="aligncenter size-medium" src="" width="753" height="433" /></p> <p>A cryptographic algorithm that meets the approval of FIPS 140-2 is used to encrypt data and metadata. The encryption is transparent and has a minimal effect on overall performance.</p> <p>You can use <a href="" title="">AWS Identity and Access Management (IAM)</a> to control access to the Customer Master Key (CMK). The CMK must be enabled in order to grant access to the file system; disabling the key prevents it from being used to create new file systems and blocks access (after a period of time) to existing file systems that it protects. 
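The key selection described above can also be made from the CLI when creating a file system. A sketch, where the creation token is arbitrary and the key ARN is a placeholder:

```shell
# Create an encrypted EFS file system with a customer-managed KMS key.
# Omit --kms-key-id to use the built-in aws/elasticfilesystem key.
aws efs create-file-system \
    --creation-token my-encrypted-fs \
    --encrypted \
    --kms-key-id arn:aws:kms:us-east-1:111122223333:key/example-key-id
```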
To learn more about your options, read <a href="">Managing Access to Encrypted File Systems</a>.</p> <p><span style="text-decoration: underline"><strong>Available Now</strong></span><br /> Encryption of data at rest is available now in all regions where EFS is supported, at no additional charge.</p> <p>— <a href="">Jeff</a></p> <p>&nbsp;</p> Launch – AWS Glue Now Generally Available Mon, 14 Aug 2017 15:49:19 +0000 <p>Today we’re excited to announce the general availability of <a href="" title="AWS Glue">AWS Glue</a>. <span title="">Glue</span> is a fully managed, serverless, and cloud-optimized extract, transform and load (ETL) service. <span title="">Glue</span> is different from other ETL services and platforms in a few very important ways.</p> <p>First, <span title="">Glue</span> is “serverless” – you don’t need to provision or manage any resources and you only pay for resources when <span title="">Glue</span> is actively running. Second, <span title="">Glue</span> provides crawlers that can automatically detect and infer schemas from many data sources, data types, and across various types of partitions. It stores these generated schemas in a centralized Data Catalog for editing, versioning, querying, and analysis. Third, <span title="">Glue</span> can automatically generate ETL scripts (in Python!) to translate your data from your source formats to your target formats. Finally, <span title="">Glue</span> allows you to create development endpoints that allow your developers to use their favorite toolchains to construct their ETL scripts. 
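As a sketch of the programmatic route, a crawler like the one used in the walkthrough below can also be defined and started from the CLI; the crawler name and IAM role here are hypothetical:

```shell
# Define a crawler over the flight data and kick it off.
# flights-crawler and AWSGlueServiceRole-flights are placeholder names.
aws glue create-crawler \
    --name flights-crawler \
    --role AWSGlueServiceRole-flights \
    --database-name flights \
    --table-prefix flights \
    --targets '{"S3Targets": [{"Path": "s3://crawler-public-us-east-1/flight/2016/csv/"}]}'

aws glue start-crawler --name flights-crawler
```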
Ok, let’s dive deep with an example.</p> <p>In my job as a Developer Evangelist I spend a lot of time traveling and I thought it would be cool to play with some flight data. The <a href="">Bureau of Transportation Statistics</a> is kind enough to share all of this data for anyone to use <a href="">here</a>. We can easily download this data and put it in an <a href="" title="">Amazon Simple Storage Service (S3)</a> bucket. This data will be the basis of our work today.</p> <h3>Crawlers</h3> <p><img class="aligncenter size-full" style="border: 1px solid black" src="" /></p> <p>First, we need to create a Crawler for our flights data from S3. We’ll select Crawlers in the Glue console and follow the on-screen prompts from there. I’ll specify <code>s3://crawler-public-us-east-1/flight/2016/csv/</code> as my first datasource (we can add more later if needed). Next, we’ll create a database called flights and give our tables a prefix of flights as well.</p> <p>The Crawler will go over our dataset, detect partitions through the various folders (in this case, months of the year), detect the schema, and build a table. We could add additional data sources and jobs into our crawler or create separate crawlers that push data into the same database, but for now let’s look at the autogenerated schema.</p> <p><img class="aligncenter size-large wp-image-20429" style="border: 1px solid black" src="" alt="" width="640" height="380" /></p> <p>I’m going to make a quick schema change to year, moving it from BIGINT to INT. Then I can compare the two versions of the schema if needed.<br /> <img class="aligncenter size-full" style="border: 1px solid black" src="" /></p> <p>Now that we know how to correctly parse this data let’s go ahead and do some transforms.</p> <h3>ETL Jobs</h3> <p>Now we’ll navigate to the Jobs subconsole and click Add Job. We’ll follow the prompts from there, giving our job a name, selecting a datasource, and an S3 location for temporary files.
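Before going further with the job, it’s worth noting that the crawler we configured above can also be created through the API. A hedged, boto3-style sketch using the names from this walkthrough (the IAM role ARN is a placeholder, and the actual call is commented out):

```python
# Sketch: the crawler configured in the console, expressed as API parameters.
# Assumes boto3 and an existing IAM role that can read the source bucket; the
# role ARN and crawler name below are placeholders.

def crawler_params(name, role_arn, s3_path, database, prefix):
    return {
        "Name": name,
        "Role": role_arn,               # role that can read the source data
        "DatabaseName": database,       # tables land in this database
        "TablePrefix": prefix,          # tables get this prefix
        "Targets": {"S3Targets": [{"Path": s3_path}]},
    }

params = crawler_params(
    "flights-crawler",                                 # hypothetical name
    "arn:aws:iam::123456789012:role/GlueCrawlerRole",  # placeholder role
    "s3://crawler-public-us-east-1/flight/2016/csv/",  # source from the post
    "flights",
    "flights",
)
# import boto3
# boto3.client("glue").create_crawler(**params)
```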
Next we add our target by specifying “Create tables in your data target”, with an S3 location and Parquet as the output format.<br /> <img class="aligncenter size-full" style="border: 1px solid black" src="" alt="" /></p> <p>After clicking next, we’re at a screen showing our various mappings proposed by <span title="">Glue</span>. Now we can make manual column adjustments as needed – in this case we’re just going to use the X button to remove a few columns that we don’t need.</p> <p><img class="aligncenter size-large wp-image-20418" style="border: 1px solid black" src="" alt="" width="640" height="344" /></p> <p>This brings us to my favorite part. This is what I absolutely love about <span title="">Glue</span>.</p> <p><img class="aligncenter size-large wp-image-20419" style="border: 1px solid black" src="" alt="" width="640" height="343" /></p> <p><span title="">Glue</span> generated a PySpark script to transform our data based on the information we’ve given it so far. On the left-hand side we can see a diagram documenting the flow of the ETL job. On the top right we see a series of buttons that we can use to add annotated data sources and targets, transforms, spigots, and other features. This is the interface I get if I click on transform.</p> <p><img class="aligncenter size-full wp-image-20421" style="border: 1px solid black" src="" alt="" width="800" height="892" /></p> <p>If we add any of these transforms or additional data sources, <span title="">Glue</span> will update the diagram on the left, giving us a useful visualization of the flow of our data. We can also just write our own code into the console and have it run. We can add triggers to this job that fire on completion of another job, a schedule, or on demand. That way if we add more flight data we can reload this same data back into S3 in the format we need.</p> <aside> I’m going to take a second here to point out why I think having your ETL jobs stored as code is so powerful.
Code is portable, testable, versionable, and human-readable. Many of us write code every day. We’re familiar with it and we can manipulate it easily. <span title="">Glue</span> saves me, as a developer, countless hours of messing around with setup and lets me get to the code that matters. </aside> <p>I could spend all day writing about the power and versatility of the jobs console, but <span title="">Glue</span> still has more features I want to cover. So, while I might love the script editing console, I know many people prefer their own development environments, tools, and IDEs. Let’s figure out how we can use those with <span title="">Glue</span>.</p> <h3>Development Endpoints and Notebooks</h3> <p>A Development Endpoint is an environment used to develop and test our <span title="">Glue</span> scripts. If we navigate to “Dev endpoints” in the <span title="">Glue</span> console we can click “Add endpoint” in the top right to get started. Next we’ll select a VPC and a security group that references itself, and then we wait for it to provision.</p> <p><img class="aligncenter size-full" style="border: 1px solid black" src="" /><br /> <img class="aligncenter size-full" style="border: 1px solid black" src="" /></p> <p>Once it’s provisioned we can create an <a href="">Apache Zeppelin</a> notebook server by going to actions and clicking create notebook server. We give our instance an IAM role and make sure it has permissions to talk to our data sources. Then we can either SSH into the server or connect to the notebook to interactively develop our script.</p> <p><img class="aligncenter size-full wp-image-20425" style="border: 1px solid black" src="" alt="" width="800" height="313" /></p> <h3>Pricing and Documentation</h3> <p>You can see detailed pricing information <a href="">here</a>. Glue crawlers, ETL jobs, and development endpoints are all billed in Data Processing Unit Hours (DPU-Hours), metered by the minute. Each DPU-Hour costs $0.44 in us-east-1.
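As a rough illustration of that pricing model, here is a small cost sketch using the us-east-1 rate quoted above ($0.44 per DPU-Hour, metered by the minute); it deliberately ignores any per-job minimum duration:

```python
# Rough Glue cost sketch. Uses the $0.44 per DPU-Hour us-east-1 rate quoted
# in the post; does not model any minimum billed duration.

RATE_PER_DPU_HOUR = 0.44

def job_cost(dpus, minutes):
    """Cost of a job that ran `minutes` minutes on `dpus` DPUs."""
    return dpus * (minutes / 60.0) * RATE_PER_DPU_HOUR

# e.g. a 10-DPU job that runs for 30 minutes: 10 * 0.5 * 0.44
cost = job_cost(10, 30)
```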
A single DPU provides 4vCPU and 16GB of memory.</p> <p>We’ve only covered about half of the features that <span title="">Glue</span> has so I want to encourage everyone who made it this far into the post to go read the <a href="">documentation</a> and <a href="">service FAQs</a>. <span title="">Glue</span> also has a rich and powerful <a href="">API</a> that allows you to do anything console can do and more.</p> <p>We’re also releasing two new projects today. The <a href="">aws-glue-libs</a> provide a set of utilities for connecting, and talking with <span title="">Glue</span>. The <a href="">aws-glue-samples</a> repo contains a set of example jobs.</p> <p>I hope you find that using <span title="">Glue</span> reduces the time it takes to start doing things with your data. Look for another post from me on <a href="" title="AWS Glue">AWS Glue</a> soon because I can’t stop playing with this new service.<br /> – <a href="">Randall</a></p> AWS CloudHSM Update – Cost Effective Hardware Key Management at Cloud Scale for Sensitive & Regulated Workloads Mon, 14 Aug 2017 15:38:59 +0000 64e51f4478236a12c123e7621e4a9ffc5ea37608 Our customers run an incredible variety of mission-critical workloads on AWS, many of which process and store sensitive data. As detailed in our Overview of Security Processes document, AWS customers have access to an ever-growing set of options for encrypting and protecting this data. For example, supports encryption of data at rest and in transit, […] <p>Our <a href="">customers</a> run an incredible variety of mission-critical workloads on AWS, many of which process and store sensitive data. As detailed in our <a href="">Overview of Security Processes</a> document, AWS customers have access to an ever-growing set of options for encrypting and protecting this data. 
For example, <a href="" title="">Amazon Relational Database Service (RDS)</a> supports encryption of data at rest and in transit, with options tailored for each supported database engine (MySQL, SQL Server, Oracle, MariaDB, PostgreSQL, and Aurora).</p> <p>Many customers use <a href="" title="">AWS Key Management Service (KMS)</a> to centralize their key management, with others taking advantage of the hardware-based key management, encryption, and decryption provided by <a href="" title="">AWS CloudHSM</a> to meet stringent security and compliance requirements for their most sensitive data and regulated workloads (you can read my post, <a href="">AWS CloudHSM – Secure Key Storage and Cryptographic Operations</a>, to learn more about Hardware Security Modules, also known as HSMs).</p> <p><span style="text-decoration: underline"><strong>Major CloudHSM Update</strong></span><br /> Today, building on what we have learned from our first-generation product, we are making a major update to CloudHSM, with a set of improvements designed to make the benefits of hardware-based key management available to a much wider audience while reducing the need for specialized operating expertise. Here’s a summary of the improvements:</p> <p><strong>Pay As You Go</strong> – CloudHSM is now offered under a pay-as-you-go model that is simpler and more cost-effective, with no up-front fees.</p> <p><strong>Fully Managed</strong> – CloudHSM is now a scalable managed service; provisioning, patching, high availability, and backups are all built-in and taken care of for you. Scheduled backups extract an encrypted image of your HSM from the hardware (using keys that only the HSM hardware itself knows) that can be restored only to identical HSM hardware owned by AWS. 
For durability, those backups are stored in <a href="" title="">Amazon Simple Storage Service (S3)</a>, and for an additional layer of security, encrypted again with server-side S3 encryption using an AWS KMS master key.</p> <p><strong>Open &amp; Compatible&nbsp;</strong> – CloudHSM is open and standards-compliant, with support for multiple APIs, programming languages, and cryptography extensions such as <a href="">PKCS #11</a>, <a href="">Java Cryptography Extension</a> (JCE), and Microsoft <a href="">CryptoNG</a> (CNG). The open nature of CloudHSM gives you more control and simplifies the process of moving keys (in encrypted form) from one CloudHSM to another, and also allows migration to and from other commercially available HSMs.</p> <p><strong>More Secure</strong> – CloudHSM Classic (the original model) supports the generation and use of keys that comply with <a href="">FIPS 140-2 Level 2</a>. We’re stepping that up a notch today with support for <a href="">FIPS 140-2 Level 3</a>, with security mechanisms that are designed to detect and respond to physical attempts to access or modify the HSM. Your keys are protected with exclusive, single-tenant access to tamper-resistant HSMs that appear within your Virtual Private Clouds (VPCs). CloudHSM supports quorum authentication for critical administrative and key management functions. This feature allows you to define a list of N possible identities that can access the functions, and then require at least M of them to authorize the action. It also supports multi-factor authentication using tokens that you provide.</p> <p><strong>AWS-Native</strong> – The updated CloudHSM is an integral part of AWS and plays well with other tools and services. 
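Stepping back to the quorum authentication feature mentioned under “More Secure”, the M-of-N rule can be pictured as a simple membership check. This is a toy conceptual illustration, not CloudHSM’s actual protocol:

```python
# Toy illustration of M-of-N quorum authorization: an action is allowed only
# if at least M distinct registered identities have signed off. Conceptual
# sketch only; identity names are made up.

def quorum_met(approvers, registered, m):
    """True if at least m distinct registered identities approved."""
    valid = set(approvers) & set(registered)  # ignore unknown identities
    return len(valid) >= m

registered = {"alice", "bob", "carol", "dave"}  # the N possible identities
ok = quorum_met({"alice", "carol"}, registered, m=2)          # quorum reached
blocked = quorum_met({"alice", "mallory"}, registered, m=2)   # one invalid
```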
You can create and manage a cluster of HSMs using the <a href="" title="">AWS Management Console</a>, <a href="" title="">AWS Command Line Interface (CLI)</a>, or API calls.</p> <p><span style="text-decoration: underline"><strong>Diving In</strong></span><br /> You can create CloudHSM clusters that contain 1 to 32 HSMs, each in a separate Availability Zone in a particular AWS Region. Spreading HSMs across AZs gives you high availability (including built-in load balancing); adding more HSMs gives you additional throughput. The HSMs within a cluster are kept in sync: performing a task or operation on one HSM in a cluster automatically updates the others. Each HSM in a cluster has its own Elastic Network Interface (ENI).</p> <p>All interaction with an HSM takes place via the AWS CloudHSM client. It runs on an EC2 instance and uses certificate-based mutual authentication to create secure (TLS) connections to the HSMs.</p> <p>At the hardware level, each HSM includes hardware-enforced isolation of crypto operations and key storage. 
Each customer HSM runs on dedicated processor cores.</p> <p><span style="text-decoration: underline"><strong>Setting Up a Cluster</strong></span><br /> Let’s set up a cluster using the <a href="">CloudHSM Console</a>:</p> <p><img class="aligncenter size-medium" src="" width="1258" height="322" /></p> <p>I click on <strong>Create cluster</strong> to get started, select my desired VPC and the subnets within it (I can also create a new VPC and/or subnets if needed):</p> <p><img class="aligncenter size-medium" src="" /></p> <p>Then I review my settings and click on <strong>Create</strong>:</p> <p><img class="aligncenter size-medium" src="" width="900" height="325" /></p> <p>After a few minutes, my cluster exists, but is uninitialized:</p> <p><img class="size-medium aligncenter" src="" width="900" height="211" /></p> <p>Initialization simply means retrieving a certificate signing request (the Cluster CSR):</p> <p><img class="aligncenter size-medium" src="" width="697" height="279" /></p> <p>And then creating a private key and using it to sign the request (these commands were copied from the <a href="">Initialize Cluster</a> docs and I have omitted the output. Note that ID identifies the cluster):</p> <div class="hide-language"> <pre class="unlimited-height-code"><code class="lang-bash">$ openssl genrsa -out CustomerRoot.key 2048
$ openssl req -new -x509 -days 365 -key CustomerRoot.key -out CustomerRoot.crt
$ openssl x509 -req -days 365 -in ID_ClusterCsr.csr \
    -CA CustomerRoot.crt \
    -CAkey CustomerRoot.key \
    -CAcreateserial \
    -out ID_CustomerHsmCertificate.crt
</code></pre> </div> <p>The next step is to apply the signed certificate to the cluster using the console or the CLI. After this has been done, the cluster can be activated by changing the password for the HSM’s administrative user, otherwise known as the Crypto Officer (CO).</p> <p>Once the cluster has been created, initialized and activated, it can be used to protect data.
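Looking back at the Create cluster step, the same call can be made programmatically. A hedged, boto3-style sketch (the subnet IDs are placeholders, and the actual call is commented out):

```python
# Sketch: the cluster-creation step from the console walkthrough, expressed
# as API parameters. Assumes boto3; subnet IDs and the HSM type value are
# placeholders for illustration. A cluster can hold 1 to 32 HSMs, one per
# subnet/AZ listed here to start with.

def cluster_params(subnet_ids, hsm_type="hsm1.medium"):
    return {
        "HsmType": hsm_type,      # assumed type value for this sketch
        "SubnetIds": subnet_ids,  # one subnet per desired Availability Zone
    }

params = cluster_params(["subnet-aaaa1111", "subnet-bbbb2222"])
# import boto3
# cluster = boto3.client("cloudhsmv2").create_cluster(**params)
```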
Applications can use the APIs in AWS CloudHSM SDKs to manage keys, encrypt &amp; decrypt objects, and more. The SDKs provide access to the CloudHSM client (running on the same instance as the application). The client, in turn, connects to the cluster across an encrypted connection.</p> <p><span style="text-decoration: underline"><strong>Available Today</strong></span><br /> The new HSM is available today in the <span title="">US East (Northern Virginia)</span>, <span title="">US West (Oregon)</span>, <span title="">US East (Ohio)</span>, and <span title="">EU (Ireland)</span> Regions, with more in the works. Pricing starts at $1.45 per HSM per hour.</p> <p>— <a href="">Jeff</a>;</p> New – Amazon Web Services Extends CloudTrail to All AWS Customers Mon, 14 Aug 2017 15:28:36 +0000 d31bd3ac39b94fdceadb403a0f6234628441ccc5 I have exciting news for all Amazon Web Services customers! I have been waiting patiently to share this great news with all of you and finally, the wait is over. is now enabled by default for ALL CUSTOMERS and will provide visibility into the past seven days of account activity without the need for you […] <p>I have exciting news for all Amazon Web Services customers! I have been waiting patiently to share this great news with all of you and finally, the wait is over. <a href="" title="">AWS CloudTrail</a> is now enabled <strong>by default</strong> for <strong>ALL CUSTOMERS</strong> and will provide visibility into the past seven days of account activity without the need for you to configure a trail in the service to get started. 
This new ‘always on’ capability provides the ability to view, search, and download the aforementioned account activity through the CloudTrail Event History.</p> <p><img class="aligncenter size-full" src="" width="700" height="212" /></p> <p>For those of you who haven’t taken advantage of <strong>AWS CloudTrail</strong> yet, let me explain why I am thrilled to have this essential service for operational troubleshooting and review, compliance, auditing, and security turned on by default for all AWS accounts.</p> <p><strong>AWS CloudTrail</strong> captures account activity and events for supported services made in your AWS account and sends the event log files to <a href="" title="">Amazon Simple Storage Service (S3)</a>, <a href="">CloudWatch Logs</a>, and <a title="undefined" href="" target="null">CloudWatch Events</a>. With CloudTrail, you typically create a trail, a configuration enabling logging of account activity and events. CloudTrail, then, fast-tracks your ability to analyze operational and security issues by providing visibility into the API activity happening in your AWS account. CloudTrail supports multi-region configurations, and when integrated with CloudWatch you can create triggers for events you want to monitor or create a subscription to send activity to <a href="" title="">AWS Lambda</a>.
Taking advantage of the CloudTrail service means that you have a searchable historical record of the calls made from your account by other AWS services, the <a href="" title="">AWS Command Line Interface (CLI)</a>, the <a href="" title="">AWS Management Console</a>, and <a href="" title="">AWS SDKs</a>.</p> <p><strong>The key features of AWS CloudTrail are: </strong></p> <ul> <li><strong>Always On:</strong>&nbsp;enabled on all AWS accounts and records your account activity upon account creation&nbsp;without the need to configure CloudTrail</li> <li><strong>Event History:</strong>&nbsp;view, search, and download your recent AWS account activity</li> <li><strong>Management Level Events:</strong>&nbsp;get details about administrative actions such as creation, deletion, and modification of EC2 instances or S3 buckets</li> <li><strong>Data Level Events:</strong>&nbsp;record all API actions on Amazon S3 objects and receive detailed information about API actions</li> <li><strong>Log File Integrity Validation:&nbsp;</strong>validate the integrity of log files stored in your S3 bucket</li> <li><strong>Log File Encryption:</strong> the service encrypts all log files delivered to your S3 bucket by default using S3 server-side encryption (SSE). You also have the option to encrypt log files with AWS Key Management Service (AWS KMS)</li> <li><strong>Multi-region Configuration:&nbsp;</strong>configure the service to deliver log files from multiple regions</li> </ul> <p>You can read more about the features of AWS CloudTrail on the <a href="">product detail page</a>.</p> <p>As my colleague, <a href="">Randall Hunt</a>, reminded me: CloudTrail is essential when helping customers to troubleshoot their solutions. What most of us at AWS, like those on the Technical Evangelist team or the great folks on the Solutions Architect team, will say is <strong><em>“Enable CloudTrail”</em></strong> so we can examine the details of what’s going on.
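Once enabled, that examination can be done programmatically as well. A hedged, boto3-style sketch of an Event History search over the last seven days (the event name filter is just an example; the actual call is commented out):

```python
# Sketch: searching the seven-day Event History programmatically, mirroring
# the console search. Assumes boto3; the event name is an example value.

from datetime import datetime, timedelta

def lookup_params(event_name, days=7):
    """Build LookupEvents parameters for the last `days` days."""
    end = datetime.utcnow()
    return {
        "LookupAttributes": [
            {"AttributeKey": "EventName", "AttributeValue": event_name}
        ],
        "StartTime": end - timedelta(days=days),
        "EndTime": end,
    }

params = lookup_params("CreateBucket")
# import boto3
# events = boto3.client("cloudtrail").lookup_events(**params)["Events"]
```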
Therefore, it’s no wonder that I am ecstatic to share that with this release, all AWS customers can view account activity by using the AWS console or the <a href="">AWS CLI/API</a>, including the ability to search and download seven days of account activity for operations of all supported services.</p> <p>With CloudTrail being enabled by default, all AWS customers can now log into CloudTrail and review their&nbsp;<strong>Event History.&nbsp;</strong>In this view, not only do you see the last seven days of events, but you can also select an event to view more information about it.</p> <p><img class="aligncenter size-full" src="" width="1000" height="441" /></p> <p><img class="aligncenter size-full" src="" width="1000" height="447" /></p> <p>Of course, if you want to access your CloudTrail log files directly or archive your logs for auditing purposes, you can still create a trail and specify the S3 bucket for your log file delivery. Creating a trail also allows you to deliver events to CloudWatch Logs and CloudWatch Events, and is a very easy process.</p> <p>After logging into the CloudTrail console, you would simply click the&nbsp;<strong>Create a trail </strong>button.</p> <p><img class="aligncenter size-full" src="" width="1000" height="383" /><br /> You would then enter a trail name in the <strong>Trail name</strong> text box and select the radio button for the option of applying your trail configuration to all regions or only for the region you are currently in. For this example, I’ll name my trail <strong>TEW-USEast-Region-Trail</strong> and select <strong>No</strong> for the <strong>Apply trail to all regions</strong> radio button. This means that this trail will only track events and activities in the current region, which right now is <strong><a href="">US-East</a> (N. Virginia)</strong>.
&nbsp;<strong>Please note:</strong> A best practice is to select <strong>Yes</strong> to the <strong>Apply trail to all regions</strong>&nbsp;option to ensure that you will capture all events related to your AWS account, including global service events.</p> <p><img class="aligncenter size-full" src="" width="1000" height="168" /><br /> Under <strong>Management events</strong>, I select the <strong>Read/Write events</strong> radio button option for which operations I want CloudTrail to track. In this case, I will select the <strong>All</strong> option.</p> <p><img class="aligncenter size-full" src="" width="1000" height="191" /></p> <p>The next step is to select the S3 buckets for which I want to track object-level operations. This is an optional step, but note that by default trails do not log Data Events. Therefore, if you want to track the S3 object event activity you can configure your trail to track Data Events for objects in the bucket you specify in the <strong>Data events</strong> section. I’ll select my <strong>aws-blog-tew-posts</strong> S3 bucket and keep the default option to track <strong>all Read/Write</strong> operations.</p> <p><img class="aligncenter size-full" src="" width="1000" height="388" /><br /> My final step in the creation of my trail is to select an S3 bucket in the <strong>Storage Location</strong> section of the console where I wish to house my CloudTrail logs. I can either have CloudTrail create a new bucket on my behalf or select an existing bucket in my account. I will opt to have CloudTrail create a new bucket for me, so I will enter a unique bucket name of <strong>tew-cloudtrail-logbucket</strong> in the text box. I want to make sure that I can find my logs easily, so I will expand the <strong>Advanced</strong> section of the Storage Location and add a prefix. This is most helpful when you want to add search criteria to logs being stored in your bucket.
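The console steps in this walkthrough boil down to a single trail-creation call. A hedged, boto3-style sketch using the trail and bucket names from this example (the prefix is a placeholder, and the actual call is commented out):

```python
# Sketch: the trail from this walkthrough as CreateTrail parameters. Assumes
# boto3; trail and bucket names come from the walkthrough, the key prefix is
# a placeholder.

def trail_params(name, bucket, prefix, all_regions=False):
    return {
        "Name": name,
        "S3BucketName": bucket,
        "S3KeyPrefix": prefix,
        # Best practice is True so global service events are captured too.
        "IsMultiRegionTrail": all_regions,
    }

params = trail_params("TEW-USEast-Region-Trail",
                      "tew-cloudtrail-logbucket",
                      "logs-prefix")  # placeholder prefix
# import boto3
# boto3.client("cloudtrail").create_trail(**params)
```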
For my prefix, I will just enter <strong>tew-2017</strong>. I’ll keep the default selections for the other <strong>Advanced</strong> options shown, which include choices for <strong>Encrypt log files</strong>, <strong>Enable log file validation</strong>, and <strong>Send SNS notification for every log file delivery.</strong></p> <p>That’s it! Once I click the <strong>Create</strong> button, I have successfully created a trail for <strong>AWS CloudTrail</strong>.</p> <p><img class="aligncenter size-full" src="" width="1000" height="527" /></p> <p>&nbsp;</p> <p><strong><u>Ready to get started? </u></strong></p> <p>You can learn more about <strong><a href="">AWS CloudTrail</a></strong> by visiting the service <a href="">product page</a>, the <a href="">CloudTrail documentation</a>, and/or AWS CloudTrail <a href="">frequently asked questions.</a> Head over to the CloudTrail service console to view and search your CloudTrail events, with or without a trail configured.</p> <p>Enjoy the new launch of CloudTrail for All AWS Customers, and all the goodness that you will get from taking advantage of this great service!</p> <p>– <a href="">Tara</a></p> AWS Config Update – New Managed Rules to Secure S3 Buckets Mon, 14 Aug 2017 15:28:16 +0000 196a5a91b755282de3469d44b4c463f2360c2975 captures the state of your AWS resources and the relationships between them. Among other features, it allows you to select a resource and then view a timeline of configuration changes that affect the resource (read Track AWS Resource Relationships With AWS Config to learn more). AWS Config rules extends Config with a powerful rule system, […] <p><a href="" title="">AWS Config</a> captures the state of your AWS resources and the relationships between them.
Among other features, it allows you to select a resource and then view a timeline of configuration changes that affect the resource (read <a href="">Track AWS Resource Relationships With AWS Config</a> to learn more).</p> <p><a href="">AWS Config rules</a> extends Config with a powerful rule system, with support for a “<a href="">managed</a>” collection of AWS rules as well as custom rules that you write yourself (my blog post, <a href="">AWS Config Rules – Dynamic Compliance Checking for Cloud Resources</a>, contains more info). The rules (<a href="" title="">AWS Lambda</a> functions) represent the ideal (properly configured and compliant) state of your AWS resources. The appropriate functions are invoked when a configuration change is detected and check to ensure compliance.</p> <p>You already have access to about three dozen managed rules. For example, here are some of the rules that check your EC2 instances and related resources:</p> <p><img class="aligncenter size-medium" src="" width="892" height="717" /></p> <p><span style="text-decoration: underline"><strong>Two New Rules</strong></span><br /> Today we are adding two new managed rules that will help you to secure your S3 buckets. You can enable these rules with a single click. The new rules are:</p> <p><img style="float: right;padding-left: 8px;padding-bottom: 8px;padding-top: 32px" src="" /><strong>s3-bucket-public-write-prohibited</strong>&nbsp;– Automatically identifies buckets that allow global write access. There’s rarely a reason to create this configuration intentionally since it allows<br /> unauthorized users to add malicious content to buckets and to delete (by overwriting) existing content. The rule checks all of the buckets in the account.</p> <p><strong>s3-bucket-public-read-prohibited</strong>&nbsp;– Automatically identifies buckets that allow global read access. This will flag content that is publicly available, including web sites and documentation. 
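The single-click enablement has an API-side counterpart. A hedged, boto3-style sketch of turning on one of the new managed rules (the managed-rule source identifier is assumed to be the upper-case form of the rule name; check the managed rules list for the exact value, and note the actual call is commented out):

```python
# Sketch: enabling an AWS-managed Config rule programmatically. Assumes
# boto3; the SourceIdentifier value is an assumption based on the rule name
# pattern for managed rules.

def managed_rule(name, identifier):
    return {
        "ConfigRuleName": name,
        "Source": {
            "Owner": "AWS",                 # AWS-managed, not a custom rule
            "SourceIdentifier": identifier, # which managed rule to enable
        },
    }

rule = managed_rule("s3-bucket-public-read-prohibited",
                    "S3_BUCKET_PUBLIC_READ_PROHIBITED")
# import boto3
# boto3.client("config").put_config_rule(ConfigRule=rule)
```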
This rule also checks all buckets in the account.</p> <p>Like the existing rules, the new rules can be run on a schedule or in response to changes detected by Config. You can see the compliance status of all of your rules at a glance:</p> <p><img class="aligncenter size-medium" src="" width="884" height="296" /></p> <p>Each evaluation runs in a matter of milliseconds; scanning an account with 100 buckets will take less than a minute. Behind the scenes, the rules are evaluated by a reasoning engine that uses some leading-edge constraint solving techniques that can, in many cases, address NP-complete problems in polynomial time (we did not resolve <a href="">P versus NP</a>; that would be far bigger news). This work is part of a larger effort within AWS, some of which is described in a <a href="" title="">AWS re:Invent</a> presentation: <a href="">Automated Formal Reasoning About AWS Systems</a>:</p> <p><a href=""><img class="aligncenter size-medium" src="" width="680" height="386" /></a></p> <p><span style="text-decoration: underline"><strong>Now Available</strong></span><br /> The new rules are available now and you can start using them today. Like the other rules, they are priced at $2 per rule per month.</p> <p>— <a href="">Jeff</a>;</p> AWS Migration Hub – Plan & Track Enterprise Application Migration Mon, 14 Aug 2017 14:34:56 +0000 6b91eab465bb3f41e97923c4f8ec4324a2861e00 About once a week, I speak to current and potential AWS customers in our Seattle Executive Briefing Center. While I generally focus on our innovation process, we sometimes discuss other topics, including application migration. When enterprises decide to migrate their application portfolios they want to do it in a structured, orderly fashion. These portfolios typically […] <p>About once a week, I speak to current and potential AWS customers in our Seattle Executive Briefing Center. 
While I generally focus on our innovation process, we sometimes discuss other topics, including application migration. When enterprises decide to migrate their application portfolios, they want to do it in a structured, orderly fashion. These portfolios typically consist of hundreds of complex Windows and Linux applications, relational databases, and more. Customers find themselves eager yet uncertain as to how to proceed. After spending time working with these customers, we have learned that their challenges generally fall into three major categories:</p> <p><strong>Discovery</strong> – They want to make sure that they have a deep and complete understanding of all of the moving parts that power each application.</p> <p><strong>Server &amp; Database Migration</strong> – They need to transfer on-premises workloads and database tables to the cloud.</p> <p><strong>Tracking / Management</strong> – With large application portfolios and multiple migrations happening in parallel, they need to track and manage progress in an application-centric fashion.</p> <p>Over the last couple of years we have launched a set of tools that address the first two challenges. The <a href="">AWS Application Discovery Service</a> automates the process of discovering and collecting system information, the <a href="">AWS Server Migration Service</a> takes care of moving workloads to the cloud, and the <a href="">AWS Database Migration Service</a> moves relational databases, NoSQL databases, and data warehouses with minimal downtime. Partners like <a href="">Racemi</a> and <a href="">CloudEndure</a> also offer migration tools of their own.</p> <p><span style="text-decoration: underline"><strong>New AWS Migration Hub</strong></span><br /> Today we are bringing this collection of AWS and partner migration tools together in the <a href="" title="">AWS Migration Hub</a>.
The hub provides access to the tools that I mentioned above, guides you through the migration process, and tracks the status of each migration, all in accord with the methodology and tenets described in our <a href="">Migration Acceleration Program</a> (MAP).</p> <p>Here’s the main screen. It outlines the migration process (discovery, migration, and tracking):</p> <p><img class="size-medium aligncenter" src="" width="892" height="671" /></p> <p>Clicking on <strong>Start discovery</strong> reveals the flow of the migration process:</p> <p><img class="aligncenter size-medium" src="" width="904" height="682" /></p> <p>It is also possible to skip the Discovery step and begin the migration immediately:</p> <p><img class="aligncenter size-medium" src="" width="892" height="670" /></p> <p>The Servers list is populated using data from an AWS migration service (<a href="">Server Migration Service</a> or <a href="">Database Migration Service</a>), partner tools, or data collected by the <a href="">AWS Application Discovery Service</a>:</p> <p><img class="aligncenter size-medium" src="" />I can click on <strong>Group as application</strong> to create my first application:</p> <p><img class="aligncenter size-medium" src="" /></p> <p>Once I identify some applications to migrate, I can track them in the <strong>Migrations</strong> section of the Hub:</p> <p><img class="aligncenter size-medium" src="" width="892" height="674" /></p> <p>The migration tools, if authorized, automatically send status updates and results back to Migration Hub, for display on the migration status page for the application.
Here you can see that <a href="">Racemi DynaCenter</a> and <a href="">CloudEndure Migration</a> have played their parts in the migration:</p> <p><img class="aligncenter size-medium" src="" width="892" height="667" /></p> <p>I can track the status of my migrations by checking the Migration Hub Dashboard:</p> <p><img class="aligncenter size-medium" src="" /></p> <p>Migration Hub works with migration tools from AWS and our Migration Partners; see the list of <a href="">integrated partner tools</a> to learn more:</p> <p><a href=""><img class="aligncenter size-medium" src="" /></a><span style="text-decoration: underline"><strong>Available Now</strong></span><br /> <a href="" title="">AWS Migration Hub</a> can manage migrations in any AWS Region that has the necessary migration tools available; the hub itself runs in the <span title="">US West (Oregon)</span> Region. There is no charge for the Hub; you pay only for the AWS services that you consume in the course of the migration.</p> <p>If you are ready to begin your migration to the cloud and are in need of some assistance, please take advantage of the services offered by our Migration Acceleration Partners. These organizations have earned their <a href="">migration competency</a> by repeatedly demonstrating their ability to deliver large-scale migration.</p> <p>— <a href="">Jeff</a>;</p>