AWS Blog

Amazon WorkDocs Update – Commenting & Reviewing Enhancements and a New Activity Feed

by Jeff Barr | in Amazon WorkDocs

As I have told you in the past, we like to drink our own Champagne at Amazon. Practically speaking, this means that we make use of our own services, tools, and applications as part of our jobs, and that we supply the development teams with feedback if we have an idea for an improvement or if we find something that does not work as expected.

I first talked about Amazon WorkDocs (which was originally called Zocalo) back in the middle of 2014, and have been using it ever since (at busy times I often have drafts of 7 or 8 posts circulating).

I upload drafts of every new blog post (usually as PDFs) to WorkDocs and then share them with the Product Manager, Product Marketing Manager, and other designated reviewers. The reviewers leave feedback for me, I update the draft, and I wait for more feedback. After a couple of iterations the draft settles down and I wait for the go-ahead to publish the post. The circle of reviewers often grows to include developers, senior management, and so forth. I simply share the document with them and look forward to even more feedback. My job is to read and to process all of the feedback (lots of suggestions, and the occasional question) as quickly as possible and to make sure that I did not miss anything.

Today I would like to tell you about some recent enhancements that make WorkDocs even more useful. We have added some more commenting and reviewing features, along with an activity feed.

Enhanced Commenting
Over the course of a couple of revisions, some comments will spur a discussion. There might be a question about the applicability of a particular feature or the value of a particular image. In order to make it easier to start and to continue conversations, WorkDocs now supports threaded replies. I simply click on Reply and respond to a comment:

It is displayed like this:

If I click on Private, the comment is accessible only to the person who wrote the original.

In order to strengthen my message, I can also use simple formatting (bold, italic, and strikethrough) in my comments. Here’s how I specify each one:

And here’s the result:

Clicking on the ? displays a handy guide to formatting:

When the time for comments has passed, I can now disable feedback with a single click:

To learn more about these features, read Giving Feedback in the WorkDocs User Guide.

Enhanced Reviewing
As the comments accumulate, I sometimes need to draw a reviewer’s attention to a particular comment. I can do this by entering an @ in the comment and then choosing their name from the popup menu:

The user is notified by email that their feedback is needed.

From time to time, a potential reviewer will come into possession of a URL to a WorkDocs document but will not have access to the document. They can now request access like this:

The request will be routed to the owner of the document via email for approval.

Similarly, someone who has been granted Viewer-level access can now request Contributor-level access:

Again, the request will be routed to the owner of the document via email for approval:

 

Activity Feed
With multiple blog posts out for review at any given time, keeping track of what’s coming and going can be challenging. In order to give me a big-picture view, WorkDocs now includes an Activity Feed. The feed shows me what is going on with my own documents and with those that have been shared with me. I can watch as files and folders are created, changed, removed, and commented on. I can also see who is making the changes and track the times when they were made:

I can enter a search term to control what I see in the feed:

And I can further filter the updates by activity type or by date:

Available Now
These features are available now and you can start using them today.

Jeff;

 

Amazon Elasticsearch Service support for Elasticsearch 5.1

by Tara Walker | in Amazon Elasticsearch Service, Launch

The Amazon Elasticsearch Service is a fully managed service that makes it easy to deploy, operate, and scale the Elasticsearch open-source search and analytics engine. We are excited to announce that Amazon Elasticsearch Service now supports Elasticsearch 5.1 and Kibana 5.1.

Elasticsearch 5 comes with a ton of new features and enhancements that customers can now take advantage of in Amazon Elasticsearch Service. Elements of the Elasticsearch 5 release are as follows:

  • Indexing performance: Improved Indexing throughput with updates to lock implementation & async translog fsyncing
  • Ingestion Pipelines: Incoming data can be sent to a pipeline that applies a series of ingestion processors, allowing transformation to the exact data you want to have in your search index. There are twenty processors included, from simple appending to complex regex applications
  • Painless scripting: Amazon Elasticsearch Service supports Painless, a new secure and performant scripting language for Elasticsearch 5. You can use scripting to change the precedence of search results, delete index fields by query, modify search results to return specific fields, and more.
  • New data structures: Lucene 6 data structures and new data types: half_float, text, keyword, and more complete support for dots-in-fieldnames
  • Search and Aggregations: Refactored search API, BM25 relevance calculations, Instant Aggregations, improvements to histogram aggregations & terms aggregations, and rewritten percolator & completion suggester
  • User experience: Strict settings and body & query string parameter validation, index management improvement, default deprecation logging, new shard allocation API, and new indices efficiency pattern for rollover & shrink APIs
  • Java REST client: simple HTTP/REST Java client that works with Java 7 and handles retry on node failure, as well as round-robin, sniffing, and logging of requests
  • Other improvements: Lazy unicast hosts DNS lookup, automatic parallel tasking of reindex, update-by-query, delete-by-query, and search cancellation by task management API

The compelling new enhancements of Elasticsearch 5 are meant to make the service faster and easier to use while providing better security. Amazon Elasticsearch Service is a managed service designed to aid customers in building, developing and deploying solutions with Elasticsearch by providing the following capabilities:

  • Multiple configurations of instance types
  • Amazon EBS volumes for data storage
  • Cluster stability improvement with dedicated master nodes
  • Zone awareness – Cluster node allocation across two Availability Zones in the region
  • Access Control & Security with AWS Identity and Access Management (IAM)
  • Various geographical locations/regions for resources
  • Amazon Elasticsearch domain snapshots for replication, backup and restore
  • Integration with Amazon CloudWatch for monitoring Amazon Elasticsearch domain metrics
  • Integration with AWS CloudTrail for configuration auditing
  • Integration with other AWS Services like Kinesis Firehose and DynamoDB for loading of real-time streaming data into Amazon Elasticsearch Service

Amazon Elasticsearch Service allows dynamic changes with zero downtime. You can add instances, remove instances, change instance sizes, change storage configuration, and make other changes dynamically.
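To give you a feel for this, here is a rough sketch of requesting a larger cluster through the AWS SDK for JavaScript (treat the domain name and instance count as placeholders for your own values):

var AWS = require('aws-sdk');
var es = new AWS.ES({region: 'us-east-1'});

// Ask Amazon Elasticsearch Service to grow the cluster; the change is applied
// in place, without taking the domain offline.
es.updateElasticsearchDomainConfig({
    DomainName: 'onboarding-domain',             // placeholder domain name
    ElasticsearchClusterConfig: {
        InstanceType: 'm3.medium.elasticsearch',
        InstanceCount: 4
    }
}, function(err, data) {
    if (err) console.log(err, err.stack);
    else     console.log('New cluster config requested:',
                         JSON.stringify(data.DomainConfig.ElasticsearchClusterConfig));
});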

The best way to highlight some of the aforementioned capabilities is with an example.

During a presentation at the IT/Dev conference, I demonstrated how to build a serverless employee onboarding system using Express.js, AWS Lambda, Amazon DynamoDB, and Amazon S3. In the demo, the information collected was personnel data stored in DynamoDB about an employee going through a fictional onboarding process. Imagine if the collected employee data could be searched, queried, and analyzed as needed by the company’s HR department. We can easily augment the onboarding system to add these capabilities by enabling the employee table to use DynamoDB Streams to trigger Lambda and store the desired employee attributes in Amazon Elasticsearch Service.

The result is the following solution architecture:

We will focus solely on how to dynamically store and index employee data in Amazon Elasticsearch Service each time an employee record is entered and subsequently stored in the database.
To add this enhancement to the existing onboarding solution, we will implement it as shown in the detailed cloud architecture diagram below:

Let’s look at how to implement the employee load process to the Amazon Elasticsearch Service, which is the first process flow shown in the diagram above.

Amazon Elasticsearch Service: Domain Creation

Let’s now visit the AWS Console to check out Amazon Elasticsearch Service with Elasticsearch 5 in action. As you probably guessed, from the AWS Console home, we select Elasticsearch Service under the Analytics group.

The first step in creating an Elasticsearch solution is to create a domain. You will notice that when creating an Amazon Elasticsearch Service domain, you now have the option to choose the Elasticsearch 5.1 version. Since we are discussing the launch of support for Elasticsearch 5, we will, of course, choose the 5.1 Elasticsearch engine version when creating our domain in the Amazon Elasticsearch Service.


After clicking Next, we will set up our Elasticsearch domain by configuring our instance and storage settings. The instance type and the number of instances for your cluster should be determined based upon your application’s availability, network volume, and data needs. A recommended best practice is to choose two or more instances in order to avoid possible data inconsistencies or split-brain failure conditions with Elasticsearch. Therefore, I will choose two instances/data nodes for my cluster and set up EBS as my storage device.

To understand how many instances you will need for your specific application, please review the blog post, Get Started with Amazon Elasticsearch Service: How Many Data Instances Do I Need, on the AWS Database blog.

All that is left for me is to set up the access policy and deploy the service. Once I create my service, the domain will be initialized and deployed.
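If you prefer to script this step, the same domain can be created with the AWS SDK for JavaScript. The sketch below mirrors the console choices above (the domain name, instance type, and volume size are placeholders):

var AWS = require('aws-sdk');
var es = new AWS.ES({region: 'us-east-1'});

// Create an Elasticsearch 5.1 domain with two data nodes and EBS (gp2) storage.
es.createElasticsearchDomain({
    DomainName: 'onboarding-domain',             // placeholder domain name
    ElasticsearchVersion: '5.1',
    ElasticsearchClusterConfig: {
        InstanceType: 'm3.medium.elasticsearch',
        InstanceCount: 2
    },
    EBSOptions: {
        EBSEnabled: true,
        VolumeType: 'gp2',
        VolumeSize: 10                           // GiB per data node
    }
}, function(err, data) {
    if (err) console.log(err, err.stack);
    else     console.log('Creating domain:', data.DomainStatus.DomainName);
});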

Now that I have my Elasticsearch service running, I need a mechanism to populate it with data. I will implement a dynamic data load process of the employee data to Amazon Elasticsearch Service using DynamoDB Streams.

Amazon DynamoDB: Table and Streams

Before I head to the DynamoDB console, I will quickly cover the basics.

Amazon DynamoDB is a scalable, distributed NoSQL database service. DynamoDB Streams provide an ordered, time-based sequence of every CRUD operation to the items in a DynamoDB table. Each stream record has information about the primary attribute modification for an individual item in the table. Streams execute asynchronously and can write stream records in practically real time. Additionally, a stream can be enabled when a table is created or can be enabled and modified on an existing table. You can learn more about DynamoDB Streams in the DynamoDB developer guide.

Now we will head to the DynamoDB console and view the OnboardingEmployeeData table.

This table has a primary partition key, UserID, which is a string, and a primary sort key, Username, which is also a string. We will use the UserID as the document ID in Elasticsearch. You will also notice that on this table, streams are enabled and the stream view type is New image. A stream that is set to the New image view type will have stream records that display the entire item record after it has been updated. You also have the option to have the stream present records that provide data items before modification, provide only the items’ key attributes, or provide old and new item information. If you opt to use the AWS CLI to create your DynamoDB table, the key information to capture is the Latest Stream ARN shown underneath the Stream Details section. A DynamoDB stream has a unique ARN identifier that is separate from the ARN of the DynamoDB table. The stream ARN will be needed to create the IAM policy for access permissions between the stream and the Lambda function.
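As a rough sketch, the same table could also be created programmatically with the AWS SDK for JavaScript; the response includes the Latest Stream ARN mentioned above (the throughput values here are placeholders):

var AWS = require('aws-sdk');
var dynamodb = new AWS.DynamoDB({region: 'us-east-1'});

// Create the onboarding table with a stream of NEW_IMAGE records enabled.
dynamodb.createTable({
    TableName: 'OnboardingEmployeeData',
    AttributeDefinitions: [
        {AttributeName: 'UserID',   AttributeType: 'S'},
        {AttributeName: 'Username', AttributeType: 'S'}
    ],
    KeySchema: [
        {AttributeName: 'UserID',   KeyType: 'HASH'},    // partition key
        {AttributeName: 'Username', KeyType: 'RANGE'}    // sort key
    ],
    ProvisionedThroughput: {ReadCapacityUnits: 5, WriteCapacityUnits: 5},
    StreamSpecification: {StreamEnabled: true, StreamViewType: 'NEW_IMAGE'}
}, function(err, data) {
    if (err) console.log(err, err.stack);
    else     console.log('Latest Stream ARN:', data.TableDescription.LatestStreamArn);
});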

IAM Policy

The first thing that is essential for any service implementation is getting the correct permissions in place. Therefore, I will first go to the IAM console to create a role and a policy for my Lambda function that will provide permissions for DynamoDB and Elasticsearch.

First, I will create a policy based upon an existing managed policy for Lambda execution with DynamoDB Streams.

This will take us to the Review Policy screen, which will have the selected managed policy details. I’ll name this policy Onboarding-LambdaDynamoDB-toElasticsearch and then customize it for my solution. The first thing you should notice is that the current policy allows access to all streams; however, the best practice is to have this policy access only the specific DynamoDB stream, by adding the Latest Stream ARN. Hence, I will alter the policy, add the ARN for the DynamoDB table, OnboardingEmployeeData, and validate the policy. The altered policy is shown below.

The only thing left is to add the Amazon Elasticsearch Service permissions to the policy. The core policy for Amazon Elasticsearch Service access permissions is shown below:
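Roughly speaking, that portion of the policy grants the es:ESHttp* actions; here is an illustrative sketch (the wildcard Resource is intentionally broad and will be narrowed to my domain ARN in the next step):

{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Action": [
        "es:ESHttpPost",
        "es:ESHttpPut",
        "es:ESHttpGet"
      ],
      "Resource": "*"
    }
  ]
}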

 

I will use this policy and add the specific Elasticsearch domain ARN as the Resource for the policy. This ensures that I have a policy that enforces the Least Privilege security best practice. With the Amazon Elasticsearch Service domain ARN added, I can validate and save the policy.

The best way to create a custom policy is to use the IAM Policy Simulator or to review the examples of AWS service permissions in the service documentation. You can also find examples of policies for a subset of AWS services here. Remember, you should only add the Elasticsearch permissions that are needed, following the Least Privilege security best practice; the policy shown above is used only as an example.

We will create the role that our Lambda function will use to gain access, and attach the aforementioned policy to that role.

AWS Lambda: DynamoDB triggered Lambda function

AWS Lambda is the core of the Amazon Web Services serverless computing offering. With Lambda, you can write and run code using supported languages for almost any type of application or backend service. Lambda will trigger your code in response to events from AWS services or from HTTP requests. Lambda will dynamically scale based upon workload and you only pay for your code execution.

We will have DynamoDB Streams trigger a Lambda function that will create an index and send data to Elasticsearch. Another option for this is to use the Logstash plugin for DynamoDB. However, since several of the Logstash processors are now included in the Elasticsearch 5.1 core, along with improved performance optimizations, I will opt to use Lambda to process my DynamoDB stream and load data to Amazon Elasticsearch Service.
Now let us head over to the AWS Lambda console and create the Lambda function for loading employee data to Amazon Elasticsearch Service.

Once in the console, I will create a new Lambda function by selecting the Blank Function blueprint, which takes me to the Configure Trigger page. Once on the trigger page, I will select DynamoDB as the AWS service that will trigger Lambda, and I provide the following trigger-related options:

  • Table: OnboardingEmployeeData
  • Batch size: 100 (default)
  • Starting position: Trim Horizon

I hit the Next button, and I am on the Configure Function screen. The name of my function will be ESEmployeeLoad and I will write this function in Node.js 4.3.

The Lambda function code is as follows:

var AWS = require('aws-sdk');
var path = require('path');

//Object for all the ElasticSearch Domain Info
var esDomain = {
    region: process.env.RegionForES,
    endpoint: process.env.EndpointForES,
    index: process.env.IndexForES,
    doctype: 'onboardingrecords'
};
//AWS Endpoint from created ES Domain Endpoint
var endpoint = new AWS.Endpoint(esDomain.endpoint);
//The AWS credentials are picked up from the environment.
var creds = new AWS.EnvironmentCredentials('AWS');

console.log('Loading function');
exports.handler = (event, context, callback) => {
    //console.log('Received event:', JSON.stringify(event, null, 2));
    console.log(JSON.stringify(esDomain));
    
    event.Records.forEach((record) => {
        console.log(record.eventID);
        console.log(record.eventName);
        console.log('DynamoDB Record: %j', record.dynamodb);
       
        var dbRecord = JSON.stringify(record.dynamodb);
        postToES(dbRecord, context, callback);
    });
};

function postToES(doc, context, lambdaCallback) {
    var req = new AWS.HttpRequest(endpoint);

    req.method = 'POST';
    req.path = path.join('/', esDomain.index, esDomain.doctype);
    req.region = esDomain.region;
    req.headers['presigned-expires'] = false;
    req.headers['Host'] = endpoint.host;
    req.body = doc;

    var signer = new AWS.Signers.V4(req , 'es');  // es: service code
    signer.addAuthorization(creds, new Date());

    var send = new AWS.NodeHttpClient();
    send.handleRequest(req, null, function(httpResp) {
        var respBody = '';
        httpResp.on('data', function (chunk) {
            respBody += chunk;
        });
        httpResp.on('end', function (chunk) {
            console.log('Response: ' + respBody);
            lambdaCallback(null,'Lambda added document ' + doc);
        });
    }, function(err) {
        console.log('Error: ' + err);
        lambdaCallback('Lambda failed with error ' + err);
    });
}

The Lambda function Environment variables are:
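(The variable names below come from the function code above; the values are placeholders for your own Region, domain endpoint, and index name.)

RegionForES   = us-east-1
EndpointForES = search-onboarding-domain-xxxxxxxxxxxx.us-east-1.es.amazonaws.com
IndexForES    = employees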

I will select an Existing role option and choose the ESOnboardingSystem IAM role I created earlier.

Upon completing my IAM role permissions for the Lambda function, I can review the Lambda function details and complete the creation of ESEmployeeLoad function.

I have completed the process of building my Lambda function to talk to Elasticsearch, and now I test my function by simulating data changes to my database.
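One easy way to simulate a change is to write a test item to the table with the SDK; the stream record it produces triggers ESEmployeeLoad automatically (the attribute values below are made up):

var AWS = require('aws-sdk');
var dynamodb = new AWS.DynamoDB({region: 'us-east-1'});

// Writing an item produces a NEW_IMAGE stream record, which invokes ESEmployeeLoad.
dynamodb.putItem({
    TableName: 'OnboardingEmployeeData',
    Item: {
        UserID:   {S: '1001'},                  // made-up test values
        Username: {S: 'jdoe'},
        Title:    {S: 'Solutions Architect'}
    }
}, function(err, data) {
    if (err) console.log(err, err.stack);
    else     console.log('Test item written to OnboardingEmployeeData');
});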

Now my function, ESEmployeeLoad, will execute upon changes to the data in my database from my onboarding system. Additionally, I can review the processing of the Lambda function to Elasticsearch by reviewing the CloudWatch logs.

Now I can alter my Lambda function to take advantage of the new features or go directly to Elasticsearch and utilize the new Ingest Mode. An example of this would be to implement a pipeline for my Employee record documents.

I can replicate this function for handling the badge updates to the employee record, and/or leverage other preprocessors against the employee data. For instance, if I wanted to do a search of data based upon a data parameter in the Elasticsearch document, I could use the Search API and get records from the dataset.
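As a sketch of that idea, the Search API can be called with the same signed-request pattern used in the function above (this reuses the esDomain, endpoint, and creds objects from that code; the field name in the example query is made up):

function searchES(query, callback) {
    var req = new AWS.HttpRequest(endpoint);

    // Lucene query-string search against the employee index, e.g. /employees/_search?q=...
    req.method = 'GET';
    req.path = path.join('/', esDomain.index, '_search') + '?q=' + encodeURIComponent(query);
    req.region = esDomain.region;
    req.headers['Host'] = endpoint.host;

    var signer = new AWS.Signers.V4(req, 'es');
    signer.addAuthorization(creds, new Date());

    new AWS.NodeHttpClient().handleRequest(req, null, function(httpResp) {
        var body = '';
        httpResp.on('data', function(chunk) { body += chunk; });
        httpResp.on('end', function() { callback(null, JSON.parse(body)); });
    }, callback);
}

// Example: searchES('badgeStatus:issued', function(err, results) { console.log(results.hits); });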

The possibilities are endless, and you can get as creative as your data needs dictate while maintaining great performance.

Amazon Elasticsearch Service: Kibana 5.1

All Amazon Elasticsearch Service domains using Elasticsearch 5.1 are bundled with Kibana 5.1, the latest version of the open-source visualization tool.

The companion visualization and analytics platform, Kibana, has also been enhanced in the Kibana 5.1 release. Kibana is used to view, search, and interact with Elasticsearch data using a myriad of different charts, tables, and maps. In addition, Kibana performs advanced data analysis on large volumes of data. Key enhancements of the Kibana release are as follows:

  • Visualization tool new design: Updated color scheme and maximization of screen real estate
  • Timelion: A visualization tool with a time-based query DSL
  • Console: Formerly known as Sense, now part of the core, using the same configuration for free-form requests to Elasticsearch
  • Scripted field language: Ability to use the new Painless scripting language in the Elasticsearch cluster
  • Tag Cloud Visualization: 5.1 adds a word-based graphical view of data sized by importance
  • More Charts: Return of previously removed charts and the addition of advanced views for X-Pack
  • Profiler UI: Provides an enhancement to the Profile API with a tree view
  • Rendering performance improvement: Discover performance fixes and a decrease in CPU load

Summary

As you can see, this release is expansive, with many enhancements to assist customers in building Elasticsearch solutions. Amazon Elasticsearch Service now supports 15 new Elasticsearch APIs and 6 new plugins, along with a broad set of operations for Elasticsearch 5.1.

You can read more about the supported operations for Elasticsearch in the Amazon Elasticsearch Service Developer Guide, and you can get started by visiting the Amazon Elasticsearch Service website or by signing in to the AWS Management Console.

Tara

 

Streamline AMI Maintenance and Patching Using Amazon EC2 Systems Manager | Automation

by Ana Visneski

Here to tell you about using Automation to streamline AMI maintenance and patching is Taylor Anderson, a Senior Product Manager with EC2.

-Ana


 

Last December at re:Invent, we launched Amazon EC2 Systems Manager, which helps you automatically collect software inventory, apply OS patches, create system images, and configure Windows and Linux operating systems. These capabilities enable automated configuration and ongoing management of systems at scale, and help maintain software compliance for instances running in Amazon EC2 or on-premises.

One feature within Systems Manager is Automation, which can be used to patch, update agents, or bake applications into an Amazon Machine Image (AMI). With Automation, you can avoid the time and effort associated with manual image updates, and instead build AMIs through a streamlined, repeatable, and auditable process.

Recently, we released the first public document for Automation: AWS-UpdateLinuxAmi. This document allows you to automate patching of Ubuntu, CentOS, RHEL, and Amazon Linux AMIs, as well as to automate the installation of additional site-specific packages and configurations.

More importantly, it makes it easy to get started with Automation, eliminating the need to first write an Automation document. AWS-UpdateLinuxAmi can also be used as a template when building your own Automation workflow. Windows users can expect the equivalent document―AWS-UpdateWindowsAmi―in the coming weeks.

AWS-UpdateLinuxAmi automates the following workflow:

  1. Launch a temporary EC2 instance from a source Linux AMI.
  2. Update the instance.
    • Invoke a user-provided, pre-update hook script on the instance (optional).
    • Update any AWS tools and agents on the instance, if present.
    • Update the instance’s distribution packages using the native package manager.
    • Invoke a user-provided post-update hook script on the instance (optional).
  3. Stop the temporary instance.
  4. Create a new AMI from the stopped instance.
  5. Terminate the instance.

Warning: Creation of an AMI from a running instance carries a risk that credentials, secrets, or other confidential information from that instance may be recorded to the new image. Use caution when managing AMIs created by this process.

Configuring roles and permissions for Automation

If you haven’t used Automation before, you need to configure IAM roles and permissions. This includes creating a service role for Automation, assigning a passrole to authorize a user to provide the service role, and creating an instance role to enable instance management under Systems Manager. For more details, see Configuring Access to Automation.

Executing Automation

  1. In the EC2 console, choose Systems Manager Services, Automations.
  2. Choose Run automation document.
  3. Expand Document name and choose AWS-UpdateLinuxAmi.
  4. Choose the latest document version.
  5. For SourceAmiId, enter the ID of the Linux AMI to update.
  6. For InstanceIamRole, enter the name of the instance role you created enabling Systems Manager to manage an instance (that is, it includes the AmazonEC2RoleforSSM managed policy). For more details, see Configuring Access to Automation.
  7. For AutomationAssumeRole, enter the ARN of the service role you created for Automation. For more details, see Configuring Access to Automation.
  8. Choose Run Automation.
  9. Monitor progress in the Automation Steps tab, and view step-level outputs.

After execution is complete, choose Description to view any outputs returned by the workflow. In this example, AWS-UpdateLinuxAmi returns the new AMI ID.

Next, choose Images, AMIs to view your new AMI.

There is no additional charge to use the Automation service, and any resources created by a workflow incur nominal charges. Note that if you terminate AWS-UpdateLinuxAmi before it reaches the “Terminate Instance” step, you should shut down the temporary instance created by the workflow yourself.

A CLI walkthrough of the above steps can be found at Automation CLI Walkthrough: Patch a Linux AMI.
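Similarly, the same execution can be started programmatically; here is a rough sketch with the AWS SDK for JavaScript (the AMI ID, role name, and role ARN are placeholders):

var AWS = require('aws-sdk');
var ssm = new AWS.SSM({region: 'us-east-1'});

// Start the AWS-UpdateLinuxAmi workflow with the same parameters entered in the console.
ssm.startAutomationExecution({
    DocumentName: 'AWS-UpdateLinuxAmi',
    Parameters: {
        SourceAmiId:          ['ami-12345678'],                                          // placeholder AMI ID
        InstanceIamRole:      ['ManagedInstanceRole'],                                   // placeholder role name
        AutomationAssumeRole: ['arn:aws:iam::123456789012:role/AutomationServiceRole']   // placeholder role ARN
    }
}, function(err, data) {
    if (err) console.log(err, err.stack);
    else     console.log('Automation execution ID:', data.AutomationExecutionId);
});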

Conclusion

Now that you’ve successfully run AWS-UpdateLinuxAmi, you may want to create default values for the service and instance roles. You can customize your workflow by creating your own Automation document based on AWS-UpdateLinuxAmi. For more details, see Create an Automation Document. After you’ve created your document, you can write additional steps and add them to the workflow.

Example steps include:

      • Updating an Auto Scaling group with the new AMI ID (aws:invokeLambdaFunction action type)
      • Creating an encrypted copy of your new AMI (aws:encryptedCopy action type)
      • Validating your new AMI using Run Command with the RunPowerShellScript document (aws:runCommand action type)

Automation also makes a great addition to a CI/CD pipeline for application bake-in, and can be invoked as a CLI build step in Jenkins. For details on these examples, be sure to check out the Automation technical documentation. For updates on Automation, Amazon EC2 Systems Manager, Amazon CloudFormation, AWS Config, AWS OpsWorks and other management services, be sure to follow the all-new Management Tools blog.

 

New – Instance Size Flexibility for EC2 Reserved Instances

by Jeff Barr | in Amazon EC2

Reserved Instances allow AWS customers to receive a significant discount on their EC2 usage (up to 75% when compared to On-Demand pricing), along with capacity reservation when the RIs are purchased for use in a specific Availability Zone (AZ).

Late last year we made Reserved Instances more flexible with the launch of Regional RIs that apply the discount to any AZ in the Region, along with Convertible RIs that allow you to change the instance family and other parameters associated with a Reserved Instance. Both types of RIs reduce your management overhead and provide you with additional options. When you use Regional RIs you can launch an instance without having to worry about launching in the AZ that is eligible for the RI discount. When you use Convertible RIs you can ensure that your RIs remain well-fitted to your usage, even as your choice of instance types and sizes varies over time.

Instance Size Flexibility
Effective March 1, your existing Regional RIs are even more flexible! All Regional Linux/UNIX RIs with shared tenancy now apply to all sizes of instances within an instance family and AWS region, even if you are using them across multiple accounts via Consolidated Billing. This will further reduce the time that you spend managing your RIs and will let you be even more creative and innovative with your use of compute resources.

All new and existing RIs are sized according to a normalization factor that is based on the instance size:

Instance Size    Normalization Factor
nano             0.25
micro            0.5
small            1
medium           2
large            4
xlarge           8
2xlarge          16
4xlarge          32
8xlarge          64
10xlarge         80
16xlarge         128
32xlarge         256

Let’s say you already own an RI for a c4.8xlarge. This RI now applies to any usage of a Linux/UNIX C4 instance with shared tenancy in the region. This could be:

  • One c4.8xlarge instance.
  • Two c4.4xlarge instances.
  • Four c4.2xlarge instances.
  • Sixteen c4.large instances.

It also includes other combinations such as one c4.4xlarge and eight c4.large instances.

If you own an RI that is smaller than the instance that you are running, you will be charged the pro-rated, On-Demand price for the excess. This means that you could buy an RI for a c4.4xlarge, use that instance most of the time, but scale up to a c4.8xlarge instance on occasion. We’ll do the math and you’ll pay only half of the On-Demand, per-hour price for the larger instance (as always, our goal is to give you access to compute power at the lowest possible cost). If you own an RI for a large instance and run a smaller instance, the RI price will apply to the smaller instance. However, the unused portion of the reservation will not accumulate.
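If you want to reason about this yourself, the math is just a ratio of normalization factors; here is a tiny sketch of the calculation (sizes and factors taken from the table above):

// Normalization factors from the table above.
var factors = {large: 4, xlarge: 8, '2xlarge': 16, '4xlarge': 32, '8xlarge': 64};

// Fraction of a running instance's hourly usage covered by one RI of the same family.
function coveredFraction(riSize, runningSize) {
    return Math.min(1, factors[riSize] / factors[runningSize]);
}

console.log(coveredFraction('4xlarge', '8xlarge'));   // 0.5 - the other half of the
                                                      // c4.8xlarge hour is billed On-Demand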

Now Available
This new-found flexibility is available now and will be applied automatically to your Regional Linux/UNIX RIs with shared tenancy, without any effort on your part.

Jeff;

AWS Monthly Online Tech Talks – March, 2017

by Tara Walker | in Events, Webinars

Unbelievably, it is March already. As you enter into the madness of March, don’t forget to take some time to learn more about the latest service innovations from AWS. Each month, we have a series of webinars targeting best practices and new service features in the AWS Cloud.

I have shared below the schedule for the live, online technical sessions scheduled for the month of March. Remember, these talks are free, but they fill up quickly, so register ahead of time. The scheduled times for the online tech talks are shown in the Pacific Time (PT) time zone.

Webinars featured this month are as follows:

Tuesday, March 21
  • 9:00 AM – 10:00 AM: Deploying a Data Lake in AWS (Big Data)
  • 10:30 AM – 11:30 AM: Optimizing the Data Tier for Serverless Web Applications (Databases)
  • 12:00 Noon – 1:00 PM: One Click Enterprise IoT Services (IoT)

Wednesday, March 22
  • 10:30 AM – 11:30 AM: ElastiCache Deep Dive: Best Practices and Usage Patterns (Databases)
  • 12:00 Noon – 1:00 PM: A Deeper Dive into Apache MXNet on AWS (AI)

Thursday, March 23
  • 9:00 AM – 10:00 AM: Developing Applications with the IoT Button (IoT)
  • 10:30 AM – 11:30 AM: Automating Management of Amazon EC2 Instances with Auto Scaling (Compute)

Friday, March 24
  • 10:30 AM – 11:30 AM: An Overview of Designing Microservices Based Applications on AWS (Compute)

Monday, March 27
  • 9:00 AM – 10:00 AM: How to get the most out of Amazon Polly, a text-to-speech service (AI)

Tuesday, March 28
  • 10:30 AM – 11:30 AM: Getting the Most Out of the New Amazon EC2 Reserved Instances Enhancements (Compute)
  • 12:00 Noon – 1:30 PM: Getting Started with AWS (Getting Started)

Wednesday, March 29
  • 9:00 AM – 10:00 AM: Best Practices for Managing Security Operations in AWS (Security)
  • 10:30 AM – 11:30 AM: Deep Dive on Amazon S3 (Storage)
  • 12:00 Noon – 1:00 PM: Log Analytics with Amazon Elasticsearch Service and Amazon Kinesis (Big Data)

Thursday, March 30
  • 9:00 AM – 10:00 AM: Active Archiving with Amazon S3 and Tiering to Amazon Glacier (Storage)
  • 10:30 AM – 11:30 AM: Deep Dive on Amazon Cognito (Mobile)
  • 12:00 Noon – 1:00 PM: Building a Development Workflow for Serverless Applications (Compute)

 

The AWS Online Tech Talks series covers a broad range of topics at varying technical levels. These technical sessions are led by AWS solutions architects and engineers and feature live demonstrations & customer examples. You can also check out the AWS on-demand webinar series on the AWS YouTube channel.

Tara

 

AWS Week in Review – March 6, 2017

by Jeff Barr | in Week in Review

This edition includes all of our announcements, content from all of our blogs, and as much community-generated AWS content as I had time for!

Monday, March 6
Tuesday, March 7
Wednesday, March 8
Thursday, March 9
Friday, March 10
Saturday, March 11
Sunday, March 12

Jeff;

 

AWS and Healthcare: View From the Floor of HIMSS17

by Ana Visneski | in AWS Health, AWS Marketplace, Events

Jordin Green is our guest writer today, with an inside view from the floor of HIMSS17.

-Ana


Empathy. It’s not always a word you hear associated with technology, but one might argue it should be a central tenet for the application of technology in healthcare, and I was reminded of that fact as I wandered the halls representing AWS at HIMSS17, the Healthcare Information and Management Systems Society annual meeting.

At Amazon, we’re taught to obsess over our customers, but this obsession takes on a new level of responsibility when those customers are directly working on improving the lives of patients and the overall wellness of society. Thinking about the challenges that healthcare professionals are dealing with every day drives home how important it is for AWS to ensure that using the cloud in healthcare is as frictionless as possible. So with that in mind I wanted to share some of the things I saw in and around HIMSS17 regarding healthcare and AWS.

I started my week at the HIMSS Cloud Computing Forum, which was a new full-day HIMSS pre-day focused on educating the healthcare community on cloud. I was particularly struck by the breadth of cloud use cases being explored throughout the industry, even compared to a few years ago. The program featured presentations on cloud-based care coordination, precision medicine, and security. Perhaps one of the most interesting presentations came from Jessica Kahn from the Center for Medicare & Medicaid Services (CMS), talking about the analytics platform that CMS has built in the cloud along with Nuna. Jessica talked about how a cloud-based platform allows CMS to make decisions based on data that is a month old, rather than a year or years old. Additionally, using an automated, rules-based approach, policymakers can directly query the data without having to rely on developers, bringing agility. Nuna has already gained a number of insights from hosting this de-identified Medicaid data for policy research, and is now looking to expand its services to private insurance.

The exhibition opened on Monday, and I was really excited to talk to customers about the new Healthcare & Life Sciences category in the AWS Marketplace. AWS Marketplace is a managed and curated software catalog that helps customers innovate faster and reduce costs, by making it easy to discover, evaluate, procure, immediately deploy and manage 3rd party software solutions. When a customer purchases software via the Marketplace, all of the infrastructure needed to run on AWS is deployed automatically, using the same pay-as-you-go pricing model that AWS uses. Creation of a dedicated category of healthcare is a huge step forward in making it easier for our customers to deploy cloud-based solutions. Our new category features telehealth solutions, products for managing HIPAA compliance, and products that can be used for revenue cycle management from AWS Partner Network (APN) such as PokitDok. We’re just getting started with this category; look for new additions throughout the year.

Later in the week, I spent time with a number of the APN Partners exhibiting at HIMSS this year, and it’s safe to say our ecosystem also had lots of moments to shine. Orion Health announced that they will migrate their Amadeus precision medicine platform to the AWS Cloud. Orion has been deploying on top of AWS for a while now, including notably the California-wide Health Information Exchange CalINDEX. Amadeus currently manages 110 million patient records; the migration will represent a significant volume of clinical data running on AWS. New APN Partner Merck announced a new Alexa skill challenge, asking developers to come up with new, innovative ways to use Alexa in the management of chronic disease. Healthcare Competency Partner ClearDATA announced its new fully managed Containers-as-a-Service product, which simplifies development of healthcare applications by providing developers with a HIPAA-compliant environment for building, testing, and deployment.

This is only a small sample of the activity going on at HIMSS this year, and it’s impossible to capture everything in one post. You can learn more about healthcare on AWS on our Cloud Computing in Healthcare page. Nonetheless, after spending four days diving in to healthcare IT, it was great to see how AWS is enabling our healthcare customers and partners to deliver solutions that are impacting millions of lives across the globe.

Jordin Green

AWS Database Migration Service – 20,000 Migrations and Counting

by Jeff Barr | in Amazon RDS, Amazon Redshift, AWS Database Migration Service

I first wrote about AWS Database Migration Service just about a year ago in my AWS Database Migration Service post. At that time I noted that over 1,000 AWS customers had already made use of the service as part of their move to AWS.

As a quick recap, AWS Database Migration Service and the Schema Conversion Tool (SCT) help our customers migrate their relational data from expensive, proprietary databases and data warehouses (either on premises or in the cloud, but with restrictive licensing terms either way) to more cost-effective cloud-based databases and data warehouses such as Amazon Aurora, Amazon Redshift, MySQL, MariaDB, and PostgreSQL, with minimal downtime along the way. Our customers tell us that they love the flexibility and the cost-effective nature of these moves. For example, moving to Amazon Aurora gives them access to a database that is MySQL and PostgreSQL compatible, at 1/10th the cost of a commercial database. Take a peek at our AWS Database Migration Service Customer Testimonials to see how Expedia, Thomas Publishing, Pega, and Veoci have made use of the service.

20,000 Unique Migrations
I’m pleased to be able to announce that our customers have already used AWS Database Migration Service to migrate 20,000 unique databases to AWS and that the pace continues to accelerate (we reached 10,000 migrations in September of 2016).

We’ve added many new features to DMS and SCT over the past year. Here’s a summary:

Learn More
Here are some resources that will help you to learn more and to get your own migrations underway, starting with some recent webinars:

Migrating From Sharded to Scale-Up – Some of our customers implemented a scale-out strategy in order to deal with their relational workload, sharding their database across multiple independent pieces, each running on a separate host. As part of their migration, these customers often consolidate two or more shards onto a single Aurora instance, reducing complexity, increasing reliability, and saving money along the way. If you’d like to do this, check out the blog post, webinar recording, and presentation.

Migrating From Oracle or SQL Server to Aurora – Other customers migrate from commercial databases such as Oracle or SQL Server to Aurora. If you would like to do this, check out this presentation and the accompanying webinar recording.

We also have plenty of helpful blog posts on the AWS Database Blog:

  • Reduce Resource Consumption by Consolidating Your Sharded System into Aurora – “You might, in fact, save bunches of money by consolidating your sharded system into a single Aurora instance or fewer shards running on Aurora. That is exactly what this blog post is all about.”
  • How to Migrate Your Oracle Database to Amazon Aurora – “This blog post gives you a quick overview of how you can use the AWS Schema Conversion Tool (AWS SCT) and AWS Database Migration Service (AWS DMS) to facilitate and simplify migrating your commercial database to Amazon Aurora. In this case, we focus on migrating from Oracle to the MySQL-compatible Amazon Aurora.”
  • Cross-Engine Database Replication Using AWS Schema Conversion Tool and AWS Database Migration Service – “AWS SCT makes heterogeneous database migrations easier by automatically converting source database schema. AWS SCT also converts the majority of custom code, including views and functions, to a format compatible with the target database.”
  • Database Migration—What Do You Need to Know Before You Start? – “Congratulations! You have convinced your boss or the CIO to move your database to the cloud. Or you are the boss, CIO, or both, and you finally decided to jump on the bandwagon. What you’re trying to do is move your application to the new environment, platform, or technology (aka application modernization), because usually people don’t move databases for fun.”
  • How to Script a Database Migration – “You can use the AWS DMS console or the AWS CLI or the AWS SDK to perform the database migration. In this blog post, I will focus on performing the migration with the AWS CLI.”

The documentation includes five helpful walkthroughs:

There’s also a hands-on lab (you will need to register in order to participate).

See You at a Summit
The DMS team is planning to attend and present at many of our upcoming AWS Summits and would welcome the opportunity to discuss your database migration requirements in person.

Jeff;

 

 

AWS Week in Review – February 27, 2017

by Jeff Barr | in Week in Review

This edition includes all of our announcements, content from all of our blogs, and as much community-generated AWS content as I had time for. Going forward I hope to bring back the other sections, as soon as I get my tooling and automation into better shape.

Monday, February 27
Tuesday, February 28
Wednesday, March 1
Thursday, March 2
Friday, March 3
Saturday, March 4
Sunday, March 5

Jeff;

 

Announcing the AWS Health Tools Repository

by Ana Visneski | in AWS Health, AWS Lambda

Tipu Qureshi and Ram Atur join us today with really cool news about a Git repository for AWS Health / Personal Health Dashboard.

-Ana


Today, we’re happy to release the AWS Health Tools repository, a community-based source of tools to automate remediation actions and customize Health alerts.

The AWS Health service provides personalized information about events that can affect your AWS infrastructure, guides you through scheduled changes, and accelerates the troubleshooting of issues that affect your AWS resources and accounts.  The AWS Health API also powers the Personal Health Dashboard, which gives you a personalized view into the performance and availability of the AWS services underlying your AWS resources. You can use Amazon CloudWatch Events to detect and react to changes in the status of AWS Personal Health Dashboard (AWS Health) events.

AWS Health Tools takes advantage of the integration of AWS Health, Amazon CloudWatch Events and AWS Lambda to implement customized automation in response to events regarding your AWS infrastructure. As an example, you can use AWS Health Tools to pause your deployments that are part of AWS CodePipeline when a CloudWatch event is generated in response to an AWS Health issue.
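As a rough sketch of that pattern (this is illustrative, not one of the published tools; the pipeline and stage names are placeholders), the Lambda function wired to the CloudWatch Events rule might look like this:

var AWS = require('aws-sdk');
var codepipeline = new AWS.CodePipeline();

// Invoked by a CloudWatch Events rule that matches AWS Health events; pauses
// promotion into the production stage until the event has been reviewed.
exports.handler = (event, context, callback) => {
    console.log('AWS Health event:', JSON.stringify(event.detail));

    codepipeline.disableStageTransition({
        pipelineName:   'my-app-pipeline',     // placeholder pipeline name
        stageName:      'Production',          // placeholder stage name
        transitionType: 'Inbound',
        reason: 'Paused automatically in response to an AWS Health event'
    }, callback);
};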


The AWS Health Tools repository empowers customers to effectively utilize AWS Health events by tapping in to the collective ingenuity and expertise of the AWS community. The repository is free, public, and hosted on an independent platform. Furthermore, the repository contains full source code, allowing you to learn and contribute. We look forward to working together to leverage the combined wisdom and lessons learned by our experts and experts in the broader AWS user base.

Here’s a sample of the AWS Health tools that you now have access to:

To get started using these tools in your AWS account, see the readme file on GitHub. We encourage you to use this repository to share with the AWS community the AWS Health Tools you have written.

-Tipu Qureshi and Ram Atur