AWS Official Blog
Amazon Cognito makes it easy for you to save user data such as app preferences in the AWS Cloud without writing any backend logic or managing any infrastructure. You can focus on creating a great app instead of worrying about creating server-side code to handle identity management, network state, storage, and sync.
Today we are giving you the ability to receive events (in the form of an Amazon Kinesis stream) when data managed by Cognito is updated or synchronized. You can use this stream to monitor user and app activity in real time. You can also route the event information to Amazon Redshift and analyze it using SQL queries or a wide selection of Business Intelligence (BI) tools.
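As a sketch of how such a stream might be consumed, the snippet below decodes one sync event payload and pulls out a few fields. The field names (`identityId`, `dataSetName`, `operation`) are illustrative assumptions about the event JSON, not a documented schema; consult the Cognito streams documentation for the actual format.

```python
import json

def parse_sync_event(record_data: bytes) -> dict:
    """Parse one Kinesis record payload (assumed to be UTF-8 JSON) emitted
    when a Cognito-managed dataset is updated or synchronized.

    The field names below are illustrative assumptions, not a documented schema.
    """
    event = json.loads(record_data.decode("utf-8"))
    return {
        "identity": event.get("identityId"),
        "dataset": event.get("dataSetName"),
        "operation": event.get("operation"),
    }

# Example payload, shaped the way a sync event might look:
sample = json.dumps({
    "identityId": "us-east-1:example-identity",
    "dataSetName": "app-preferences",
    "operation": "replace",
}).encode("utf-8")

print(parse_sync_event(sample))
```

In a real consumer you would feed this function the `Data` field of each record returned by a Kinesis `GetRecords` call.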
I am already looking forward to my fourth re:Invent!
This year’s conference will take place from October 6 to 9 in Las Vegas. We’ll be announcing more details and opening up the registration system in May. You can sign up here in order to receive email updates.
We are already working on the services, presentations, and sessions, not to mention the entertainment and some other surprises for you.
Here are some pictures from 2014 to whet your appetite for 2015.
Andy Jassy’s Keynote
Robots in Action
Women in Technology Luncheon
Founder Chat with Werner Vogels
Solution Architects in the Expo Hall
The re:Play Party
So, save the date and I will see you in Vegas!
Continuing with my series of interviews from the AWS Pop-up Loft, I spoke with Vaibhav Mallya of OfferLetter.io. We chatted about his time at Amazon and Twitter, and his motivation for founding his own company.
You can listen to the full interview to learn a lot more. As a special bonus for podcast listeners, Vaibhav made a special offer to those who use his services.
PS – Visit the AWS Podcast page and subscribe to make sure that you don’t miss any episodes!
Amazon Kinesis is at home in situations where data arrives in a continuous stream. It is real-time and elastic and you can use it to reliably deliver any amount of data to your mission-critical applications.
Today we are making an important change to Kinesis. You can now retrieve records (data) immediately after a successful PutRecords call. Until now, records would become accessible to GetRecords after a propagation delay that was typically between two and four seconds. You don’t need to make any changes to your application in order to benefit from this improvement (if you are using the Kinesis Client Library (KCL), you may want to configure it to poll more frequently in order to further reduce latency).
I believe that this improvement will make Kinesis an even better fit for many use cases. For example, some AWS customers use Kinesis as an integral part of their data ingestion and processing workflow. In this model, Kinesis functions as a high-performance elastic buffer between each processing step. Prior to today’s launch, the propagation delays present at each step could significantly increase the time that it takes to process raw data into actionable results. Now that the delays are a thing of the past, applications of this type can digest and process data faster than ever!
If you are using the Kinesis Client Library (KCL), the default polling interval is set to one poll per second. This is in accord with our recommended polling rate of one poll per shard per second per application, and allows multiple applications to concurrently process a single stream without hitting the Kinesis limit of five GetRecords calls per second per shard. Going beyond this limit will invoke the SDK’s exponential back-off logic and the perceived propagation delay will increase. To reduce the propagation delay that your consuming applications observe, you can change the default polling interval to a value between 200 and 250 milliseconds. Read the documentation on Kinesis Low Latency Processing to learn more about the configuration options and settings.
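To make the arithmetic behind these recommendations concrete, here is a small sketch (not an official formula) that computes the fastest polling interval each application can use while the combined call rate stays within the five-GetRecords-calls-per-second-per-shard limit:

```python
GETRECORDS_LIMIT_PER_SHARD = 5  # GetRecords calls per second per shard (Kinesis limit)

def min_polling_interval_ms(consuming_apps: int) -> float:
    """Return the shortest per-application polling interval (in milliseconds)
    that keeps the combined call rate at or under the per-shard limit."""
    calls_per_app_per_sec = GETRECORDS_LIMIT_PER_SHARD / consuming_apps
    return 1000.0 / calls_per_app_per_sec

# A single application can poll every 200 ms (5 calls/sec) without back-off;
# five applications sharing a stream must each stay at one poll per second.
print(min_polling_interval_ms(1))  # 200.0
print(min_polling_interval_ms(5))  # 1000.0
```

This is why 200-250 ms is the floor for a single consumer, and why the one-poll-per-second default is the safe choice when several applications share a stream.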
This feature is available now. If you are already running Kinesis applications you are already benefiting from it!
The user base for a successful mobile app or game can reach into the hundreds of thousands, millions, or even tens of millions. In order to generate, manage, and understand growth at this scale, a data-driven approach is a necessity. The Amazon Mobile Analytics service can be a big help here. You can include the AWS Mobile SDK in your app, configure purchase and custom events, and then track usage metrics and KPIs in the AWS Management Console.
The built-in metrics include daily and monthly active users, new users, session and revenue information, and retention (see my blog post, New AWS Mobile Services, for more information and a complete list of metrics). The metrics are visible from within the AWS Management Console.
Beyond the Console
As your application becomes increasingly successful, you may want to analyze the data in more sophisticated ways. Perhaps you want to run complex SQL queries on the raw data. Maybe you want to combine the data collected by the SDK with information that you captured from your backend or your website. Or, you might want to create a single, unified view of a user, even if they access the same app from more than one device.
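As a toy illustration of that last idea, the sketch below folds per-device event records into one view per user. The record shape (`user_id`, `device_id`, `event`) is a hypothetical simplification, not the format the SDK actually emits:

```python
from collections import defaultdict

def unify_by_user(events):
    """Group per-device event records into a single view per user.
    The record shape used here is hypothetical and purely illustrative."""
    view = defaultdict(lambda: {"devices": set(), "events": []})
    for e in events:
        user = view[e["user_id"]]
        user["devices"].add(e["device_id"])
        user["events"].append(e["event"])
    return dict(view)

events = [
    {"user_id": "u1", "device_id": "phone", "event": "session_start"},
    {"user_id": "u1", "device_id": "tablet", "event": "purchase"},
    {"user_id": "u2", "device_id": "phone", "event": "session_start"},
]
unified = unify_by_user(events)
print(sorted(unified["u1"]["devices"]))  # ['phone', 'tablet']
```

Once the raw events are in Amazon Redshift, the same kind of grouping becomes a simple SQL aggregation instead of application code.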
Automatic Export to Amazon Redshift
Today we are giving you the ability to automatically export your mobile analytics event data to Amazon Redshift. This is in addition to the existing option to export the data to Amazon Simple Storage Service (S3); the S3 bucket serves as a waypoint between Amazon Mobile Analytics and Amazon Redshift, so Auto Export to S3 is enabled automatically when you enable Auto Export to Redshift.
To enable this feature, simply open up the Mobile Analytics Console and choose Auto Export to S3/Redshift from the menu:
Inspect the existing export settings and then click on Start Auto Export to Redshift:
The console will then ask you for the information that it needs to have in order to create the Amazon Redshift cluster:
You can choose to include attributes and metrics from your custom events and you can use Amazon CloudWatch to monitor the auto-export process:
You can also set the advanced options for the Amazon Redshift cluster:
When you are ready to go, simply click on the Create Export button:
You can see the export configuration for each of your mobile apps in the Console:
Inside the Stack
You can visit the CloudFormation Console to learn more about the stack and its components. The EC2 instance and the Amazon Redshift cluster are launched within a freshly created VPC (Virtual Private Cloud) and can be accessed using the IP address(es) specified in the Advanced Options.
Because the S3 bucket serves as a waypoint, you can stop the export to Amazon Redshift at any time and then restart it again later. This will repopulate the cluster with all of the historical data that was already exported to S3.
You can visit the Redshift Console to learn more about your cluster:
This feature is available now and you can start using it today. You will pay the usual charges for S3 storage, the EC2 instance, and the Amazon Redshift cluster. You can turn on the Auto Export feature from your Amazon Mobile Analytics Console. To learn more, visit the Mobile Analytics page and check out the documentation.
Let’s take a quick look at what happened in AWS-land last week:
- Tuesday, March 3 – Webinar – Loading and Analyzing Behavioral Data in Amazon Redshift – APN Partner Segment.
- Thursday, March 5 – Webinar – Oracle EBS Migration from a Hosted, Third-Party Data Center to AWS – APN Partner Apps Associates.
- Tuesday, March 10 – Live Event (San Francisco) – AWS Bootcamp – Getting Started with AWS.
- Tuesday, March 10 – Live Event (San Francisco) – Mobile Developer Loft Talks from Votap, VSCO, Crittercism, and Quixey.
- Wednesday, March 11 – Live Event (San Francisco) – Spark-as-a-Service, from AWS Partner Qubole.
- Monday, March 16 – Live Event (San Francisco) – AWS Essentials.
- Wednesday, March 18 – Webinar – Learn how Echo360 Moved Workloads to AWS with Zadara Storage – Echo360 and APN Partner Zadara Storage.
- Monday, March 23 – Live Event (San Francisco) – AWS Bootcamp – Getting Started with AWS.
- Tuesday, March 24 – Live Event (San Francisco) – AWS Loft Pitch Event and Spring Social.
- AWS Summits.
- Cloud Architects with Passion (Cloud Academy).
- Senior Site Reliability Engineer (Bashton – Manchester (UK), remote option).
- DevOps Engineer (MuleSoft – San Francisco).
- DevOps Engineer (Vidku – Minneapolis, focus on scaling and security, mobile skills appreciated).
- Junior Technology Advisor (ITM – Stuttgart, Germany).
- Senior Systems Administrator (Cobb Systems Group – Germantown, Maryland).
- DevOps Systems Administrator (Peaksware – Boulder, Colorado).
- SysOps / System Reliability Engineer (Capside – Barcelona, Spain).
- Cloud Optimization Consultant (CloudReach – London (UK)).
- Senior DevOps Engineer (ProQuest – Ann Arbor, Michigan).
- Cloud Developer (CloudReach – Edinburgh, Scotland).
- Cloud Systems Engineer (CloudReach – Ontario, Canada).
- Cloud Systems Engineer (CloudReach – Montreal, Canada).
- Cloud / Linux Systems Administrator (Ceilingest – Barcelona, Spain).
- AWS Careers.
We’re adding four new checks to AWS Trusted Advisor. As you may know, AWS Trusted Advisor inspects your AWS environment and looks for ways to save money, increase performance and reliability, and close security gaps. Today’s checks are for Elastic Load Balancing, with a focus on security and fault tolerance.
The following new checks are designed to help you improve the security profile of your Elastic Load Balancers:
ELB Listener Security – This check looks for load balancers that do not use recommended security configurations or protocols. It checks to see if the latest version of applicable security policies are in place and verifies that only recommended ciphers and protocols are used.
ELB Security Groups – This check looks for load balancers that do not have a security group, or that have a security group which allows access to ports that are not configured for the load balancer.
Fault Tolerance Checks
The following new checks are designed to help you make your Elastic Load Balancing configuration more fault tolerant:
Cross-Zone Load Balancing – This check looks for load balancers that do not have cross-zone load balancing enabled. This feature makes it easier for you to deploy and manage applications that run across more than one Availability Zone.
ELB Connection Draining – This check looks for load balancers that do not have connection draining enabled. With this feature enabled, the load balancer will stop sending new requests to instances that are deregistering (in-flight requests will continue to be served).
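If you want to apply the same two fault-tolerance tests programmatically, here is a minimal sketch that evaluates a load balancer's attributes locally. The dictionary layout is modeled on the shape of an ELB DescribeLoadBalancerAttributes response, but treat that as an assumption and verify it against the API reference:

```python
def fault_tolerance_findings(attributes: dict) -> list:
    """Flag the two conditions the new Trusted Advisor fault-tolerance
    checks look for. `attributes` is assumed to follow the shape of an
    ELB DescribeLoadBalancerAttributes response."""
    findings = []
    if not attributes.get("CrossZoneLoadBalancing", {}).get("Enabled", False):
        findings.append("cross-zone load balancing disabled")
    if not attributes.get("ConnectionDraining", {}).get("Enabled", False):
        findings.append("connection draining disabled")
    return findings

# Example: draining is on, but cross-zone balancing is off.
attrs = {
    "CrossZoneLoadBalancing": {"Enabled": False},
    "ConnectionDraining": {"Enabled": True, "Timeout": 300},
}
print(fault_tolerance_findings(attrs))  # ['cross-zone load balancing disabled']
```

Trusted Advisor runs equivalent logic across all of your load balancers for you; a sketch like this is only useful if you want the same signal inside your own tooling.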
These new checks are available now and you can start to benefit from them today!
A few months ago, I discussed 20 new features from Amazon Redshift, our petabyte-scale, fully managed data warehouse service that lets you get started for free and costs as little as $1,000 per TB per year.
The Amazon Redshift team has released over 100 new features since the launch, with a focus on price, performance, and ease of use. Customers are continuing to unlock powerful analytics using the service, as you can see from recent posts by IMS Health, Phillips, and GREE.
My colleague Tina Adams sent me a guest post to share some more Amazon Redshift updates. I’ll let her take over from here!
I am happy to be able to announce two new Amazon Redshift features today!
The first, custom ODBC and JDBC drivers, now makes it easier and faster to connect to and query Amazon Redshift from your BI tool of choice. The second, Query Visualization in the Console, helps you optimize your queries to take full advantage of Amazon Redshift’s Massively Parallel Processing (MPP), columnar architecture.
Custom Amazon Redshift Drivers
We have launched custom JDBC and ODBC drivers optimized for use with Amazon Redshift, making them easier to use, more reliable, and more performant than drivers available on PostgreSQL.org.
Informatica, Microstrategy, Pentaho, Qlik, SAS, and Tableau will be supporting the new drivers with their BI and ETL solutions. Note that Amazon Redshift will continue to support the latest PostgreSQL ODBC drivers as well as JDBC 8.4-703, although JDBC 8.4-703 is no longer being updated. If you need to distribute these drivers to your customers or other third parties, please contact us at email@example.com so that we can arrange an appropriate license to allow this.
Our JDBC driver features JDBC 4.1 and 4.0 support, up to a 35% performance gain over open source options, keep alive by default, and improved memory management. Specifically, you can set the number of rows to hold in memory and control memory consumption on a statement by statement basis.
Our ODBC driver is available for Linux, Windows, and Mac OS X. The driver features ODBC 3.8 support, better Unicode data and password handling, default keep alive, and a single-row mode that is more memory efficient than using DECLARE/FETCH. The driver is also backwards compatible with ODBC 2.x, supporting both Unicode and 64-bit applications. The driver includes software developed by the OpenSSL Project for use in the OpenSSL Toolkit.
Query Visualization in the Console
The Console now helps you visualize the time Amazon Redshift spent processing different parts of your query, helping you optimize complex queries more quickly and easily. By going to the Actual tab of the new Query Execution Details section, you can view processing times tied to each stage or “plan node” (e.g. merge, sort, join) of a query execution plan.
This information is pulled from system tables and views, such as STL_EXPLAIN and SVL_QUERY_REPORT. In addition to identifying the parts of your query that took a long time to run, you can also see skew in execution times across your cluster. Plan nodes that caused an alert in the SVL_ALERT_EVENT_LOG system view will show a red exclamation, which you can click to see a recommended solution. For more information please see Analyzing Query Execution.
You can also click on each plan node to view the underlying steps and a detailed comparison of estimated and actual execution time:
There’s a lot more to come from Amazon Redshift, so stay tuned. To get the latest feature announcements, log in to the Amazon Redshift Forum and subscribe to the Amazon Redshift Announcements thread. You can also use the Developer Guide History and the Management Guide History to track product updates.
— Tina Adams, Senior Product Manager
After some fits and starts, I am now recording and producing the AWS Podcast on a regular basis. I am committed to releasing at least one new episode every week, with a stretch goal of two!
Last week I spent a couple of days at the AWS Pop-up Loft in San Francisco!
With my trusty Zoom H5 recorder nearby, I sat down in the (relatively) quiet basement of the Loft and interviewed a total of six startups and AWS partners. I’ll be editing and publishing them as quickly as possible.
In one of those conversations I sat down with Stefano of Cloud Academy. We talked about what Cloud Academy does, and how it relates to AWS. We also discussed his background, his motivation for founding the company, and his experience coming to the US from Italy.
You can listen to the full interview to learn more. At the end of the interview, Stefano announced a special promotion for fans of the AWS Podcast. Simply visit http://promo.cloudacademy.com/ to receive a 30% discount on the Cloud Academy PRO Plan.
The 64-bit Unbreakable Enterprise Kernel was designed to provide scalability, reliability, and performance improvements for demanding enterprise applications, including Enterprise Oracle workloads.
The AMI (HVM) makes use of EBS volumes for storage and is recommended for use on the c3.large instance type (it can also be run on many other instance types).
Oracle Linux is available today in the US East (Northern Virginia), US West (Oregon), US West (Northern California), Europe (Ireland), Asia Pacific (Singapore), Asia Pacific (Sydney), Asia Pacific (Tokyo), and South America (Brazil) regions. Software usage fees start at $0.06 / hour.