AWS Official Blog

  • EC2 Instance Update – X1 (SAP HANA) & T2.Nano (Websites)

    by Jeff Barr | in Amazon EC2, re:Invent

    AWS customers love to share their plans and their infrastructure needs with us. We, in turn, love to listen and to do our best to meet those needs. Our track record here should tell you a lot about our ability to listen to our customers and to respond with an increasingly broad range of instances (check out the EC2 Instance History for a detailed look).

    Lately, we have been hearing two types of requests, both driven by some important industry trends:

    • On the high end, many of our enterprise customers are clamoring for instances that have very large amounts of memory. They want to run SAP HANA and other in-memory databases, generate analytics in real time, process giant graphs using Neo4j or Titan, or create enormous caches.
    • On the low end, other customers need a little bit of processing power to host dynamic websites that usually get very modest amounts of traffic, or to run their microservices or monitoring systems.

    In order to meet both of these needs, we are planning to launch two new EC2 instances in the coming months. The upcoming X1 instances will have loads of memory; the t2.nano will provide that little bit of processing power, along with bursting capabilities similar to those of its larger siblings.

    X1 – Tons of Memory
    X1 instances will feature up to 2 TB of memory, a full order of magnitude larger than the current generation of high-memory instances. These instances are designed for demanding enterprise workloads including production installations of SAP HANA, Microsoft SQL Server, Apache Spark, and Presto.

    The X1 instances will be powered by up to four Intel® Xeon® E7 processors. The processors have high memory bandwidth and large L3 caches, both designed to support high-performance, memory-bound applications. With over 100 vCPUs, these instances will be able to handle highly concurrent workloads with ease.

    We expect to have the X1 available in the first half of 2016. I’ll share pricing and other details at launch time.

    T2.Nano – A Little (Burstable) Processing Power
    The T2 instances provide a baseline level of processing power, along with the ability to save up unused cycles (“CPU Credits”) and use them when the need arises (read about New Low Cost EC2 Instances with Burstable Performance to learn more). We launched the t2.micro, t2.small, and t2.medium a little over a year ago. The burstable model has proven to be extremely popular with our customers. It turns out that most of them never actually consume all of their CPU Credits and are able to run at full core performance. We extended this model with the introduction of t2.large just a few months ago.

    The next step is to go in the other direction. Later this year we will introduce the t2.nano instance. You’ll get 1 vCPU and 512 MB of memory, and the ability to run at full core performance for over an hour on a full credit balance. Each newly launched t2.nano starts out with sufficient CPU Credits to allow you to get started as quickly as possible.

    Due to the burstable performance, these instances are going to be a great fit for websites that usually get modest amounts of traffic. During those quiet times, CPU Credits will accumulate, providing a reserve that can be drawn upon when traffic surges.
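
    If you want to keep an eye on an instance’s credit reserve, the CPUCreditBalance metric in CloudWatch shows how much has accumulated. Here’s a minimal sketch using the AWS CLI; the instance ID and time range are placeholders:

    $ aws cloudwatch get-metric-statistics \
      --namespace AWS/EC2 \
      --metric-name CPUCreditBalance \
      --dimensions Name=InstanceId,Value=i-1234abcd \
      --start-time 2015-10-07T00:00:00Z --end-time 2015-10-08T00:00:00Z \
      --period 300 --statistics Average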

    Again, I’ll share more info as we get closer to the launch!


  • Amazon Inspector – Automated Security Assessment Service

    by Jeff Barr | in Amazon Inspector, re:Invent

    As systems, configurations, and applications become more and more complex, detecting potential security and compliance issues can be challenging. Agile development methodologies can shorten the time between “code complete” and “code tested and deployed,” but can occasionally allow vulnerabilities to be introduced by accident and overlooked during testing. Also, many organizations do not have enough security personnel on staff to perform time-consuming manual checks on individual servers and other resources.

    New Amazon Inspector
    Today we are announcing a preview of the new Amazon Inspector. As the name implies, it analyzes the behavior of the applications that you run in AWS and helps you to identify potential security issues.

    Inspector works on an application-by-application basis. You start by defining a collection of AWS resources that make up your application:

    Then you create and run a security assessment of the application:

    The EC2 instances and other AWS resources that make up your application are identified by tags. When you create the assessment, you also define a duration (15 minutes, 1 / 8 / 12 hours, or 1 day).

    During the assessment, an Inspector Agent running on each of the EC2 instances that play host to the application monitors network, file system, and process activity. It also collects other information including details of communication with AWS services, use of secure channels, network traffic between instances, and so forth. This information provides Inspector with a complete picture of the application and its potential security or compliance issues.

    After the data has been collected, it is correlated, analyzed, and compared to a set of built-in security rules. The rules include checks against best practices, common compliance standards, and vulnerabilities and represent the collective wisdom of the AWS security team. The members of this team are constantly on the lookout for new vulnerabilities and best practices, which they codify into new rules for Inspector.

    The initial launch of Inspector will include the following sets of rules:

    • Common Vulnerabilities and Exposures
    • Network Security Best Practices
    • Authentication Best Practices
    • Operating System Security Best Practices
    • Application Security Best Practices
    • PCI DSS 3.0 Assessment

    Issues identified by Inspector (we call them “findings”) are gathered together and grouped by severity in a comprehensive report.

    You can access the Inspector from the AWS Management Console, AWS Command Line Interface (CLI), or API.
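
    If you prefer the command line, the flow looks roughly like this (a hedged sketch; the command names and options shown here are assumptions and may not match the preview API exactly, and the ARNs are placeholders):

    $ aws inspector create-resource-group \
      --resource-group-tags key=Name,value=MyWebApp
    $ aws inspector create-assessment-target \
      --assessment-target-name MyWebApp --resource-group-arn RESOURCE_GROUP_ARN
    $ aws inspector create-assessment-template \
      --assessment-target-arn TARGET_ARN --assessment-template-name OneHourRun \
      --duration-in-seconds 3600 --rules-package-arns RULES_PACKAGE_ARN
    $ aws inspector start-assessment-run --assessment-template-arn TEMPLATE_ARN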

    More to Come
    I plan to share more information about Inspector shortly after re:Invent wraps up and I have some time to catch my breath, so stay tuned!

    — Jeff;

  • AWS Config Rules – Dynamic Compliance Checking for Cloud Resources

    by Jeff Barr | in AWS Config, re:Invent

    The flexible, dynamic nature of the AWS cloud gives developers and admins the flexibility to launch, configure, use, and terminate processing, storage, networking, and other resources as needed. In any fast-paced agile environment, security guidelines and policies can be overlooked in the race to get a new product to market before the competition.

    Imagine that you had the ability to verify that existing and newly launched AWS resources conformed to your organization’s security guidelines and best practices without creating a bureaucracy or spending your time manually inspecting cloud resources.

    Last year I announced that you could Track AWS Resource Configurations with AWS Config. In that post I showed you how AWS Config captured the state of your AWS resources and the relationships between them. I also discussed Config’s auditing features, including the ability to select a resource and then view a timeline of its configuration changes.

    New AWS Config Rules
    Today we are extending Config with a powerful new rule system. You can use existing rules from AWS and from partners, and you can also define your own custom rules. Rules can be targeted at specific resources (by id), specific types of resources, or at resources tagged in a particular way. Rules are run when those resources are created or changed, and can also be evaluated on a periodic basis (hourly, daily, and so forth).

    Rules can look for any desirable or undesirable condition. For example, you could:

    • Ensure that EC2 instances launched in a particular VPC are properly tagged.
    • Make sure that every instance is associated with at least one security group.
    • Check to make sure that port 22 is not open in any production security group.

    Each custom rule is simply an AWS Lambda function. When the function is invoked in order to evaluate a resource, it is provided with the resource’s Configuration Item. The function can inspect the item and can also make calls to other AWS API functions as desired (based on permissions granted via an IAM role, as usual). After the Lambda function makes its decision (compliant or not), it calls the PutEvaluations function to record the decision and returns.

    The results of all of these rule invocations (which you can think of as compliance checks) are recorded and tracked on a per-resource basis and then made available to you in the AWS Management Console. You can also access the results in a report-oriented form, or via the Config API.
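
    For example, once a rule has been evaluated, you can pull the results from the command line. Here’s a hedged sketch that queries the instances-in-vpc rule used later in this post:

    $ aws configservice describe-compliance-by-config-rule \
      --config-rule-names instances-in-vpc
    $ aws configservice get-compliance-details-by-config-rule \
      --config-rule-name instances-in-vpc --compliance-types NON_COMPLIANT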

    Let’s take a quick tour of AWS Config Rules, with the proviso that some of what I share with you will undoubtedly change as we progress toward general availability. As usual, we will look forward to your feedback and will use it to shape and prioritize our roadmap.

    Using an Existing Rule
    Let’s start by using one of the rules that’s included with Config. I open the Config Console and click on Add Rule:

    I browse through the rules and decide to start with instances-in-vpc. This rule verifies that an EC2 instance belongs to a VPC, with the option to check that it belongs to a specific VPC. I click on the rule and customize it as needed:

    I have a lot of choices here. The Trigger type tells Config to run the rule when the resource is changed, or periodically. The Scope of changes tells Config which resources are of interest. The scope can be specified by resource type (with an optional identifier), by tag name, or by a combination of tag name and value. If I am checking EC2 instances, I can trigger on any of the following:

    • All EC2 instances.
    • Specific EC2 instances, identified by a resource identifier.
    • All resources tagged with the key “Department.”
    • All resources tagged with the key “Stage” and the value “Prod.”

    The Rule parameters setting allows me to pass additional key/value pairs to the Lambda function. The parameter names, and their meaning, will be specific to the function. In this case, supplying a value for the vpcid parameter tells the function to verify that the EC2 instance is running within the specified VPC.

    The rule goes into effect after I click on Save. When I return to the Rules page I can see that my AWS configuration is now noncompliant:

    I can investigate the issue by examining the Config timeline for the instance in question:

    It turns out that this instance has been sitting around for a while (truth be told I forgot about it). This is a perfect example of how useful the new Config Rules can be!

    I can also use the Config Console to look at the compliance status of all instances of a particular type:

    Creating a New Rule
    I can create a new rule using any language supported by Lambda. The rule receives the Configuration Item and the rule parameters that I mentioned above, and can implement any desired logic.

    Let’s look at a couple of excerpts from a sample rule. The rule applies to EC2 instances, so it checks to see if it was invoked on one:

    function evaluateCompliance(configurationItem, ruleParameters) {
        if (configurationItem.resourceType !== 'AWS::EC2::Instance') {
            return 'NOT_APPLICABLE';
        } else {
            var securityGroups = configurationItem.configuration.securityGroups;
            var expectedSecurityGroupId = ruleParameters.securityGroupId;
            if (hasExpectedSecurityGroup(expectedSecurityGroupId, securityGroups)) {
                return 'COMPLIANT';
            } else {
                return 'NON_COMPLIANT';
            }
        }
    }

    If the rule was invoked on an EC2 instance, it checks to see if any one of a list of expected security groups is attached to the instance:

    function hasExpectedSecurityGroup(expectedSecurityGroupId, securityGroups) {
        for (var i = 0; i < securityGroups.length; i++) {
            var securityGroup = securityGroups[i];
            if (securityGroup.groupId === expectedSecurityGroupId) {
                return true;
            }
        }
        return false;
    }

    Finally, the rule stores the result of the compliance check by calling the Config API’s putEvaluations function:

    config.putEvaluations(putEvaluationsRequest, function (err, data) {
        if (err) {
            context.fail(err);     // report the error back to Lambda
        } else {
            context.succeed(data); // the evaluation was recorded
        }
    });
    The rule can record results for the item being checked or for any related item. Let’s say you are checking to make sure that an Elastic Load Balancer is attached only to a specific kind of EC2 instance. You could decide to report compliance (or noncompliance) for the ELB or for the instance, depending on what makes the most sense for your organization and your compliance model. You can do this for any resource type that is supported by Config.
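
    Each evaluation that the function submits names the resource type and identifier that the result applies to, which is what makes this flexibility possible. Here’s a hedged sketch of the equivalent CLI call; the instance ID is a placeholder and the result token normally comes from the event that Config passes to the Lambda function:

    $ aws configservice put-evaluations \
      --result-token "$RESULT_TOKEN" \
      --evaluations ComplianceResourceType=AWS::EC2::Instance,ComplianceResourceId=i-1234abcd,ComplianceType=COMPLIANT,OrderingTimestamp=2015-10-07T12:00:00Z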

    Here’s how I create a rule that references my Lambda function:
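
    The CLI equivalent is a call to put-config-rule. Here’s a hedged sketch; the rule name, Lambda function ARN, and security group ID are placeholders (a custom rule also needs a Lambda permission that allows config.amazonaws.com to invoke the function):

    $ aws configservice put-config-rule --config-rule '{
        "ConfigRuleName": "instance-has-expected-security-group",
        "Scope": { "ComplianceResourceTypes": [ "AWS::EC2::Instance" ] },
        "InputParameters": "{\"securityGroupId\": \"sg-12345678\"}",
        "Source": {
          "Owner": "CUSTOM_LAMBDA",
          "SourceIdentifier": "arn:aws:lambda:us-east-1:123456789012:function:CheckSecurityGroup",
          "SourceDetails": [ { "EventSource": "aws.config", "MessageType": "ConfigurationItemChangeNotification" } ]
        }
      }'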

    On the Way
    AWS Config Rules are being launched in preview form today and you can sign up now. Stay tuned for additional information!


    PS – re:Invent attendees can attend session SEC 314: Use AWS Config Rules to Improve Governance of Your AWS Resources (5:30 PM on October 8th in Palazzo K).

  • Amazon RDS Update – MariaDB is Now Available

    by Jeff Barr | in Amazon Relational Database Service, re:Invent

    We launched the Amazon Relational Database Service (RDS) almost six years ago, in October of 2009. The initial launch gave you the power to launch a MySQL database instance from the command line. From that starting point we have added a multitude of features, along with support for the SQL Server, Oracle Database, PostgreSQL, and Amazon Aurora databases. We have made RDS available in every AWS region, and on a very wide range of database instance types. You can now run RDS in a geographic location that is well-suited to the needs of your user base, on hardware that is equally well-suited to the needs of your application.

    Hello, MariaDB
    Today we are adding support for the popular MariaDB database, beginning with version 10.0.17. This engine was forked from MySQL in 2009, and has developed at a rapid clip ever since, adding support for two storage engines (XtraDB and Aria) and other leading-edge features. Based on discussions with potential customers, some of the most attractive features include parallel replication and thread pooling.

    As is the case with all of the databases supported by RDS, you can launch MariaDB from the Console, AWS Command Line Interface (CLI), AWS Tools for Windows PowerShell, via the RDS API, or from a CloudFormation template.

    I started out with the CLI and launched my database instance like this:

    $ rds-create-db-instance jeff-mariadb-1 \
      --engine mariadb \
      --db-instance-class db.r3.xlarge \
      --db-subnet-group-name dbsub \
      --allocated-storage 100 \
      --publicly-accessible false \
      --master-username root --master-user-password PASSWORD

    Let’s break this down, option by option:

    • Line 1 runs the rds-create-db-instance command and specifies the name (jeff-mariadb-1) that I have chosen for my instance.
    • Line 2 indicates that I want to run the MariaDB engine, and line 3 says that I want to run it on a db.r3.xlarge instance type.
    • Line 4 points to the database subnet group that I have chosen for the database instance. This group lists the network subnets within my VPC (Virtual Private Cloud) that are suitable for my instance.
    • Line 5 requests 100 gigabytes of storage, and line 6 specifies that I don’t want the database instance to have a publicly accessible IP address.
    • Finally, line 7 provides the name and credentials for the master user of the database.

    The command displays the following information to confirm my launch:

    DBINSTANCE  jeff-mariadb-1  db.r3.xlarge  mariadb  100  root  creating  1  ****  db-QAYNWOIDPPH6EYEN6RD7GTLJW4  n  10.0.17  general-public-license  n  standard  n
          VPCSECGROUP  sg-ca2071af  active
    SUBNETGROUP  dbsub  DB Subnet for Testing  Complete  vpc-7fd2791a
          SUBNET  subnet-b8243890  us-east-1e  Active
          SUBNET  subnet-90af64e7  us-east-1b  Active
          SUBNET  subnet-b3af64c4  us-east-1b  Active
          PARAMGRP  default.mariadb10.0  in-sync
          OPTIONGROUP  default:mariadb-10-0  in-sync

    The RDS CLI includes a full set of powerful, high-level commands, all documented here. For example, I can create read replicas (rds-create-db-instance-read-replica) and take snapshot backups (rds-create-db-snapshot) in minutes.
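
    Here’s a quick sketch of the snapshot step, using the newer unified AWS CLI as a hedged equivalent; the snapshot name is a placeholder:

    $ aws rds create-db-snapshot \
      --db-instance-identifier jeff-mariadb-1 \
      --db-snapshot-identifier jeff-mariadb-1-first-snapshot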

    Here’s how I would launch the same instance using the AWS Management Console:

    Get Started Today
    You can launch RDS database instances running MariaDB today in all AWS regions. Supported database instance types include M3 (standard), R3 (memory optimized), and T2 (standard).


  • AWS Import/Export Snowball – Transfer 1 Petabyte Per Week Using Amazon-Owned Storage Appliances

    by Jeff Barr | in AWS Import/Export, re:Invent

    Even though high speed Internet connections (T3 or better) are available in many parts of the world, transferring terabytes or petabytes of data from an existing data center to the cloud remains challenging. Many of our customers find that the data migration aspect of an all-in move to the cloud presents some surprising issues. In many cases, these customers are planning to decommission their existing data centers after they move their apps and their data; in such a situation, upgrading their last-generation networking gear and boosting connection speeds makes little or no sense.

    We launched the first-generation AWS Import/Export service way back in 2009. As I wrote at the time, “Hard drives are getting bigger more rapidly than internet connections are getting faster.” I believe that remains the case today. In fact, the rapid rise in Big Data applications, the emergence of global sensor networks, and the “keep it all just in case we can extract more value later” mindset have made the situation even more dire.

    The original AWS Import/Export model was built around devices that you had to specify, purchase, maintain, format, package, ship, and track. While many AWS customers have used (and continue to use) this model, some challenges remain. For example, it does not make sense for you to buy multiple expensive devices as part of a one-time migration to AWS. In addition to data encryption requirements and device durability issues, creating the requisite manifest files for each device and each shipment adds additional overhead and leaves room for human error.

    New Data Transfer Model with Amazon-Owned Appliances
    After gaining significant experience with the original model, we are ready to unveil a new one, formally known as AWS Import/Export Snowball. Built around appliances that we own and maintain, the new model is faster, cleaner, simpler, more efficient, and more secure. You don’t have to buy storage devices or upgrade your network.

    Snowball is designed for customers that need to move lots of data (generally 10 terabytes or more) to AWS on a one-time or recurring basis. You simply request one or more appliances from the AWS Management Console and wait a few days for them to be delivered to your site. If you need to import a large amount of data, you can order several Snowball appliances and run them in parallel.

    The new Snowball appliance is purpose-built for efficient data storage and transfer. It is rugged enough to withstand a 6 G jolt, and (at 50 lbs) light enough for one person to carry. It is entirely self-contained, with 110 Volt power and a 10 Gbps network connection on the back and an E Ink display/control panel on the front. It is weather-resistant and serves as its own shipping container; it can go from your mail room to your data center and back again with no packing or unpacking hassle to slow things down. In addition to being physically rugged and tamper-resistant, AWS Snowball detects tampering attempts. Here’s what it looks like:

    Once you receive a Snowball, you plug it in, connect it to your network, configure the IP address (you can use your own or the device can fetch one from your network using DHCP), and install the AWS Snowball client. Then you return to the Console to download the job manifest and a 25-character unlock code. With all of that info in hand, you start the appliance with one command:

    $ snowball start -i DEVICE_IP -m PATH_TO_MANIFEST -u UNLOCK_CODE

    At this point you are ready to copy data to the Snowball. The data will be 256-bit encrypted on the host and stored on the appliance in encrypted form. The appliance can be hosted on a private subnet with limited network access.

    From there you simply copy up to 50 terabytes of data to the Snowball, disconnect it (a shipping label will automatically appear on the E Ink display), and ship it back to us for ingestion. We’ll decrypt the data and copy it to the S3 bucket(s) that you specified when you made your request. Then we’ll sanitize the appliance in accordance with National Institute of Standards and Technology Special Publication 800-88 (Guidelines for Media Sanitization).
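
    The copy itself is done with the Snowball client, using S3-style destinations that correspond to the bucket(s) in your job. As a hedged sketch (the local path and bucket name are placeholders, and the exact client options may differ):

    $ snowball cp --recursive /data/photos s3://my-import-bucket/photos/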

    At each step along the way, notifications are sent to an Amazon Simple Notification Service (SNS) topic and email address that you specify. You can use the SNS notifications to integrate the data import process into your own data migration workflow system.
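
    For example, you could subscribe an SQS queue to the job’s topic and let your workflow system consume each status change (a hedged sketch with placeholder ARNs; the queue’s policy must also allow SNS to deliver messages to it):

    $ aws sns subscribe \
      --topic-arn arn:aws:sns:us-east-1:123456789012:snowball-job-status \
      --protocol sqs \
      --notification-endpoint arn:aws:sqs:us-east-1:123456789012:migration-workflow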

    Creating an Import Job
    Let’s step through the process of creating an AWS Snowball import job from the AWS Management Console. I create a job by entering my name and address (or choosing an existing one if I have done this before):

    Then I give the job a name (mine is import-photos), and select a destination (an AWS region and one or more S3 buckets):

    Next, I set up my security (an IAM role and a KMS key to encrypt the data):

    I’m almost ready! Now I choose the notification options. I can create a new SNS topic and create an email subscription to it, or I can use an existing topic. I can also choose the status changes that are of interest to me:

    After I review and confirm my choices, the job becomes active:

    The next step (which I didn’t have time for in the rush to re:Invent) would be to receive the appliance, install it and copy my data over, and ship it back.

    In the Works
    We are launching AWS Import/Export Snowball with import functionality so that you can move data to the cloud. We are also aware of many interesting use cases that involve moving data the other way, including large-scale data distribution, and plan to address them in the future.

    We are also working on other enhancements including continuous, GPS-powered chain-of-custody tracking.

    Pricing and Availability
    There is a usage charge of $200 per job, plus shipping charges that are based on your destination and the selected shipment method. As part of this charge, you have up to 10 days (starting the day after delivery) to copy your data to the appliance and ship it out. Extra days are $15 each.

    You can import data to the US Standard and US West (Oregon) regions, with more on the way.


  • Amazon Kinesis Firehose – Simple & Highly Scalable Data Ingestion

    by Jeff Barr | in Amazon Kinesis, re:Invent

    Two years ago we introduced Amazon Kinesis, which we now call Amazon Kinesis Streams, to allow you to build applications that collect, process, and analyze streaming data with very high throughput. We don’t want you to have to think about building and running a fleet of ingestion servers or worrying about monitoring, scaling, or reliable delivery.

    Amazon Kinesis Firehose was purpose-built to make it even easier for you to load streaming data into AWS. You simply create a delivery stream, route it to an Amazon Simple Storage Service (S3) bucket and/or an Amazon Redshift table, and write records (up to 1000 KB each) to the stream. Behind the scenes, Firehose will take care of all of the monitoring, scaling, and data management for you.

    Once again (I never tire of saying this), you can spend more time focusing on your application and less time on your infrastructure.

    Inside the Firehose
    In order to keep things simple, Firehose does not interpret or process the raw data in any way. You simply create a delivery stream and write data records to it. After any requested compression (client-side) and encryption (server-side), the records are written to an S3 bucket that you designate. As my colleague James Hamilton likes to say (in other contexts), “It’s that simple.” You can even control the buffer size and the buffer interval for the stream if necessary.

    If your client code isolates individual logical records before sending them to Firehose, it can add a delimiter. Otherwise, you can identify record boundaries later, once the data is in the cloud.

    After your data is stored in S3, you have multiple options for analyzing and processing it. For example, you can attach an AWS Lambda function to the bucket and process the objects as they arrive. Or, you can point your existing Amazon EMR jobs at the bucket and process the freshest data, without having to make any changes to the jobs.

    You can also use Firehose to route your data to an Amazon Redshift cluster. After Firehose stores your raw data in S3 objects, it can invoke a Redshift COPY command on each object. This command is very flexible and allows you to import and process data in multiple formats (CSV, JSON, Avro, and so forth), isolate and store only selected columns, convert data from one type to another, and so forth.

    Firehose From the Console
    You can do all of this from the AWS Management Console, from the AWS Command Line Interface (CLI), or via the Firehose APIs.

    Let’s set up a delivery stream using the Firehose Console. I simply open it up and click on Create Delivery Stream. Then I give my stream a name, pick an S3 bucket (or create a new one), and set up an IAM role so that Firehose has permission to write to the bucket:

    I can configure the latency and compression for the delivery stream. I can also choose to encrypt the data using one of my AWS Key Management Service (KMS) keys:

    Once my stream is created, I can see it from the console.
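
    For those who prefer the command line, here’s a hedged sketch of creating a similar delivery stream with an S3 destination; the role and bucket ARNs are placeholders:

    $ aws firehose create-delivery-stream \
      --delivery-stream-name incoming-stream \
      --s3-destination-configuration RoleARN=arn:aws:iam::123456789012:role/firehose-delivery-role,BucketARN=arn:aws:s3:::my-firehose-bucket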

    Publishing to a Delivery Stream
    Here is some simple Java code to publish a record (the string “some data”) to my stream:

    PutRecordRequest putRecordRequest = new PutRecordRequest();
    putRecordRequest.setDeliveryStreamName("incoming-stream");
    String data = "some data" + "\n"; // add \n as a record separator
    Record record = new Record().withData(ByteBuffer.wrap(data.getBytes(StandardCharsets.UTF_8)));
    putRecordRequest.setRecord(record);
    firehoseClient.putRecord(putRecordRequest); // firehoseClient is a pre-configured AmazonKinesisFirehoseClient

    And here’s a CLI equivalent:

    $ aws firehose put-record --delivery-stream-name incoming-stream --record Data="some data\n"

    We also supply an agent that runs on Linux. It can be configured to watch one or more log files and to route them to Firehose.

    Monitoring Kinesis Firehose Delivery Streams
    You can monitor the CloudWatch metrics for each of your delivery streams from the Console:

    By the Numbers
    Individual delivery streams can scale to accommodate multiple gigabytes of data per hour. By default, each stream can support 2500 calls to PutRecord or PutRecordBatch per second and you can have up to 5 streams per AWS account (both of these values are administrative limits that can be raised upon request, so just ask if you need more).

    This feature is available now and you can start using it today. Pricing is based on the volume of data ingested via each Firehose.

    — Jeff;


  • Amazon QuickSight – Fast & Easy to Use Business Intelligence for Big Data at 1/10th the Cost of Traditional Solutions

    by Jeff Barr | in re:Invent

    Over the last couple of years, the process of collecting, uploading, storing, and processing data on AWS has become faster, simpler, and increasingly comprehensive. We have delivered a broad set of data-centric services that tackle many of the issues faced by our customers. For example:

    • Managing Databases is Painful and Difficult – Amazon Relational Database Service (RDS) addresses many of the pain points and provides many ease-of-use features.
    • SQL Databases do not Work Well at Scale – Amazon DynamoDB provides a fully managed, NoSQL model that has no inherent scalability limits.
    • Hadoop is Difficult to Deploy and Manage – Amazon EMR can launch managed Hadoop clusters in minutes.
    • Data Warehouses are Costly, Complex, and Slow – Amazon Redshift provides a fast, fully-managed petabyte-scale data warehouse at 1/10th the cost of traditional solutions.
    • Commercial Databases are Punitive and Expensive – Amazon Aurora combines the speed and availability of high-end commercial databases with the simplicity and cost-effectiveness of their open source siblings.
    • Streaming Data is Difficult to Capture – Amazon Kinesis facilitates real-time data processing of data streams at terabyte scale.

    With the services listed above as a base, many customers are ready to take the next step. They are able to collect, upload, process, and store the data. Now they want to analyze and visualize it, and they want to do it the AWS way—easily and cost-effectively at world scale!

    In the past, Business Intelligence required an incredible amount of undifferentiated heavy lifting. You had to pay for, set up and run the infrastructure and the software, manage scale (while users fret), and hire consultants at exorbitant rates to model your data. After all that your users were left to struggle with complex user interfaces for data exploration while simultaneously demanding support for their mobile devices. Access to NoSQL and streaming data? Good luck with that!

    Introducing QuickSight
    Today we are announcing Amazon QuickSight. You get very fast, easy to use business intelligence for your big data needs at 1/10th the cost of traditional on-premises solutions. This cool new product will be available in preview form later this month.

    After talking to many customers about their Business Intelligence (BI) needs, we believe that QuickSight will be able to handle many types of data-intensive workloads including ad targeting, customer segmentation, forecasting & planning, marketing & sales analytics, inventory & shipment tracking, IoT device stream management, and clickstream analysis. You’ve got the data and you’ve got the questions. Now you want the insights!

    QuickSight lets you get started in minutes. You log in, point to a data source, and begin to visualize your data. As you do so, you’ll benefit from the following features:

    Access to Data Sources – QuickSight can access data from many different sources, both on-premises and in the cloud. There’s built-in support for Redshift, RDS, Amazon Aurora, EMR, DynamoDB, Kinesis, S3, MySQL, Oracle, SQL Server, PostgreSQL, and flat files. Connectors allow access to data stored in third-party sources such as Salesforce.

    Fast Calculation – QuickSight is built around SPICE (the Super-fast, Parallel, In-memory Calculation Engine). We built it from the ground up to run in the cloud and to deliver a fast, interactive data visualization experience.

    Ease of Use – QuickSight auto-discovers your AWS data sources and makes it easy for you to connect to them. As you select tables and fields, it recommends the most appropriate types of graphs and other visualizations. You can share your visualizations with your colleagues and you can assemble several visualizations in order to tell a story with data. You can even embed your reports in applications and websites.

    Effortless Scale – QuickSight provides fast analytics and visualization while scaling to handle hundreds of thousands of users and terabytes of data per organization.

    Low Cost – All things considered, QuickSight will provide you with robust Business Intelligence at 1/10th the cost of on-premises solutions from the old guard.

    Partner-Ready – QuickSight provides a simple SQL-like interface to enable BI tools from AWS Partners to access data stored in SPICE so that customers can use the BI tools they are familiar with and get even faster performance at scale. We’re already working with several partners including Domo, Qlik, Tableau, and Tibco. I’ll have more news on that front before too long.

    Take the QuickSight Tour
    Let’s take a tour through QuickSight. As a quick reminder, we’re still putting the finishing touches on the visuals and the images below are subject to change. Each organization will have their own QuickSight link. After the first user from an organization logs in, they have the ability to invite their coworkers.

    After I log in, QuickSight discovers available data sources and lets me connect to the one I want with a couple of clicks:

    After that I select a table from the data source:

    And then the field(s) of interest:

    I select the product category and sales amount in order to view sales by category:

    The Fitness value looks interesting and I want to learn more! I simply click on it and choose to focus:

    And that’s what I see:

    Now I want to know more about what’s going on, so I drill in to the sub-categories with a click:

    And here’s what I see. It looks like weight accessories, treadmills, and fitness monitors are my best sellers:

    After I create the visualization, I can save it to a storyboard:


    This quick tour has barely scratched the surface of what QuickSight can do, but I do want to keep some surprises in reserve for the official launch (currently scheduled for early 2016). Between now and then I plan to share more details about the mobile apps, the storyboards, and so forth.

    QuickSight Pricing
    I have alluded to expensive, inflexible, old-school pricing a couple of times already. We want to make QuickSight affordable to organizations of all sizes. There will be two service options, Standard and Enterprise. The Enterprise Edition provides up to twice the throughput & fine-grained access control, supports encryption at rest, integrates with your organization’s Active Directory, and includes a few other goodies as well. Pricing is as follows:

    • Standard Edition:
      • $12 per user per month with no usage commitment.
      • $9 per user per month with a one-year usage commitment.
      • $0.25 / gigabyte / month for SPICE storage (beyond 10 gigabytes).
    • Enterprise Edition:
      • $24 per user per month with no usage commitment.
      • $18 per user per month with a one-year usage commitment.
      • $0.38 / gigabyte / month for SPICE storage (beyond 10 gigabytes).

    Coming Soon
    If you are interested in evaluating QuickSight for your organization, you can sign up for the preview today. We’ll be opening it up to an initial set of users later this month, and scaling up after that. As usual, we’ll start in the US East (Northern Virginia) region, expand quickly to US West (Oregon) and Europe (Ireland), and then shoot for the remaining AWS regions in time for the full-blown launch in 2016.


  • AWS Global Partner Summit – Report from re:Invent 2015

    by Jeff Barr | in AWS Partner Network, re:Invent

    The AWS Global Partner Summit just wrapped up! This annual event is held the day before the main AWS re:Invent keynotes, breakout sessions, and other events.

    During the event, members of the AWS Partner Network (APN) had the opportunity to hear from senior AWS leaders. Traditionally, the talks have provided insights into the future direction of AWS and of the APN itself. The following leaders spoke at this year’s event:

    • Terry Wise – Vice President, Channels & Alliances.
    • Adam Selipsky – Vice President, Sales & Marketing.
    • Scott Wiltamuth – Vice President, Developer Productivity & Tools.
    • Andy Jassy – Senior Vice President, AWS.

    Members of the AWS Partner Network (APN) also got to hear from AWS customers and partners:

    • Colin Bodell – CTO of AWS customer Time, Inc (case study).
    • Pam Murphy – COO of AWS partner Infor (case study).

    Summit Theme
    The Summit theme this year was “The Power of Transformation.” This theme was chosen because many of our customers and partners report that the transformation enabled by the AWS Cloud goes beyond IT and beyond business as usual, to the extent that it is transforming their daily lives in profound ways. This includes the ability to harness renewable energy, make fundamental advances in life sciences, explore outer space, and fuel the digital media revolution.

    APN Program Updates
    We made lots of announcements at this year’s Summit. Here’s a summary, along with links to more information on the AWS Partner Network Blog.

    • New DevOps Competency for Consulting and Technology partners (launched with 27 qualified partners).
    • Updated APN Benefits and Requirements for Consulting and Technology partners.
    • Over 40 partners have passed the rigorous MSP third-party validation audit.
    • We now have over 200 partners in our SaaS program.
    • We plan to launch new Cloud Migration and IoT Competencies in the coming months.
    • We announced our 2016 APN Program Benefits and Requirements, including an emphasis on benefits for Competency and MSP Partners.

    New Premier Consulting Partners
    We announced that the following members of the APN have now ascended to Premier Consulting Partner status (congratulations):

    • North America:
      • Apps Associates
      • Cloud Technology Partners
      • Mobiquity
      • Pariveda Solutions
      • REAN Cloud
      • TriNimbus
    • Latin America:
      • CredibiliT
      • Soluciones Orion
    • Asia / Pacific:
      • Blazeclan
      • GS Neotek
      • Megazone
    • Japan:
      • TIS Inc.
    • Europe, Middle East, Africa:
      • Edifixio
      • Latinedge – CloudMas

    To learn more, read 2016 AWS Premier Consulting Partners.

    New All-In Technology Partners
    An all-in technology partner makes a public commitment to AWS as their strategic cloud platform. Here are the newest all-in technology partners:

    • Ayla Networks
    • eFront
    • Freshdesk
    • Juniper
    • TechnologyOne

    To learn more, read New All-in Technology Partners Announced at re:Invent.

    AWS Leadership Partner Recognition
    We measure our personal progress as Amazon employees by evaluating how well we understand, adhere to, and demonstrate our leadership principles. As part of the Summit, we also recognize and offer our congratulations to select APN members who have exhibited superior performance with respect to four of these principles. This year we would like to recognize the following partners:

    • Customer Obsession – Tableau, Itoc, Day 1 Solutions, Blazeclan Technologies, Ambab Infotech Pvt. Ltd., Lemongrass, Dedlaus Prime, Pega, F5, Adobe, and Sophos.
    • Learn and be Curious – Hitachi Solutions, Accenture, NEC Corporation (Consulting), FUJITSU LIMITED, Cloudreach, Cognizant, Tata Consultancy Services (TCS), FPT Software (Singapore), GS Neotek (Korea), Clearscale, Rean, and Rackspace.
    • Think Big – SAS Institute, Freshdesk, Infor, Informatica, Wipro, Saison Information Systems, Epic, SAP, and Trend Micro.
    • Invent & Simplify – Alert Logic, Ansys, Aptible, Splunk, Twilio, 47 Lining, Flux7, Avid, Sumologic, Wowza, Chef, and Racemi.

    APN at re:Invent
    If you are in Las Vegas for re:Invent, be sure to visit the AWS Partner Pavilion. If you are a member of APN or aspire to be one, be sure to read the AWS Partners’ Guide to re:Invent 2015.

    I want to also highlight a great tool and resource that is going to be available at re:Invent – the interactive, self-paced AWS Partner Solutions Explorer.

    Based on your needs, the Explorer will search through our database of thousands of AWS Partners and guide you to AWS Partners at re:Invent that may be the best fit for you (based upon the AWS Partner’s APN Competencies, AWS Test Drives, AWS Marketplace AMIs, and AWS Quick Start Reference Deployments). If any additional questions arise, you can simply ask one of our AWS experts onsite for assistance. The AWS Partner Solutions Explorer will be located in the Artist Foyer on Level 2, the AWS Partner Pavilion, in the Executive Summit, and in the AWS Booth. Check it out!


  • New – EC2 Spot Blocks for Defined-Duration Workloads

    by Jeff Barr | in Amazon EC2, EC2 Spot Instances

    I do believe that there’s a strong evolutionary aspect to the continued development of AWS. Services start out simple and gain new features over time. Our customers start to use those features, provide us with ample feedback, and we respond by enhancing existing features and building new ones. As an example, consider the history of Amazon Elastic Compute Cloud (EC2) payment models. We started with On-Demand pricing, and then added Reserved Instances (further enhanced with three different options). We also added Spot instances and later enhanced them with the new Spot Fleet option. Here’s a simple evolutionary tree:

    Spot instances are a great fit for applications that are able to checkpoint and continue after an interruption, along with applications that might need to run for an indeterminate amount of time. They also work really well for stateless applications such as web and application servers and can offer considerable savings over On-Demand prices.

    Some existing applications are not equipped to generate checkpoints over the course of a multi-hour run. Many applications of this type are compute-intensive and (after some initial benchmarking) run in a predictable amount of time. Applications of this type often perform batch processing, encoding, rendering, modeling, analysis, or continuous integration.

    New Spot Block Model
    In order to make EC2 an even better fit for this type of defined-duration workload, you can now launch Spot instances that will run continuously for a finite duration (1 to 6 hours). Pricing is based on the requested duration and the available capacity, and is typically 30% to 45% less than On-Demand, with an additional 5% off during non-peak hours for the region. Spot blocks and Spot instances are priced separately; you can view the current Spot pricing to learn more.

    You simply submit a Spot instance request and use the new BlockDuration parameter to specify the number of hours you want your instance(s) to run, along with the maximum price that you are willing to pay. When Spot instance capacity is available for the requested duration, your instances will launch and run continuously for a flat hourly price. They will be terminated automatically at the end of the time block (you can also terminate them manually). This model is a good fit for situations where you have jobs that need to run continuously for up to 6 hours.

    Here’s how you would submit a request of this type using the AWS Command Line Interface (CLI):

    $ aws ec2 request-spot-instances \
      --block-duration-minutes 360 \
      --instance-count 2 \
      --spot-price "0.25" ...

    You can also do this by calling the RequestSpotInstances function (Console support is in the works).
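
    Once the request has been submitted, you can watch it become active and see which instances were launched (a hedged sketch; the request ID is a placeholder):

    $ aws ec2 describe-spot-instance-requests \
      --spot-instance-request-ids sir-abcd1234 \
      --query 'SpotInstanceRequests[].[State,Status.Code,InstanceId]'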

    Here’s the revised evolutionary tree:

    Available Now
    You can start to make use of Spot blocks today. To learn more, read about Using Spot Blocks.

    — Jeff;

  • Coming Soon – EC2 Dedicated Hosts

    by Jeff Barr | in Amazon EC2, re:Invent

    Sometimes business enables technology, and sometimes technology enables business!

    If you are migrating from an existing environment to AWS, you may have purchased volume licenses for software that is licensed for use on a server with a certain number of sockets or physical cores. Or, you may be required to run it on a specific server for a given period of time. Licenses for Windows Server, Windows SQL Server, Oracle Database, and SUSE Linux Enterprise Server often include this requirement.

    We want to make sure that you can continue to derive value from these licenses after you migrate to AWS. In general, we call this model Bring Your Own License, or BYOL. In order to do this while adhering to the terms of the license, you are going to need to control the mapping of the EC2 instances to the underlying, physical servers.

    Introducing EC2 Dedicated Hosts
    In order to give you control over this mapping, we are announcing a new model that we call Amazon EC2 Dedicated Hosts. This model will allow you to allocate an actual physical server (the Dedicated Host) and then launch one or more EC2 instances of a given type on it. You will be able to target and reuse specific physical servers and stay within the confines of your existing software licenses.

    In addition to allowing you to Bring Your Own License to the cloud to reduce costs, Amazon EC2 Dedicated Hosts can help you to meet stringent compliance and regulatory requirements, some of which require control and visibility over instance placement at the physical host level. In these environments, detailed auditing of changes is also a must; AWS Config will help out by recording all changes to your instances and your Amazon EC2 Dedicated Hosts.

    Using Dedicated Hosts
    You will start by allocating a Dedicated Host in a specific region and Availability Zone, and for a particular type of EC2 instance (we’ll have API, CLI, and Console support for doing this).

    Each host has room for a predefined number of instances of a particular type. For example, a specific host could have room for eight c3.xlarge instances (this is a number that I made up for this post). After you allocate the host, you can then launch up to eight c3.xlarge instances on it.

    You will have full control over placement. You can launch instances on a specific Amazon EC2 Dedicated Host or you can have EC2 place the instances automatically onto your Amazon EC2 Dedicated Hosts. Amazon EC2 Dedicated Hosts also support affinity so that Amazon EC2 Dedicated Host instances are placed on the same host even after they are rebooted or stopped and then restarted.
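
    As a purely illustrative sketch of that flow (the command names and options here are assumptions, since the API had not been finalized at announcement time; the AMI and host IDs are placeholders):

    $ aws ec2 allocate-hosts \
      --instance-type c3.xlarge --availability-zone us-east-1a --quantity 1
    $ aws ec2 run-instances \
      --image-id ami-12345678 --instance-type c3.xlarge \
      --placement Tenancy=host,HostId=h-0123456789abcdef0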

    With Dedicated Hosts, the same “cloudy” benefits that you get with using EC2 instances apply but you have additional controls and visibility at your disposal to address your requirements, even as they change.

    Purchase Options
    Amazon EC2 Dedicated Hosts will be available in Reserved and On-Demand form. In either case, you pay (or consume a previously purchased Reserved Dedicated Host) when you allocate the host, regardless of whether you choose to run instances on it or not.

    You will be able to bring your existing machine images to AWS using VM Import and the AWS Management Portal for vCenter. You can also find machine images in the AWS Marketplace and launch them on Amazon EC2 Dedicated Hosts using your existing licenses and you can make use of the Amazon Linux AMI and other Linux operating systems.

    Stay Tuned
    I’ll have more to say about this feature before too long. Stay tuned to the blog for details!