Amazon Web Services Blog

  • Amazon AppStream Now Supports Chrome Browser and Chromebooks

    16 Sep 2014 in Amazon AppStream | permalink

    As you might know from reading my earlier posts (Amazon AppStream - Deliver Streaming Applications from the Cloud and Amazon AppStream - Now Available to All Developers), Amazon AppStream gives you the power to build complex applications that run from simple devices, unconstrained by the compute power, storage, or graphical rendering capabilities of the device. As an example of what AppStream can do, read about the Eve Online Character Creator.

    Today we are extending AppStream with support for desktop Chrome browsers (Windows and Mac OS X) and Chromebooks. Developers of CAD, 3D modeling, medical imaging, and other types of applications can now build compelling, graphically intense applications that run on an even wider variety of desktops (Linux, Mac OS X, and Microsoft Windows) and mobile devices (Fire OS, Chromebooks, Android, and iOS). Even better, AppStream's cloud-based application hosting model obviates the need for large downloads, complex installation processes, and sophisticated graphical hardware on the client side. Developers can take advantage of GPU-powered rendering in the cloud and use other AWS services to host their application's backend in a cost-effective yet fully scalable fashion.

    Getting Started With AppStream
    The AppStream Chrome SDK (available via the AppStream Downloads page) contains the documentation and tools that you need in order to build AppStream-compatible applications. It also includes the AppStream Chrome Application. You can use it as-is to view and interact with AppStream streaming applications, or you can customize it (using HTML, JavaScript, and CSS) with custom launch parameters.

    The AppStream Chrome Application runs on Chrome OS version 37 and higher, on Chrome desktop browsers for Windows, Mac OS X, and Linux, and on Chromebooks. Chrome mobile and other HTML5 web browsers are not currently supported. The application is available in the Chrome Web Store (visit the AppStream Chrome App) and can be launched via chrome://apps.

    The AppStream SDK is available at no charge. As detailed on the AppStream Pricing page, you also have access to up to 20 hours of streaming per month for 12 months as part of the AWS Free Tier. You will also need to register for a Chrome Developer Account at a cost of $5 (paid to Google, not to AWS).

    -- Jeff;

  • AWS Week in Review - September 6, 2014

    15 Sep 2014 | permalink

    Let's take a quick look at what happened in AWS-land last week:

    Monday, September 8
    Tuesday, September 9
    Wednesday, September 10
    Thursday, September 11
    Friday, September 12

    Stay tuned for next week! In the meantime, follow me on Twitter and subscribe to the RSS feed.

    -- Jeff;

  • Now Hiring: Product Marketing Managers for the AWS Team

    11 Sep 2014 in Help Wanted | permalink

    Are you interested in a job that lets you combine your technical skills with your marketing savvy and your desire to communicate? If so, the Product Marketing Manager position may be a great fit for you. You'll work directly with the teams behind the full range of AWS services to create high-quality marketing deliverables that accurately describe the value propositions for their offerings.

    To learn more about this position, take a look at our new video, Product Marketing Opportunities at Amazon Web Services:

    In the video, several of my AWS colleagues talk about the Product Marketing Manager role -- what they do and how it benefits our customers. You'll get a peek behind the scenes (and into the hallways) and see what it is like to work on the AWS Team.

    We are hiring for multiple positions within this job category. While the specifics will vary from role to role, the job description for Product Marketing Manager - Amazon EC2 should give you a pretty good idea of the responsibilities that you would have and qualifications that we are looking for. To apply, simply email your resume to pmm@amazon.com.

    If this is not quite the role for you, don't give up yet! We are doing a lot of hiring right now; check out the AWS Careers page for a full list of open positions. Perhaps one of them is right for you!

    -- Jeff;

  • Search and Interact With Your Streaming Data Using the Kinesis Connector to Elasticsearch

    11 Sep 2014 in Amazon Kinesis | permalink

    My colleague Rahul Patil wrote a guest post to show you how to build an application that loads streaming data from Kinesis into an Elasticsearch cluster in real time.

    -- Jeff;


    The Amazon Kinesis team is excited to release the Kinesis connector to Elasticsearch! Using the connector, developers can easily write an application that loads streaming data from Kinesis into an Elasticsearch cluster in real time, reliably, and at scale.

    Elasticsearch is an open-source search and analytics engine. It indexes structured and unstructured data in real time. Kibana is Elasticsearch's data visualization engine; it is used by DevOps engineers and business analysts to set up interactive dashboards. Data in an Elasticsearch cluster can also be accessed programmatically using a RESTful API or application SDKs. You can use the CloudFormation template in our sample to quickly create an Elasticsearch cluster on Amazon Elastic Compute Cloud (EC2), fully managed by Auto Scaling.

    Wiring Kinesis, Elasticsearch, and Kibana
    Here's a block diagram to help you see how the pieces fit together:

    Using the new Kinesis Connector to Elasticsearch, you author an application to consume data from a Kinesis Stream and index the data into an Elasticsearch cluster. You can transform, filter, and buffer records before emitting them to Elasticsearch. You can also fine-tune Elasticsearch-specific indexing operations to add fields like time to live, version number, type, and id on a per-record basis. The flow of records is as illustrated in the diagram below.
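    For example, here is a minimal sketch of a custom transformer that tunes per-record index metadata. The class name and values are hypothetical, and it assumes that ElasticSearchObject exposes an (index, type, id, source) constructor and a setTtl setter along the lines of the sample code on GitHub, so verify against the sample before borrowing it:

    // A sketch only: MyRecordTransformer and its values are hypothetical,
    // and the ElasticSearchObject constructor and setter are assumed to
    // match the sample code on GitHub. Imports from the
    // amazon-kinesis-connectors library are omitted, as in the other snippets.
    public class MyRecordTransformer
            implements ITransformer<String, ElasticSearchObject> {

        // Decode the raw Kinesis record payload into a String
        @Override
        public String toClass(Record record) throws IOException {
            return new String(record.getData().array(), "UTF-8");
        }

        // Build the Elasticsearch document, setting per-record metadata
        @Override
        public ElasticSearchObject fromClass(String record) throws IOException {
            ElasticSearchObject doc = new ElasticSearchObject(
                "mystream",                         // index
                "record",                           // type
                String.valueOf(record.hashCode()),  // id
                record);                            // source document
            doc.setTtl(86400000L);  // assumed setter; expire after one day
            return doc;
        }
    }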

    Note that you can also run the entire connector pipeline from within your Elasticsearch cluster using River.

    Getting Started
    Your code has the following duties:

    1. Set application specific configurations.
    2. Create and configure a KinesisConnectorPipeline with a Transformer, a Filter, a Buffer, and an Emitter.
    3. Create a KinesisConnectorExecutor that runs the pipeline continuously.
    All of the above components come with default implementations that you can easily replace with your own custom logic.

    Configure the Connector Properties
    The sample comes with a .properties file and a configurator. There are many settings and you can leave most of them set to their default values. For example, the following settings will:

    1. Configure the connector to bulk load data into Elasticsearch only after it has collected at least 1000 records.
    2. Use the local Elasticsearch cluster endpoint for testing.

    # Flush to Elasticsearch only after 1000 records have been buffered
    bufferRecordCountLimit = 1000
    # Elasticsearch cluster endpoint (localhost for local testing)
    elasticSearchEndpoint = localhost
    
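    For anything beyond local testing you will also need to tell the connector which application, stream, and Region to use. Here is a sketch with placeholder values; the property names are assumptions based on the KinesisConnectorConfiguration class, so verify them against your version of the library:

    # Assumed property names -- verify against KinesisConnectorConfiguration
    appName = ElasticSearchSample
    kinesisInputStream = myKinesisStream
    regionName = us-east-1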

    Implementing Pipeline Components
    In order to wire the Transformer, Filter, Buffer, and Emitter, your code must implement the IKinesisConnectorPipeline interface.

    public class ElasticSearchPipeline implements
            IKinesisConnectorPipeline<String, ElasticSearchObject> {

        // Emits buffered records to the Elasticsearch cluster
        @Override
        public IEmitter<ElasticSearchObject> getEmitter(
                KinesisConnectorConfiguration configuration) {
            return new ElasticSearchEmitter(configuration);
        }

        // Buffers records in memory until the configured limits are reached
        @Override
        public IBuffer<String> getBuffer(
                KinesisConnectorConfiguration configuration) {
            return new BasicMemoryBuffer<String>(configuration);
        }

        // Converts each incoming String into an ElasticSearchObject
        @Override
        public ITransformerBase<String, ElasticSearchObject> getTransformer(
                KinesisConnectorConfiguration configuration) {
            return new StringToElasticSearchTransformer();
        }

        // Lets every record through; swap in your own filter to drop records
        @Override
        public IFilter<String> getFilter(
                KinesisConnectorConfiguration configuration) {
            return new AllPassFilter<String>();
        }
    }

    The following snippet implements the abstract factory method, indicating the pipeline you wish to use:

    @Override
    public KinesisConnectorRecordProcessorFactory<String, ElasticSearchObject>
            getKinesisConnectorRecordProcessorFactory() {
        return new KinesisConnectorRecordProcessorFactory<String,
            ElasticSearchObject>(new ElasticSearchPipeline(), config);
    }

    Defining an Executor
    The following snippet defines a pipeline where the incoming Kinesis records are Strings and the outgoing records are ElasticSearchObjects:

    public class ElasticSearchExecutor extends
        KinesisConnectorExecutor<String, ElasticSearchObject>
    

    The following snippet implements the main method, which creates the Executor and starts it running:

    public static void main(String[] args) {
        // configFile names the .properties file described above
        KinesisConnectorExecutor<String, ElasticSearchObject> executor
            = new ElasticSearchExecutor(configFile);
        executor.run();
    }
    

    From here, make sure that your AWS credentials are provided correctly. Set up the project dependencies using ant setup. To run the app, use ant run and watch it go! All of the code is on GitHub, so you can get started immediately. Please post your questions and suggestions on the Kinesis Forum.

    Kinesis Client Library and Kinesis Connector Library
    When we launched Kinesis in November of 2013, we also introduced the Kinesis Client Library. You can use the client library to build applications that process streaming data. It handles complex issues such as load balancing of streaming data and coordination of distributed services, and it adapts to changes in stream volume, all in a fault-tolerant manner.
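    To give you a sense of the programming model, here is a minimal sketch of the record processor interface that client library applications implement. The class name is hypothetical and the method bodies are placeholders; the signatures reflect my reading of the 1.x interfaces, so treat this as an outline rather than a definitive implementation:

    // A sketch of a Kinesis Client Library record processor; the class
    // name is hypothetical and the bodies are placeholders.
    public class MyRecordProcessor implements IRecordProcessor {

        @Override
        public void initialize(String shardId) {
            // Called once per shard, before any records arrive
        }

        @Override
        public void processRecords(List<Record> records,
                IRecordProcessorCheckpointer checkpointer) {
            // Process a batch of records, then checkpoint progress
        }

        @Override
        public void shutdown(IRecordProcessorCheckpointer checkpointer,
                ShutdownReason reason) {
            // Checkpoint here when the shard has been fully consumed
        }
    }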

    We know that many developers want to consume and process incoming streams using a variety of other AWS and non-AWS services. In order to meet this need, we released the Kinesis Connector Library late last year with support for Amazon DynamoDB, Amazon Redshift, and Amazon Simple Storage Service (S3). We then followed that up with a Kinesis Storm Spout and an Amazon EMR connector earlier this year. Today we are expanding the Kinesis Connector Library with support for Elasticsearch.

    -- Rahul

  • ElastiCache T2 Support

    11 Sep 2014 in Amazon ElastiCache | permalink

    As you may already know, Amazon Elastic Compute Cloud (EC2)'s new T2 instance type provides a solid level of baseline performance and the ability to burst above the baseline as needed. As I wrote in my blog post, these instances are ideal for development, testing, and medium-traffic web sites.

    Today we are bringing the benefits of the T2 instance type to Amazon ElastiCache. The cache.t2.micro (555 megabytes of RAM), cache.t2.small (1.55 gigabytes of RAM), and cache.t2.medium (3.22 gigabytes of RAM) cache nodes feature the latest Intel Xeon processors running at up to 3.3 GHz. You can launch new cache nodes using the Memcached or Redis engines.

    T2 instances are supported only within an Amazon Virtual Private Cloud. The Redis Backup and Restore feature and the Redis append-only file (AOF) feature are not currently usable with the T2 instances. You can launch them in the usual ways (command line, API, CloudFormation, or Console):
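    For example, here is a minimal sketch using the AWS SDK for Java. The cluster identifier and subnet group name are placeholders; because T2 nodes are VPC-only, the subnet group must reference subnets in your VPC:

    // A sketch using the AWS SDK for Java; the cluster id and subnet
    // group name are placeholders, and error handling is omitted.
    import com.amazonaws.services.elasticache.AmazonElastiCacheClient;
    import com.amazonaws.services.elasticache.model.CreateCacheClusterRequest;

    public class LaunchT2CacheNode {
        public static void main(String[] args) {
            AmazonElastiCacheClient client = new AmazonElastiCacheClient();
            client.createCacheCluster(new CreateCacheClusterRequest()
                .withCacheClusterId("my-t2-cache")             // placeholder
                .withCacheNodeType("cache.t2.micro")
                .withEngine("memcached")                       // or "redis"
                .withNumCacheNodes(1)
                .withCacheSubnetGroupName("my-subnet-group")); // placeholder
        }
    }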

    Pricing and Availability
    Pricing for T2 cache nodes starts at $0.008 per hour for Three Year Heavy Utilization Reserved Cache Nodes and $0.017 per hour for On-Demand Cache Nodes (see the ElastiCache Pricing page for more information). As part of the AWS Free Tier, eligible AWS users have access to a cache.t2.micro instance for 750 hours per month at no charge.

    The new cache nodes are available today in all AWS Regions except AWS GovCloud (US) and you can start using them today!

    -- Jeff;

  • Kick-Start Your Cloud Storage Project With the Riverbed SteelStore Gateway

    09 Sep 2014 in Amazon S3 | permalink

    Many AWS customers begin their journey to the cloud by implementing a backup and recovery discipline. Because the cloud can provide any desired amount of durable storage that is both secure and cost-effective, organizations of all shapes and sizes are using it to support robust backup and recovery models that eliminate the need for on-premises infrastructure.

    Our friends at Riverbed have launched an exclusive promotion for AWS customers. This promotion is designed to help qualified enterprise, mid-market, and SMB customers in North America to kick-start their cloud-storage projects by applying for up to 8 TB of free Amazon Simple Storage Service (S3) usage for six months.

    If you qualify for the promotion, you will be invited to download the Riverbed SteelStore™ software appliance (you will also receive enough AWS credits to allow you to store 8 TB of data per month for six months). With advanced compression, deduplication, network acceleration and encryption features, SteelStore will provide you with enterprise-class levels of performance, availability, data security, and data durability. All data is encrypted using AES-256 before leaving your premises; this gives you protection in transit and at rest. SteelStore intelligently caches up to 2 TB of recent backups locally for rapid restoration.

    The SteelStore appliance is easy to implement! You can be up and running in a matter of minutes with the implementation guide, getting started guide, and user guide that you will receive as part of your download. The appliance is compatible with over 85% of the backup products on the market, including solutions from CA, CommVault, Dell, EMC, HP, IBM, Symantec, and Veeam.

    To learn more or to apply for this exclusive promotion, click here!

    -- Jeff;

  • Use AWS OpsWorks & Ruby to Build and Scale Simple Workflow Applications

    From time to time, one of my blog posts will describe a way to make use of two AWS products or services together. Today I am going to go one better and show you how to bring the following trio of items into play simultaneously:

    • Amazon Simple Workflow Service (SWF)
    • The AWS Flow Framework for Ruby
    • AWS OpsWorks

    All Together Now
    With today's launch, it is now even easier for you to build, host, and scale SWF applications in Ruby. A new, dedicated layer in OpsWorks simplifies the deployment of workflows and activities written in the AWS Flow Framework for Ruby. By combining AWS OpsWorks and SWF, you can easily set up a worker fleet that runs in the cloud, scales automatically, and makes use of advanced Amazon Elastic Compute Cloud (EC2) features.

    This new layer is accessible from the AWS Management Console. As part of this launch, we are also releasing a new command-line utility called the runner. You can use this utility to test your workflow locally before pushing it to the cloud. The runner uses information provided in a new, JSON-based configuration file to register workflow and activity types and to start the workers.

    Console Support
    A Ruby Flow layer can be added to any OpsWorks stack that is running version 11.10 (or newer) of Chef. Simply add a new layer by choosing AWS Flow (Ruby) from the menu:

    You can customize the layer if necessary (the defaults will work fine for most applications):

    The layer will be created immediately and will include four Chef recipes that are specific to Ruby Flow (the recipes are available on GitHub):

    The Runner
    As part of today's release we are including a new command-line utility, aws-flow-ruby, also known as the runner. This utility is used by AWS OpsWorks to run your workflow code. You can also use it to test your SWF applications locally before you push them to the cloud.

    The runner is configured using a JSON file that looks like this:

    {
      "domains": [{
          "name": "BookingSample"
      }],

      "workflow_workers": [{
          "task_list": "workflow_tasklist"
      }],

      "activity_workers": [{
          "task_list": "activity_tasklist"
      }]
    }
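    Once the configuration file is saved (as, say, worker.json, a name chosen here for illustration), you can start the workers locally; assuming the standard options described in the user guide, the invocation looks something like this:

    aws-flow-ruby -f worker.json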
    

    Go With the Flow
    The new Ruby Flow layer type is available now and you can start using it today. To learn more about it, take a look at the new OpsWorks section of the AWS Flow Framework for Ruby User Guide.

    -- Jeff;

  • AWS Week in Review - September 1, 2014

    08 Sep 2014 | permalink

    Let's take a quick look at what happened in AWS-land last week:

    Monday, September 1
    • We celebrated Labor Day in the US, and launched nothing!
    Tuesday, September 2
    Wednesday, September 3
    Thursday, September 4
    Friday, September 5

    Stay tuned for next week! In the meantime, follow me on Twitter and subscribe to the RSS feed.

    -- Jeff;

  • Five More EC2 Instance Types for AWS GovCloud (US)

    AWS GovCloud (US) is an isolated AWS Region designed to allow US government agencies and customers to move sensitive workloads into the cloud. Today we are enhancing this Region with the addition of five more EC2 instance types. Instances of these types can be launched directly or through Auto Scaling groups.

    Let's take a look at the newly available instance types and review the use cases for each one.

    HS1 - High Storage Density & Sequential I/O
    EC2's HS1 instances provide very high storage density along with high sequential read and write performance, at the lowest cost per GB of storage of any EC2 instance type. These instances are ideal for data warehousing, Hadoop/MapReduce applications, and parallel file systems. To learn more about this instance type, read my blog post, The New EC2 High Storage Instance Family.

    C3 - High Compute Capacity
    The C3 instances are ideal for applications that benefit from a higher amount of compute capacity relative to memory (in comparison to the General Purpose instances); they are recommended for high-performance web servers and other scale-out, compute-intensive applications. To learn more about the C3 instances, read A New Generation of EC2 Instances for Compute-Intensive Workloads.

    R3 - Memory Optimized
    R3 instances are the latest generation of memory-optimized instances. We recommend them for high-performance databases, distributed memory caches, in-memory analytics, genome assembly and analysis, and larger deployments of Microsoft SharePoint and other enterprise applications. The R3 instances support Hardware Virtualization (HVM) Amazon Machine Images (AMIs) only. My recent post, Now Available - New Memory-Optimized EC2 Instances, contains more information.

    I2 - High Storage & Random I/O
    EC2's I2 instances provide very fast, SSD-backed instance storage that is optimized for very high random I/O performance and delivers high IOPS at a low cost. You can use I2 instances for transactional systems and high-performance NoSQL databases like Cassandra and MongoDB. Like the R3 instances, the I2 instances currently support Hardware Virtualization (HVM) Amazon Machine Images (AMIs) only. I described these instances in considerable detail last year in Amazon EC2's New I2 Instance Type - Available Now!.

    T2 - Economical Base + Full-Core Burst
    Finally, the T2 instances are built around a processing allocation model that provides you with a generous, assured baseline amount of processing power coupled with the ability to automatically and transparently scale up to a full core when you need more compute power. Your ability to burst is based on the concept of "CPU Credits" that you accumulate during quiet periods and spend when things get busy. For example, a t2.micro accrues 6 CPU credits per hour, and each credit buys one minute of full-core usage, so an instance that idles overnight banks plenty of credits to absorb a busy morning. You can provision an instance of modest size and cost and still have more than adequate compute power in reserve to handle peak demands. To learn more about these instances, read my recent blog post, New Low Cost EC2 Instances with Burstable Performance.

    Available Now
    These instance types are available now to everyone who uses AWS GovCloud (US). Visit the AWS GovCloud (US) EC2 Pricing Page to learn more.

    -- Jeff;

  • Launch Your Startup at AWS re:Invent

    04 Sep 2014 in AWS re:Invent, Startups | permalink

    Sitting here at my desk in Seattle, I am surrounded by colleagues who are working non-stop to make this year's AWS re:Invent conference the best one yet! I get to hear all about the keynotes, sessions, and the big party without even leaving my desk.

    In 2013, five exciting startups had the opportunity to launch at re:Invent in a session emceed by Amazon CTO Werner Vogels. Representatives from Koality, CardFlight, Runscope, SportXast, and Nitrous.IO each presented for five minutes and fielded Werner's questions for another minute.

    The tradition will continue in 2014. If your AWS-powered startup is currently in stealth mode, or if you are already out and about and ready to announce a major feature on stage with Werner, I would like to invite you to apply to do so at re:Invent.

    For consideration, please email the following information to awsstartuplaunch@amazon.com (keep your response to 500 words or less):

        Company Overview - Tell us your company name, location, website URL, and give us some information about your core product or service.
        Launch Details - Tell us what you plan to launch or announce.
        Target Audience - Describe the target market and audience for your product or service (businesses, consumers, teachers, students, etc).
        AWS Usage - List the AWS services that you use.
        Team Background - Include some background information on you and on your team.
        Exclusive Offer - Tell us about an exclusive offer that you can make to re:Invent attendees.

    You'll need to get the information to us before 5:00 PM PT on September 30th. You've got a month and we can't wait to hear from you!

    -- Jeff;