Amazon Web Services Blog

  • Amazon SNS Update - Large Topics and MPNS Authenticated Mode

    19 Aug 2014 in Amazon SNS | permalink

    Amazon Simple Notification Service (SNS) is a fast and flexible push messaging service. You can easily send messages to Apple, Google, Fire OS and Windows devices, including Android devices in China (via Baidu Cloud Push).

    Today we are enhancing SNS with support for large topics (more than 10,000 subscribers) and authenticated delivery to MPNS (Microsoft Push Notification Service).

    Large Topics
    SNS offers two publish modes. First, you can push messages directly to specific mobile devices. Second, you can create an SNS topic, give your customers a way to subscribe to it, and then publish messages to the topic with a single API call. This mode is great for broadcasting breaking news, flash deals, and in-game events or new features. You can combine customers from different platforms in the same topic, and you can send a specific payload to each platform (for example, one for iOS and another for Android), again in a single call. Suppose you have created the following topic:

    With the ARN for the topic (arn:aws:sns:us-west-2:xxxxxxxxxxxx:amazon-sns) in hand, here's how you publish a message to all of the subscribers:

    $result = $client->publish(array(
        'TopicArn' => 'arn:aws:sns:us-west-2:xxxxxxxxxxxx:amazon-sns',
        // Message is required
        'Message' => 'Hello Subscribers',
        'Subject' => 'Hello'
    ));
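
    The same topic can also carry a different payload for each mobile platform in a single call. Here's a sketch of how that looks with the AWS CLI; the platform keys (APNS and GCM) and payloads are illustrative, and the "default" entry is required whenever the message structure is set to json:

    $ aws sns publish \
        --topic-arn arn:aws:sns:us-west-2:xxxxxxxxxxxx:amazon-sns \
        --subject "Hello" \
        --message-structure json \
        --message '{
          "default": "Hello Subscribers",
          "APNS": "{\"aps\":{\"alert\":\"Hello iOS Subscribers\"}}",
          "GCM": "{\"data\":{\"message\":\"Hello Android Subscribers\"}}"
        }'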
    

    Today we are lifting the limit of 10,000 subscriptions per SNS topic; you can now create as many as you need and no longer need to partition large subscription lists across multiple topics. This has been a frequent request from AWS customers that use SNS to build news and media sharing applications.

    There is an administrative limit of 10 million subscriptions per topic, but we'll happily raise it if you expect to have more subscribers for a single topic. Fill out the Contact Us form, select SNS, and we'll take good care of you!

    Authenticated Delivery to MPNS
    Microsoft Push Notification Service (MPNS) is the push notification relay service for Windows Phone devices prior to Windows 8.1. SNS now supports authenticated delivery to MPNS. In this mode, MPNS does not enforce any limitations on the number of notifications that can be sent to a channel in any given day (per the documentation on Windows Phone Push Mode, there's a daily limit of 500 unauthenticated push notifications per channel).
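
    To turn on authenticated delivery, you provide the TLS certificate and private key issued for your app when you create the SNS platform application for MPNS. Here's a minimal sketch using the AWS CLI; the application name and file name are illustrative, and per the SNS mobile push documentation the certificate goes in PlatformPrincipal and the private key in PlatformCredential:

    $ aws sns create-platform-application \
        --name MyWindowsPhoneApp \
        --platform MPNS \
        --attributes file://mpns-attributes.json

    where mpns-attributes.json looks like this:

    {
      "PlatformPrincipal": "-----BEGIN CERTIFICATE-----\n...\n-----END CERTIFICATE-----",
      "PlatformCredential": "-----BEGIN RSA PRIVATE KEY-----\n...\n-----END RSA PRIVATE KEY-----"
    }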

    If you require this functionality for devices that run Windows 8.1 and above, please consider using Amazon SNS with Windows Push Notification Services (WNS).

    -- Jeff;

  • AWS Week in Review - August 11, 2014

    18 Aug 2014 | permalink

    Let's take a quick look at what happened in AWS-land last week:

    Monday, August 11
    Tuesday, August 12
    Wednesday, August 13
    Thursday, August 14
    Friday, August 15

    Stay tuned for next week! In the meantime, follow me on Twitter and subscribe to the RSS feed.

    -- Jeff;

  • New Introductory AWS Videos & Labs - DynamoDB, Elastic Beanstalk, and Elastic MapReduce

    13 Aug 2014 in Training | permalink

    Earlier this year, I wrote about our Free AWS Instructional Videos and the associated AWS labs. The videos and the labs will help you to get started using an AWS Service in about 30 minutes.

    We have recently added three new videos and labs to go along with them:

    • Introduction to Amazon DynamoDB - Video / Lab
    • Introduction to AWS Elastic Beanstalk - Video / Lab
    • Introduction to Amazon Elastic MapReduce - Video / Lab

    Each "Introduction to AWS" topic includes a short video to help you learn key concepts and terminology as well as see a step-by-step console demonstration. After watching the video, you can get hands-on practice using that AWS service with a self-paced training lab. The videos and labs are available on-demand and at no cost.

    -- Jeff;

  • Simplified PHP Development - Z-Ray in the Cloud from Zend

    12 Aug 2014 in AWS Marketplace, PHP | permalink

    PHP development on AWS just got easier today. Zend Technologies (the PHP company) is now delivering Z-Ray, a productivity booster for PHP developers, on the AWS Marketplace.

    The success of a platform is based on many factors, developer productivity being one of the most important. Productivity leads to innovation and innovation leads to differentiation and (hopefully) success in the marketplace! Many of the companies that I talk to are trying to gain an advantage in their industry through their web and mobile applications. The Cloud gives them the ability to automate and provision environments quickly, but they are still looking for tools that will help their developers to work smarter and more efficiently.

    Z-Ray, a key feature of the new Zend Server 7 (now available in Developer Edition on the AWS Marketplace), is a tool that can help PHP developers make this happen! Z-Ray is a breakthrough technology for getting in-context feedback on the behavior of PHP code as it is being developed. It works by injecting real-time application feedback into the developer's browser. So if I'm working on a web page, Z-Ray's information is displayed at the bottom, like this (all images were supplied by Zend, in case you are wondering where the cool formatting came from):

    Every time I refresh a page, the data and statistics in Z-Ray are updated in real time. Z-Ray provides information about page requests, execution time and memory peaks, events, errors & warnings, SQL query execution, functions, and variables.

    When I click on any of the monitored features, Z-Ray provides an expanded and more detailed view. Here's some profiling and usage info for all of the functions used on the target page:

    From this view, I can then drill down into a particular function and can even debug Zend Server apps from within Z-Ray. The local and global variables are also accessible:

    Z-Ray also provides insights into database activity:

    Zend is essentially leveraging their in-depth understanding of the PHP runtime (the Zend Engine) and delivering insights to developers while they are developing. The information provided by Z-Ray helps drive up productivity and quality and eliminates friction further down the continuous delivery pipeline.

    Many of the developers that I know find that writing a line of code, saving the changes, and refreshing the web site to view the changes is more efficient than configuring debuggers and profilers or peeking into log files to see what's going on with their code. With Z-Ray, developers get the visibility they need without having to change any of their workflows or preferred tools.

    With Z-Ray on AWS, you get the added benefit of being able to start with a very small developer image and transition to a full-fledged production environment that is fully clustered and auto-scaling (AWS CloudFormation templates are available). In addition, developers using Zend Studio will be able to leverage Z-Ray to jump directly to the code that needs to be fixed (Z-Ray will work with any of your favorite tools; you need not use Zend Studio).

    Comit Developers is a managed service provider who leveraged the AWS Marketplace to deploy Z-Ray. They provide full service web site design, marketing, and consulting services. "Getting going on Zend Server was a snap through the AWS Marketplace, and now we're in the process of moving more than 400 of our customers from a traditional hosted environment to Zend Server on AWS. One reason why we chose AWS is because the Marketplace makes it easy for us to select the software components specific to our customers’ industry and application requirements," says Bryan "BJ" Hoffpauir, Chief Architect of Comit Developers. "Onboarding our customers and managing their instances is just so much better on AWS than how we used to do it."

    If you are a PHP developer and you build applications that make use of the AWS APIs, I'd like to invite you to take Z-Ray for a spin and see what it does for your productivity. Take advantage of the 30-day trial of Zend Server 7 and give Z-Ray a test drive today!

    -- Jeff;

  • Rapidly Deploy SharePoint on AWS With New Deployment Guide and Templates

    Building on top of our earlier work to bring Microsoft SharePoint to AWS, I am happy to announce that we have published a comprehensive Quick Start Reference and a set of AWS CloudFormation templates.

    As part of today's launch, you get a reference deployment, architectural guidance, and a fully automated way to deploy a production-ready installation of SharePoint with a couple of clicks in under an hour, all in true self-service form.

    Before you run this template, you need to run our SQL Quick Start (also known as "Microsoft Windows Server Failover Clustering and SQL Server AlwaysOn Availability Groups"). It will set up Microsoft SQL Server 2012 or 2014 instances configured as a Windows Server Failover Cluster.

    The template we are announcing today runs on top of this cluster. The template deploys and configures all of the "moving parts" including the Microsoft Active Directory Domain Services infrastructure and a SharePoint farm consisting of multiple Amazon Elastic Compute Cloud (EC2) instances spread across several Availability Zones within an Amazon Virtual Private Cloud.

    Reference Deployment Architecture
    The Reference Deployment document will walk you through all of the steps necessary to end up with a highly available SharePoint Server 2013 environment! If you use the default parameters, you will end up with the following environment, all running in the AWS Cloud.

    The reference deployment incorporates the best practices for SharePoint deployment and AWS security. It contains the following AWS components:

    • An Amazon Virtual Private Cloud spanning multiple Availability Zones, containing a pair of private subnets and a DMZ on a pair of public subnets.
    • An Internet Gateway to allow external connections to the public subnets.
    • EC2 instances in the DMZ with support for RDP to allow for remote administration.
    • An Elastic Load Balancer to route traffic to the EC2 instances running the SharePoint front-end.
    • Additional EC2 instances to run the SharePoint back-end.
    • Additional EC2 instances to run the Active Directory Domain Controller.
    • Preconfigured VPC security groups and Network ACLs.

    The document walks you through each component of the architecture and explains what it does and how it works. It also details an optional "Streamlined" deployment topology that can be appropriate for certain use cases, along with an "Office Web Apps" option that supports browser-based editing of Office documents that are stored in SharePoint libraries. There's even an option to create an Intranet deployment that does not include an Internet-facing element.

    The entire setup process is automated and needs almost no manual intervention. You will need to download SharePoint from a source that depends on your current licensing agreement with Microsoft. By default, the installation uses a trial key for deployment. In order to deploy a licensed version of SharePoint Server, you can use License Mobility Through Software Assurance.

    CloudFormation Template
    The CloudFormation template will prompt you for all of the information needed to start the setup process:

    The template is fairly complex (over 4600 lines of JSON) and is a good place to start when you are looking for examples of best practices for the use of CloudFormation to automate the instantiation of complex systems.
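
    If you would rather launch the stack from the command line than from the console, here's a sketch using the AWS CLI; the stack name, template URL, and parameter name are illustrative (the Quick Start template defines its own parameter set), and --capabilities CAPABILITY_IAM is only needed if the template creates IAM resources:

    $ aws cloudformation create-stack \
        --stack-name SharePointQuickStart \
        --template-url https://s3.amazonaws.com/YOUR_TEMPLATE_BUCKET/sharepoint-quick-start.template \
        --parameters ParameterKey=KeyPairName,ParameterValue=my-key-pair \
        --capabilities CAPABILITY_IAM

    You can then watch the deployment progress with aws cloudformation describe-stack-events --stack-name SharePointQuickStart.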

    -- Jeff;

  • Multi-Factor Authentication for Amazon WorkSpaces

    Amazon WorkSpaces is a fully managed desktop computing service in the cloud. You can easily provision and manage cloud-based desktops that can be accessed from laptops, iPads, Kindle Fire, and Android tablets.

    Today we are enhancing WorkSpaces with support for multi-factor authentication using an on-premises RADIUS server. In plain English, your WorkSpaces users will now be able to authenticate themselves using the same mechanism that they already use for other forms of remote access to your organization's resources.

    Once this new feature has been enabled and configured, WorkSpaces users will log in by entering their Active Directory user name and password followed by an OTP (One-Time Passcode) supplied by a hardware or a software token.

    Important Details
    This feature should work with any security provider that supports RADIUS authentication (we have verified our implementation against the Symantec VIP and Microsoft RADIUS server products). We currently support the PAP, CHAP, MS-CHAPv1, and MS-CHAPv2 protocols, along with RADIUS proxies.

    As a WorkSpaces administrator, you can configure this feature for your users by entering the connection information (IP addresses, shared secret, protocol, timeout, and retry count) for your RADIUS server fleet in the Directories section of the WorkSpaces console. You can provision multiple RADIUS servers to increase availability if you'd like. In this case you can enter the IP addresses of all of the servers or you can enter the same information for a load balancer in front of the fleet.

    On the Roadmap
    As is the case with every part of AWS, we plan to enhance this feature over time. Although I'll stick to our usual policy of not spilling any beans before their time, I can say that we expect to add support for additional authentication options such as smart cards and certificates. We are always interested in your feature requests; please feel free to post a note to the Amazon WorkSpaces Forum to make sure that we see them. You can also consult the Amazon WorkSpaces documentation for more information about Amazon WorkSpaces and this new feature.

    Price & Availability
    This feature is available now to Amazon WorkSpaces customers at no extra charge, and you can start using it today.

    -- Jeff;

    PS - Last month we made a couple of enhancements to WorkSpaces that will improve integration with your on-premises Active Directory. You can now search for and select the desired Organizational Unit (OU) from your Active Directory. You can now use separate domains for your users and your resources; this improves both security and manageability. You can also add a security group that is effective within the VPC associated with your WorkSpaces desktops; this allows you to control network access from WorkSpaces to other resources in your VPC and on-premises network. To learn more, read this forum post.

  • Tag Your Elastic Load Balancers

    Elastic Load Balancing helps you to build applications that are resilient and easy to scale. You can create both public-facing and internal load balancers in the AWS Management Console with a couple of clicks.

    Today we are launching a helpful new feature for Elastic Load Balancing. You can now add up to ten tags (name/value pairs) to each of your load balancers. You can add tags to new load balancers when you create them. You can also add, remove, and change tags on existing load balancers. Tag names can consist of up to 128 Unicode characters; values can have up to 256.

    Tags can be used for a number of different purposes, including tracking identity, role, or owner. Tagging items also allows them to be grouped and segregated for billing and cost tracking. Once you tag your load balancers, you can visualize your spending patterns and analyze costs by tag using the Cost Explorer in the AWS Management Console.

    You can manage tags from the AWS Management Console, Elastic Load Balancing API, or the AWS Command Line Interface (CLI). Here's how you add tags from the Console when you create a new Elastic Load Balancer:

    You can see the tags on each of your Elastic Load Balancers at a glance:

    You can edit (add, remove, and change) tags just as easily:
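
    If you would rather script the process, here's a sketch of the equivalent operations using the AWS CLI; the load balancer name and the tag keys and values are illustrative:

    # add (or update) tags on an existing load balancer
    $ aws elb add-tags --load-balancer-names my-load-balancer \
        --tags Key=project,Value=website Key=owner,Value=jeff

    # list the tags on one or more load balancers
    $ aws elb describe-tags --load-balancer-names my-load-balancer

    # remove a tag by key
    $ aws elb remove-tags --load-balancer-names my-load-balancer --tags Key=owner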

    This new feature is available now and you can start using it today. To learn more, read about ELB Tagging in the Elastic Load Balancing Developer Guide.

    -- Jeff;

  • AWS Week in Review - August 4, 2014

    11 Aug 2014 | permalink

    Let's take a quick look at what happened in AWS-land last week:

    Monday, August 4
    Tuesday, August 5
    Wednesday, August 6
    Thursday, August 7
    Friday, August 8

    Stay tuned for next week! In the meantime, follow me on Twitter and subscribe to the RSS feed.

    -- Jeff;

  • AWS SDK for Python (Boto) Now Supports Python 3

    07 Aug 2014 in Developers | permalink

    The AWS SDK for Python (also known as Boto) has been updated and is now compatible with Python 3. You can now build AWS applications using versions 2.6, 2.7, 3.3, and 3.4 of Python. Here's some Boto code running on Python 3.4.1:

    (py3)$ python
    Python 3.4.1 (default, May 19 2014, 13:10:29)
    [GCC 4.2.1 Compatible Apple LLVM 5.1 (clang-583.0.40)] on darwin
    Type "help", "copyright", "credits" or "license" for more information.
    >>> import boto
    >>> s3 = boto.connect_s3()
    >>> bucket = s3.create_bucket('boto-py3-test')
    >>> from boto.s3.key import Key
    >>> item = Key(bucket)
    >>> item.key = 'hello.txt'
    >>> item.set_contents_from_string('Boto and Python 3 rock!')
    23
    >>> item = bucket.get_key('hello.txt')
    >>> item.get_contents_as_string().decode('utf-8')
    'Boto and Python 3 rock!'
    >>> 
    

    We would not have been able to make this happen without the open source contributions from the amazing Boto community. You can download the latest version of Boto from PyPI or GitHub and start using AWS for your Python 3 projects today! For more information about supported services and specific, service-by-service compatibility, read the Boto Documentation.

    -- Jeff;

  • All Data Are Belong to AWS: Streaming upload via Fluentd

    06 Aug 2014 in Guest Post, Kinesis | permalink

    I've got a special treat for you today! Kiyoto Tamura of Treasure Data wrote a really interesting guest post to introduce you to Fluentd and to show you how you can use it with a wide variety of AWS services to collect, store, and process data.

    -- Jeff;


    Data storage is Cheap. Data collection is Not!
    Data storage has become incredibly cheap. When I say cheap, I do not mean hardware cost so much as operational and labor cost. Thanks to the advent of IaaS offerings like AWS, many of us no longer spend days and weeks on capacity planning (or better yet, can provision resources in an auto-scalable manner) or worry about our server racks catching fire.

    Cheaper storage means that our ideas are no longer bound by how much data we can store. A handful of engineers can run a dozen or so Redshift instances or manage hundreds of billions of log records backed up in Amazon Simple Storage Service (S3) to power their daily EMR batch jobs. Analyzing massive datasets is no longer a privilege exclusive to big, tech-savvy companies.

    However, data collection is still a major challenge: data does not magically originate inside storage systems or organize itself; hence, many (ad hoc) scripts are written to parse and load data. These scripts are brittle, error-prone, and nearly impossible to extend.

    This is the problem Fluentd tries to solve: scalable, flexible data collection in real-time. In the rest of this blog post, I will walk through the basic architecture of Fluentd and share some use cases on AWS.

    Fluentd: Open Source Data Collector for High-volume Data Streams
    Fluentd is an open source data collector originally written at Treasure Data. Open-sourced in October 2011, it has gained traction steadily over the last 2.5 years: today, we have a thriving community of ~50 contributors and 2,100+ stargazers on GitHub, with companies like SlideShare and Nintendo deploying it in production.

    Inputs and Outputs
    At the highest level, Fluentd consists of inputs and outputs. Inputs specify how and where Fluentd ingests data; outputs specify where that data is sent.

    Common inputs are:

    1. Tailing log files and parsing each line (or multiple lines at a time).
    2. Listening to syslog messages.
    3. Accepting HTTP requests and parsing the message body.

    There are two key things to know about inputs: JSON and tagging.

    1. Fluentd embraces JSON as its core data format, and each input is responsible for turning incoming data into a series of JSON "events."
    2. Each input gives a tag to the data it ingests. Based on the tag, Fluentd decides what to do with data coming from different inputs (see below).

    Once data flows into Fluentd via inputs, Fluentd looks at each event's tag (as explained in item 2 above) and routes it to output targets such as a local filesystem, RDBMSs, NoSQL databases, and AWS services.
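
    To make this concrete, here's a sketch of the tagged JSON event that Fluentd might produce for a single Apache access log line; the exact field names depend on the parser you configure (these are typical of the apache2 format):

    tag:    s3.apache.access
    time:   2014-08-06 10:15:30 +0000
    record: {"host": "192.168.0.1", "method": "GET", "path": "/index.html",
             "code": 200, "size": 2326, "agent": "Mozilla/5.0"}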

    Open and Pluggable Architecture
    How does Fluentd have so many inputs and outputs already? The secret is its open, pluggable architecture. With minimal knowledge of Ruby, you can build a new plugin in a matter of a few hours. Unsurprisingly, many Fluentd users are also AWS enthusiasts, so we already have plugins for the following AWS services:

    1. Amazon Simple Storage Service (S3) (output)
    2. Amazon Redshift (output)
    3. Amazon Simple Queue Service (SQS) (input and output)
    4. Amazon Kinesis (output)
    5. Amazon DynamoDB (output)
    6. AWS CloudWatch (input)

    Performance and Reliability
    Whenever I "confess" that Fluentd is mostly written in Ruby, people express concerns about performance. Fear not. Fluentd is plenty fast. On a modern server, it can process ~15,000 events/sec on a single core, and you can get better throughput by running Fluentd on multiple cores.

    Fluentd gets its speed from using lower-level libraries written in C for performance-critical parts of the software: for example, Fluentd uses Cool.io (maintained by Masahiro Nakagawa, the main maintainer of Fluentd) for its event loop and MessagePack for Ruby (maintained by Sadayuki Furuhashi, the original author of Fluentd) for its internal data format.

    Speed is nice, but reliability is a must for log collection: data loss leads to bad data and worse decisions. Fluentd ensures reliability through buffering. Output plugins can be configured to buffer their data either in memory or on disk, so that if a data transfer fails, it can be retried without data loss. The buffering logic is highly tunable and can be customized for various throughput/latency requirements.

    Example: Archiving Apache Logs into S3
    Now that I've given an overview of Fluentd's features, let's dive into an example: setting up Fluentd to archive Apache web server logs into S3.

    Step 1: Getting Fluentd
    Fluentd is available as a Ruby gem (gem install fluentd). Also, Treasure Data packages it with all the dependencies as td-agent. Here, we proceed with td-agent. I assume that you are on Ubuntu Precise (12.04), but td-agent is also available for Ubuntu Lucid and CentOS 5/6, with support for Ubuntu Trusty forthcoming.

    Run the following command:

    curl -L http://toolbelt.treasuredata.com/sh/install-ubuntu-precise.sh | sh
    

    You can check that td-agent is successfully installed by running the following command:

    $ which td-agent
    
    /usr/sbin/td-agent
    

    Step 2: Configuring Input and Output
    For td-agent, the configuration file is located at /etc/td-agent/td-agent.conf. Let's reconfigure it so that it tails the Apache log file.

    <source>
      type tail
      format apache2
      path /var/log/apache2/access_log
      pos_file /var/log/td-agent/apache2.access_log.pos
      tag s3.apache.access
    </source>
    

    This snippet configures the Apache log file input. It tells Fluentd to tail the log file located at /var/log/apache2/access_log, parse it according to the Apache combined log format and tag it as s3.apache.access.

    Next, we configure the S3 output as follows:

    <match s3.*.*>
      type s3
    
      s3_bucket YOUR_BUCKET_NAME
      path logs/
      buffer_path /var/log/td-agent/s3
    
      time_slice_format %Y%m%d%H
      time_slice_wait 10m
      utc
      
      format_json true
      include_time_key true
      include_tag_key true
    
      buffer_chunk_limit 256m
    </match>
    

    The <match s3.*.*> directive tells Fluentd to match any event whose tag 1) has three parts and 2) starts with s3. Since all the events coming from the Apache access log have the tag s3.apache.access, they get matched here and sent to S3.

    Finally, let's start td-agent with the updated configuration:

    $ sudo service td-agent start
    * Starting td-agent td-agent          [OK]
    

    It might take about 10 minutes for your data to appear in S3 due to buffering (see "time_slice_wait"), but eventually logs should appear in YOUR_BUCKET_NAME/logs/yyyyMMddHH. Also, make sure that Fluentd has write access to your S3 bucket. The following setting should be used for IAM roles:

    {
      "Effect": "Allow",
      "Action": [
        "s3:Get*", "s3:List*", "s3:Put*", "s3:Post*"
      ],
      "Resource": [
        "arn:aws:s3:::YOUR_BUCKET_NAME/logs/*", "arn:aws:s3:::YOUR_BUCKET_NAME"
      ]
    }
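
    Once events start flowing, a quick way to confirm that logs are arriving is to list the prefix with the AWS CLI (bucket name illustrative):

    $ aws s3 ls s3://YOUR_BUCKET_NAME/logs/ --recursive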
    

    What's Next?
    The above overview and example give you only a glimpse of what can be done with Fluentd. You can learn more about Fluentd on its website and documentation and contribute to the project on its GitHub repository. If you have any questions, tweet to us on Twitter or ask us questions on the mailing list!

    -- Kiyoto
