Amazon Web Services Blog

  • Big Data Update - New Blog and New Web-Based Training

    22 Jul 2014 in Big Data, Training | permalink

    The topic of big data comes up almost every time I meet with current or potential AWS customers. They want to store, process, and extract meaning from data sets that seemingly grow in size with every meeting.

    In order to help our customers understand the full spectrum of AWS resources available for their big data problems, we are introducing two new resources -- a new AWS Big Data Blog and web-based training on Big Data Technology Fundamentals.

    AWS Big Data Blog
    The AWS Big Data Blog is a way for data scientists and developers to learn big data best practices, discover which managed AWS Big Data services are the best fit for their use case, and get started with AWS big data services. Our goal is to make this the hub for developers to discover new ways to collect, store, clean, process, and visualize data at any scale.

    Readers will find short tutorials with code samples, case studies that demonstrate the unique benefits of doing big data on AWS, new feature announcements, partner- and customer-generated demos and tutorials, and tips and best practices for using AWS big data services.

    The first two posts on the blog show you how to Build a Recommender with Apache Mahout on Amazon Elastic MapReduce and how to Power Gaming Applications with Amazon DynamoDB.

    Big Data Training
    If you are looking for a structured way to learn about the tools, techniques, and options available to you as you dig in to big data, our new web-based Big Data Technology Fundamentals course should be of interest to you.

    You should plan to spend about three hours going through this course. You will first learn how to identify common tools and technologies that can be used to create big data solutions. Then you will gain an understanding of the MapReduce framework, including the map, shuffle and sort, and reduce components. Finally, you will learn how to use the Pig and Hive programming frameworks to analyze and query large amounts of data.

    You will need a working knowledge of programming in Java, C#, or a similar language in order to fully benefit from this training course.

    The web-based course is offered at no charge, and can be used on its own or to prepare for our instructor-led Big Data on AWS course.

    -- Jeff;

  • AWS Week in Review - July 14, 2014

    21 Jul 2014 | permalink

    Let's take a quick look at what happened in AWS-land last week:

    Monday, July 14
    Tuesday July 15
    Wednesday, July 16
    Thursday, July 17
    Friday, July 18

    Stay tuned for next week! In the meantime, follow me on Twitter and subscribe to the RSS feed.

    -- Jeff;

  • AWS Support API Update - Attachment Creation and Lightweight Monitoring

    16 Jul 2014 in AWS Support | permalink

    The AWS Support API provides you with programmatic access to your support cases and to the AWS Trusted Advisor. Today we are extending the API in order to give you more control over the cases that you create and a new, lightweight way to access information about your cases. The examples in this post make use of the AWS SDK for Java.

    Creating Attachments for Support Cases
    When you create a support case, you may want to include additional information along with the case. Perhaps you want to attach some sample code, a protocol trace, or some screen shots. With today's release you can now create, manage, and use attachments programmatically.

    My colleague Kapil Sharma provided me with some sample Java code to show you how to do this. Let's walk through it. The first step is to create an Attachment from a file (File1 in this case):

    Attachment attachment = new Attachment();
    attachment.setData(ByteBuffer.wrap(Files.readAllBytes(FileSystems.getDefault().getPath("", "File1"))));
    attachment.setFileName("Attachment.txt");
    

    Then you create a List of the attachments for the case:

    List<Attachment> attachmentSet = new ArrayList<Attachment>();
    attachmentSet.add(attachment);
    

    And upload the attachments:

    AddAttachmentsToSetRequest addAttachmentsToSetRequest = new AddAttachmentsToSetRequest();
    addAttachmentsToSetRequest.setAttachments(attachmentSet);
    AddAttachmentsToSetResult addAttachmentsToSetResult = client.addAttachmentsToSet(addAttachmentsToSetRequest);
    

    With the attachment or attachments uploaded, you next need to get an Id for the set:

    String attachmentSetId = addAttachmentsToSetResult.getAttachmentSetId();
    

    And then you are ready to create the actual support case:

    CreateCaseRequest request = new CreateCaseRequest()
        .withAttachmentSetId(attachmentSetId)
        .withServiceCode(serviceCode)
        .withCategoryCode(categoryCode)
        .withLanguage(language)
        .withCcEmailAddresses(ccEmailAddress)
        .withCommunicationBody(communicationBody)
        .withSubject(caseSubject)
        .withSeverityCode(severityCode);
    
    CreateCaseResult result = client.createCase(request);
    


    Once you have created a support case or two, you probably want to check on their status. The describeCases function lets you do just that. In the past, this function returned a detailed response that included up to 15 MB of attachments. With today's enhancement, you can now ask for a lightweight response that does not include any attachments. If you are calling describeCases to check for changes in status, you can now do this in a more efficient fashion.

    DescribeCasesRequest request = new DescribeCasesRequest();
    
    request.withCaseIdList(caseId);
    request.withIncludeCommunications(false);
    client.describeCases(request);
    

    To learn more about creating and managing cases programmatically, take a look at Programming the Life of an AWS Support Case.

    Available Now
    The new functionality described in this post is available now and you can start using it today! The SDK for PHP, SDK for .NET, SDK for Ruby, SDK for Java, SDK for JavaScript in the Browser, and the AWS Command Line Interface have been updated.

    -- Jeff;

  • Amazon RDS PostgreSQL Update - Service Level Agreement and General Availability

    We launched RDS PostgreSQL at AWS re:Invent in November of 2013 in order to bring the benefits of a managed database service to the PostgreSQL community.

    Customers of all sizes are bringing their mission-critical PostgreSQL workloads to RDS at a rapid clip. Here is a small sample of the applications that have been launched on top of RDS PostgreSQL in the past 7 months:

    Our RDS partners (ESRI, Boundless, and Jaspersoft, to name a few) are helping their customers to take advantage of the power of RDS PostgreSQL.

    SLA and General Availability
    We have experienced strong customer adoption and accumulated plenty of operational experience since the beta launch. We are happy to announce that, effective July 1, 2014, RDS PostgreSQL is included in the Amazon RDS SLA and is generally available.

    The SLA provides for 99.95% availability for Multi-AZ database instances on a monthly basis. If availability falls below 99.95% for a Multi-AZ database instance (which is a maximum of 22 minutes of unavailability for each database instance per month), you will be eligible to receive service credits. The Amazon RDS SLA is designed to give you additional confidence to run the most demanding and mission critical workloads dependably in the AWS cloud.

    Let's take a look at some of the ways that our customers are making use of RDS for PostgreSQL!

    6Wunderkinder - Macstore App of the Year on RDS PostgreSQL
    6Wunderkinder is the creator of Wunderlist, a cross-platform task management application and the winner of the Macstore app of the year for 2013. We spoke to Chad Fowler (CTO) to learn more about their use of RDS. This is what he had to say:

    We have millions of connected and constantly polling clients. This generates massive amounts of usage data in our operational data stores in PostgreSQL. We were able to hand off the database management aspects to RDS and leveraged the beefiest 244GB instance to handle our production workload.

    With Provisioned IOPS we were able to achieve the I/O throughput demanded by our application. Enabling Multi-AZ has given us the peace of mind to rely on RDS for high-availability and focus on what we do best - build a robust task management application.

    Netflix - Open Source Security on RDS PostgreSQL
    Online content provider Netflix is able to support seamless global service by partnering with AWS for services and delivery of content. Cloud Security Architect Jason Chan provided some perspective on their use of RDS for PostgreSQL:

    We recently announced the open source availability of Security Monkey, our solution for monitoring and analyzing the security of our Amazon Web Services configurations. We leveraged RDS PostgreSQL to capture the security data gathered by our solution.

    Building an application backed by production ready PostgreSQL database with Multi-AZ high availability, automated backups, patching and upgrades handled by RDS helped us focus on our development to bring this powerful open source solution to the community.

    Illumina - Building Global Scale Applications on RDS PostgreSQL
    Illumina is a leading developer, manufacturer and marketer of life science tools and integrated systems for large-scale analysis of genetic variation and function. Greg Roberts, Development Lead for the BaseSpace product, talked about RDS PostgreSQL:

    Using BaseSpace, biologists and informaticians can easily and securely analyze, archive and share sequencing data. We have leveraged RDS for PostgreSQL since the first day of launch for our BaseSpace project. We scaled our instances seamlessly as our project grew.

    Automated backups and automated failovers with Multi-AZ provided us the high availability and protection against data loss that our customers expect.

    TigerLogic - Social Media Analytics at Postano on RDS PostgreSQL
    TigerLogic Corporation has been helping companies make better use of their data for more than three decades. Their Postano Platform helps leading brands to create engaging social visualizations. Danny Hyun, Director of Engineering for TigerLogic had a lot to say about RDS for PostgreSQL:

    The Postano Platform helps leading brands create highly engaging social visualizations leveraging the top social networks in use today. Working with RDS PostgreSQL since its launch gave us the confidence we could build a scalable, robust solution for our customers without the hassles of dealing with database management. The Platform processes more than a million social messages daily, and during special occasions - like national and global award shows and sporting events - more than 10 million messages will be stored and read in our RDS PostgreSQL instance in a very short period of time. Leveraging the abilities of the most powerful RDS instance, cr1.8xlarge with Multi-AZ capabilities, we were able to bring our production database workloads onto RDS early on.

    Get Started Today
    You can get started with Amazon RDS for PostgreSQL today. You can launch an instance with a couple of clicks in the AWS Management Console. To learn more, visit the RDS for PostgreSQL page or the RDS Documentation.
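    If you prefer to script the launch, here is a minimal sketch using the AWS SDK for Java; the instance identifier, instance class, storage size, and credentials are illustrative values, not requirements:

    AmazonRDSClient rds = new AmazonRDSClient();
    
    // Launch a Multi-AZ PostgreSQL instance (names and sizes below are examples)
    CreateDBInstanceRequest createRequest = new CreateDBInstanceRequest()
        .withDBInstanceIdentifier("my-postgres-db")
        .withEngine("postgres")
        .withDBInstanceClass("db.m3.large")
        .withAllocatedStorage(100)
        .withMultiAZ(true)
        .withMasterUsername("masteruser")
        .withMasterUserPassword("choose-a-strong-password");
    
    DBInstance instance = rds.createDBInstance(createRequest);
    

    The call returns right away; the new instance spends a few minutes in the creating state before it becomes available for connections.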

    -- Jeff;

  • New Annual Pricing for AWS Marketplace Products

    14 Jul 2014 in AWS Marketplace | permalink

    I'm writing today to tell you about an important new feature for AWS Marketplace. As you may know, the AWS Marketplace lets you find, buy, and immediately start using a wide variety of software for developers and enterprises — 26 categories that span infrastructure, developer tools, and business applications.

    You can now purchase AWS Marketplace products on an annual subscription basis. You make a single upfront payment by selecting the annual option and an Amazon Elastic Compute Cloud (EC2) instance type, and you then have unlimited use of the software on that instance type for the next 12 months.

    Annual subscriptions are available for more than 90 software products from ISVs such as Alert Logic, Barracuda, Citrix, Fortinet, MicroStrategy, Progress Software, Riverbed, Sophos, Tenable, and Vormetric.

    You can purchase an annual subscription with a couple of clicks:

    • Find the desired product, note the annual subscription price, and click Continue.
    • Specify the number of subscriptions that you would like to buy.
    • Click Accept Terms & Launch.

    Benefits of Annual Pricing
    This new option provides you with several important benefits:

    • Predictability - The annual pricing model will allow you to make more accurate forecasts of your software expenses when you are running steady-state workloads.
    • Cost Savings - You can reduce your software costs by 10% to 40% or more when you purchase an annual subscription for select software products on AWS Marketplace. You can continue to pay for usage on an hourly basis in situations where your workload is bursty.
    • Flexibility - You can run the software in any Availability Zone of any AWS Region in which the software is offered. The purchase is, however, specific to a particular EC2 instance type.
    • Ease of Use - You can change your software pricing model without restarting any instances or re-launching any applications.
    • Uniformity - As is the case with hourly pricing, all annual subscription charges will appear on your AWS bill. You don't have to set up any new accounts or share the payment information with a third party.

    The Details
    If you are currently paying for a product on an hourly basis, you can convert to annual subscription pricing by simply buying annual subscriptions as needed. You do not need to restart the instance or re-launch the application.

    Let's walk through the process of purchasing a product through AWS Marketplace. The first step is to search for the product you are interested in. In this case I am looking for the Sophos UTM 9 security gateway:

    Then I review the pricing details:

    And make the purchase:

    There will be a short wait while we spin up the EC2 instance, deploy the software and purchase your annual subscription:

    Once the software is running, you can click the Usage Instructions button to check on the next steps. If you want to add, cancel or change a subscription you can always go to the Your Software Subscriptions page to manage your software:

    If you are an ISV and your software is already in AWS Marketplace, you can offer an annual subscription by submitting annual prices for your product.

    Available Now
    As I mentioned earlier, annual pricing is available in AWS Marketplace now!

    -- Jeff;

  • Amazon SNS TTL (Time to Live) Control

    14 Jul 2014 in Amazon SNS | permalink

    Amazon Simple Notification Service is a fast and flexible push messaging service.

    Many of the messages that you can send with SNS are relevant or valuable for a limited period of time. Sports scores, weather notifications, and "flash sale" announcements can all get stale in a short period of time. In situations where devices are offline or disconnected, flooding the user with outdated messages when communication is reestablished makes for a poor user experience.

    In order to allow you to build great applications that behave well in an environment with real-time information and intermittent connections, Amazon SNS now allows you to set a TTL (Time to Live) value of up to two weeks for each message. Messages that remain undelivered for the given period of time (expressed as a number of seconds since the message was published) will expire and will not be delivered. Read about How to Use Time to Live Values.

    Most of the underlying push services support TTL in some fashion, but each one uses a unique set of APIs and data formats. With today's release, SNS now lets you use a common format and the cross-platform Publish API to define TTL values for iOS, Android, Fire OS, Windows WNS, and Baidu endpoints (Windows MPNS does not support TTL).

    You can set the TTL through the SNS API or the AWS Management Console:
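    For example, here is a sketch of setting a one-hour TTL on a message published to a mobile endpoint with the AWS SDK for Java; the endpoint ARN is assumed to already exist, and the attribute key shown is the one used for Android (GCM) endpoints:

    Map<String, MessageAttributeValue> attributes = new HashMap<String, MessageAttributeValue>();
    attributes.put("AWS.SNS.MOBILE.GCM.TTL", new MessageAttributeValue()
        .withDataType("String")
        .withStringValue("3600"));    // expire undelivered copies after one hour
    
    PublishRequest publishRequest = new PublishRequest()
        .withTargetArn(endpointArn)   // ARN of an existing platform endpoint
        .withMessage("Flash sale - 50% off for the next hour!")
        .withMessageAttributes(attributes);
    
    snsClient.publish(publishRequest);
    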

    This new feature, in conjunction with our recent launch of mobile push support for Windows (phone and desktop) and Baidu Cloud Push (Android), will help you to build helpful, user-friendly applications that reach a broad user base without having to deal with a multitude of messaging providers.

    -- Jeff;

  • Store and Monitor OS & Application Log Files with Amazon CloudWatch

    14 Jul 2014 in Amazon EC2, CloudWatch | permalink

    When you move from a static operating environment to a dynamically scaled, cloud-powered environment, you need to take a fresh look at your model for capturing, storing, and analyzing the log files produced by your operating system and your applications. Because instances come and go, storing log files locally for the long term is simply not appropriate. When running at scale, simply finding storage space for new log files and managing expiration of older ones can become a chore. Further, there's often actionable information buried within those files. Failures, even if they are one in a million or one in a billion, represent opportunities to increase the reliability of your system and to improve the customer experience.

    Today we are introducing a powerful new log storage and monitoring feature for Amazon CloudWatch. You can now route your operating system, application, and custom log files to CloudWatch, where they will be stored in durable fashion for as long as you'd like. You can also configure CloudWatch to monitor the incoming log entries for any desired symbols or messages and to surface the results as CloudWatch metrics. You could, for example, monitor your web server's log files for 404 errors to detect bad inbound links or 503 errors to detect a possible overload condition. You could monitor your Linux server log files to detect resource depletion issues such as a lack of swap space or file descriptors. You can even use the metrics to raise alarms or to initiate Auto Scaling activities.

    Vocabulary Lesson
    Before we dig any deeper, let's agree on some basic terminology! Here are some new terms that you will need to understand in order to use CloudWatch to store and monitor your logs:

    • Log Event - A Log Event is an activity recorded by the application or resource being monitored. It contains a timestamp and raw message data in UTF-8 form.
    • Log Stream - A Log Stream is a sequence of Log Events from the same source (a particular application instance or resource).
    • Log Group - A Log Group is a group of Log Streams that share the same properties, policies, and access controls.
    • Metric Filters - The Metric Filters tell CloudWatch how to extract metric observations from ingested events and turn them into CloudWatch metrics.
    • Retention Policies - The Retention Policies determine how long events are retained. Policies are assigned to Log Groups and apply to all of the Log Streams in the group.
    • Log Agent - You can install CloudWatch Log Agents on your EC2 instances and direct them to store Log Events in CloudWatch. The Agent has been tested on the Amazon Linux AMIs and the Ubuntu AMIs. If you are running Microsoft Windows, you can configure the ec2config service on your instance to send systems logs to CloudWatch. To learn more about this option, read the documentation on Configuring a Windows Instance Using the EC2Config Service.

    Getting Started With CloudWatch Logs
    In order to learn more about CloudWatch Logs, I installed the CloudWatch Log Agent on the EC2 instance that I am using to write this blog post! I started by downloading the install script:

    $ wget https://s3.amazonaws.com/aws-cloudwatch/downloads/awslogs-agent-setup-v1.0.py
    

    Then I created an IAM user using the policy document provided in the documentation and saved the credentials:

    I ran the installation script. The script downloaded, installed, and configured the AWS CLI for me (including a prompt for AWS credentials for my IAM user), and then walked me through the process of configuring the Log Agent to capture Log Events from the /var/log/messages and /var/log/secure files on the instance:

    Path of log file to upload [/var/log/messages]: 
    Destination Log Group name [/var/log/messages]: 
    
    Choose Log Stream name:
      1. Use EC2 instance id.
      2. Use hostname.
      3. Custom.
    Enter choice [1]: 
    
    Choose Log Event timestamp format:
      1. %b %d %H:%M:%S    (Dec 31 23:59:59)
      2. %d/%b/%Y:%H:%M:%S (10/Oct/2000:13:55:36)
      3. %Y-%m-%d %H:%M:%S (2008-09-08 11:52:54)
      4. Custom
    Enter choice [1]: 1
    
    Choose initial position of upload:
      1. From start of file.
      2. From end of file.
    Enter choice [1]: 1
    

    The Log Groups were visible in the AWS Management Console a few minutes later:

    Since I installed the Log Agent on a single EC2 instance, each Log Group contained a single Log Stream. As I specified when I installed the Log Agent, the instance id was used to name the stream:

    The Log Stream for /var/log/secure was visible with another click:

    I decided to track the "Invalid user" messages so that I could see how often spurious login attempts were made on my instance. I returned to the list of Log Groups, selected the stream, and clicked on Create Metric Filter. Then I created a filter that would look for the string "Invalid user" (the patterns are case-sensitive):

    As you can see, the console allowed me to test potential filter patterns against actual log data. When I inspected the results, I realized that a single login attempt would generate several entries in the log file. I was fine with this, and stepped ahead, named the filter and mapped it to a CloudWatch namespace and metric:
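    The same filter can also be created programmatically. Here is a sketch using the AWS SDK for Java; the client variable, group name, filter name, and namespace are illustrative:

    // Create a metric filter that counts "Invalid user" entries (names are examples)
    PutMetricFilterRequest filterRequest = new PutMetricFilterRequest()
        .withLogGroupName("/var/log/secure")
        .withFilterName("InvalidUser")
        .withFilterPattern("\"Invalid user\"")   // quoted pattern matches the exact phrase
        .withMetricTransformations(new MetricTransformation()
            .withMetricNamespace("LogMetrics")
            .withMetricName("InvalidUserCount")
            .withMetricValue("1"));
    
    logsClient.putMetricFilter(filterRequest);
    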

    I also created an alarm to send me an email heads-up if the number of invalid login attempts grows to a suspiciously high level:
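    Alarms can likewise be created through the CloudWatch API. A sketch with the AWS SDK for Java might look like this; the threshold, period, and SNS topic ARN are placeholders tied to the metric created above:

    // Alarm if more than 50 invalid login attempts are seen in a 5-minute period
    PutMetricAlarmRequest alarmRequest = new PutMetricAlarmRequest()
        .withAlarmName("TooManyInvalidLogins")
        .withNamespace("LogMetrics")
        .withMetricName("InvalidUserCount")
        .withStatistic("Sum")
        .withPeriod(300)
        .withEvaluationPeriods(1)
        .withThreshold(50.0)
        .withComparisonOperator("GreaterThanThreshold")
        .withAlarmActions("arn:aws:sns:us-east-1:123456789012:ops-alerts");
    
    cloudWatchClient.putMetricAlarm(alarmRequest);
    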

    With the logging and the alarm in place, I fired off a volley of spurious login attempts from another EC2 instance and waited for the alarm to fire, as expected:

    I also have control over the retention period for each Log Group. As you can see, logs can be retained forever (see my notes on Pricing and Availability to learn more about the cost associated with doing this):
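    Retention can also be set programmatically. Here is a sketch that keeps the events in a Log Group for 30 days; the group name and client variable are illustrative:

    PutRetentionPolicyRequest retentionRequest = new PutRetentionPolicyRequest()
        .withLogGroupName("/var/log/secure")
        .withRetentionInDays(30);
    
    logsClient.putRetentionPolicy(retentionRequest);
    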

    Elastic Beanstalk and CloudWatch Logs
    You can also generate CloudWatch Logs from your Elastic Beanstalk applications. To get you going with a running start, we have created a sample configuration file that you can copy to the .ebextensions directory at the root of your application. You can find the files at the following locations:

    Place CWLogsApache-us-east-1.zip in the folder, then build and deploy your application as normal. Click on the Monitoring tab in the Elastic Beanstalk Console, and then press the Edit button to locate the new resource and select it for monitoring and graphing:

    Add the desired statistic, and Elastic Beanstalk will display the graph:

    To learn more, read about Using AWS Elastic Beanstalk with Amazon CloudWatch Logs.

    Other Logging Options
    You can push log data to CloudWatch from AWS OpsWorks, or through the CloudWatch APIs. You can also configure and use logs through AWS CloudFormation.

    In a new post on the AWS Application Management Blog, Using Amazon CloudWatch Logs with AWS OpsWorks, my colleague Chris Barclay shows you how to use Chef recipes to create a scalable, centralized logging solution with nothing more than a couple of simple recipes.

    To learn more about configuring and using CloudWatch Logs and Metrics Filters through CloudFormation, take a look at the Amazon CloudWatch Logs Sample. Here's an excerpt from the template:

    "404MetricFilter": {
        "Type": "AWS::Logs::MetricFilter",
        "Properties": {
            "LogGroupName": {
                "Ref": "WebServerLogGroup"
            },
            "FilterPattern": "[ip, identity, user_id, timestamp, request, status_code = 404, size, ...]",
            "MetricTransformations": [
                {
                    "MetricValue": "1",
                    "MetricNamespace": "test/404s",
                    "MetricName": "test404Count"
                }
            ]
        }
    }
    

    Your code can push a single Log Event to a Log Stream using the putLogEvents function. Here's a PHP snippet to get you started:

    $result = $client->putLogEvents(array(
        'logGroupName'  => 'AppLog',
        'logStreamName' => 'ThisInstance',
        'logEvents'     => array(
            array(
                'timestamp' => time() * 1000,   // PutLogEvents expects milliseconds
                'message'   => 'Click!',
            )
        ),
        'sequenceToken' => 'string',
    ));
    

    Pricing and Availability
    This new feature is available now in the US East (Northern Virginia) Region and you can start using it today.

    Pricing is based on the volume of Log Entries that you store and how long you choose to retain them. For more information, please take a look at the CloudWatch Pricing page. Log Events are stored in compressed fashion to reduce storage charges; there are 26 bytes of storage overhead per Log Event.

    -- Jeff;

  • AWS Week in Review - July 7, 2014

    14 Jul 2014 | permalink

    Let's take a quick look at what happened in AWS-land last week:

    Monday, July 7
    Tuesday, July 8
    Wednesday, July 9
    Thursday, July 10
    Friday, July 11

    Stay tuned for next week! In the meantime, follow me on Twitter and subscribe to the RSS feed.

    -- Jeff;

  • Amazon Zocalo - Document Storage and Sharing for the Enterprise

    10 Jul 2014 in Zocalo | permalink

    I have been writing this blog for almost ten years! For most of that time, my workflow for writing and reviewing drafts has revolved around my email inbox. I write a draft and then hand it off to the Product Manager for review. The Product Manager, in turn, will hand the draft off to their team and to other stakeholders and reviewers within the company. Given the pace of AWS development, I often have between five and ten drafts underway at any given time. Reconciling overlapping suggestions for edits, sometimes spread across multiple drafts, is tedious and error-prone. It is clear to me (and to my colleagues) that email inboxes are not appropriate venues for efficiently and securely sharing and reviewing complex documents. We decided to "scratch our own itch" and to create a document hub that would relieve the load on our inboxes and also add some structure to the process. Given that our enterprise customers have been asking us to provide them with secure storage and sharing, we decided to build a new product!

    Introducing Zocalo
    Today we are introducing Amazon Zocalo. This is a fully managed, secure document storage and sharing service designed specifically for the needs of the enterprise. As you will see as you review this post, Zocalo provides users with secure access to documents, regardless of their location, device, or formal relationship to the organization. As the owner of a document, you can selectively share it with others (inside or outside of your organization), and you can ask them for feedback, optionally subject to a deadline that you specify.

    Zocalo gives you simple, straightforward access to your documents anytime and from anywhere, regardless of location or device. Zocalo supports versioned review and markup of a multitude of document types, and was designed to allow security-conscious administrators to control and audit access to accounts and documents.

    With centralized user management (optionally linked to your existing Active Directory) and tight control over sharing, Zocalo prevents boundaries from becoming accidentally blurred. All documents are stored in a designated AWS Region and transmitted in encrypted form. You, as the document owner, can even opt to disallow downloading for extra protection.

    You can install the Zocalo client application on your desktop and laptop computers running Windows 7 or Mac OS X (version 10.7 or later) and designate a folder for syncing. Once you do so, saving a file to the folder will automatically upload it to Zocalo across an encrypted connection and sync it to your other devices. You can also access Zocalo from your iPad, Kindle Fire, and Android tablets.

    In the remainder of this post I will take a look at Zocalo from three points of view. You will see what it is like to be a document owner, a reviewer, and a Zocalo administrator.

    Zocalo for Users
    Assuming that you are already known to Zocalo (see Zocalo for Administrators to learn more about accounts and passwords), you can simply visit your organization's Zocalo site and log in. The URL to the site is specific to the organization. Here's where I started:

    And here's what I saw after I logged in:

    I can create folders and sub-folders as needed and I can add documents to the folder by simply dragging and dropping. I uploaded an early draft of this post:

    Zocalo can accommodate files of up to 5 GB. You can upload files of any type; Zocalo will render Office documents, PDFs, images, and text files.

    I shared the draft with Paul and Cynthia (two of my colleagues) and asked them to review it for me:

    Zocalo shows me their status:

    As you may have noticed earlier, I can create folders in Zocalo and store my documents inside. Permissions applied to a folder apply to all of the documents within it, making it easy for me to use folders to organize my documents by project or by team.

    I took a short break to check on my garden (it was doing fine) and waited for some feedback. I clicked on Activity to see how things were going. Paul and Cynthia had both left comments within a few minutes (we work at lightning speed at Amazon):

    Then I clicked on Feedback to see what they had to say about my first draft. The feedback is organized by version, and is further broken down into an overall comment and individual items, grouped by page:

    Then I clicked to see what Paul had to say:

    As you can see, clicking on a piece of feedback highlights the target area in the document and also connects it to the comment. Each reviewer has their own, unique color code as well.

    The next step for me would be to read and digest all of the comments, edit the document, and upload another version for further review using the menu at the top:

    If the document is in Microsoft Word format, I can also download a version that includes all of the comments entered by the reviewers.

    There's a lot more to cover, but I'm just getting started and this post is already kind of long! You can try this out for yourself through the Zocalo Limited Preview.

    Zocalo for Reviewers
    Now I'd like to take a look at the sharing and reviewing process from the other side of the fence. I can easily see the documents that have been shared with me for review:

    I can click on any document and open it up to read and comment on it. Zocalo shows me how to give feedback in a handy popup:

    I simply highlight any text or any region of the document and enter my feedback:

    When I have finished my review, I send the comments to the owner of the document with a click of the Send button:

    As you saw earlier, the owner of the document will be able to see my edits and will (with any luck) use them to produce another version.

    Once again, I have just scratched the surface of the document sharing and review features that are available in Zocalo. Let's take a look at the administrative side of Zocalo!

    Zocalo for Administrators
    Each Zocalo account must have at least one administrator. The administrator is responsible for creating and managing user accounts, setting up security policies, managing storage limits, and generating auditing and activity reports.

    As the administrator in charge of setting up and running Zocalo, you will begin with the AWS Management Console:

    You can choose Quick Start to get going quickly or Full Setup to connect to your on-premises user directory.

    I chose the Quick Start and entered a few parameters to get started:

    Minutes later my site was all set up and I was ready to go, with notification via a convenient email:

    I set up a password and became the official administrator of my very own Zocalo site!

    I logged in and explored the Dashboard:

    The Dashboard allows me to set the amount of storage per Zocalo user. By default, new users get 200 GB of storage for free. The administrator can choose to allow additional storage, which is billed on a per-GB, per-month basis.

    I can control the level of document sharing for the site — unlimited external sharing, sharing to a short list of domains, or no external sharing:

    Here's how I enter a list of domains:

    I can also manage the invitation model. Users can be allowed to invite others within any domain or in a short list of domains, or this entire feature can be restricted to users with administrator privileges:

    I can invite people to become new Zocalo users:

    Once my Zocalo site has some users, I can monitor and control their storage utilization, and see an audit log of document activity.

    Pricing and Availability
    You can join the Zocalo Limited Preview to experience Zocalo on your own.

    Zocalo was designed to work smoothly with Amazon WorkSpaces. Each WorkSpaces user has access to 50 GB of Zocalo storage, the Zocalo web application, the tablet apps, and document review at no additional charge. The Zocalo administrator can upgrade these users to 200 GB of storage for just $2 per user per month.

    If you don't use Amazon WorkSpaces, Zocalo is priced at $5 per user per month, including 200 GB of storage for each user. Additional storage is billed on a per-GB, per-month basis using a tiered pricing model. See the Zocalo Pricing page for more info.

    Zocalo is currently available in the US East (Northern Virginia), US West (Oregon), and Europe (Ireland) Regions. All documents for a particular Zocalo site are stored in encrypted form within the chosen Region.

    -- Jeff;

  • New AWS Mobile Services

    The Mobile App Development Challenge
    We want to make it easier for you to build sophisticated cloud-powered applications for mobile devices! User expectations are at an all-time high: they want to run your app on the device of their choice, they want it to be fast and efficient, and they want it to be secure. Here are some of the challenges that you will face as you strive to meet these expectations:

    • Authenticate Users - Manage users and identity providers.
    • Authorize Access - Securely access cloud resources.
    • Synchronize Data - Sync user preferences across devices.
    • Analyze User Behavior - Track active users and engagement.
    • Manage Media - Store and share user-generated photos and other media items.
    • Deliver Media - Automatically detect mobile devices and deliver content quickly on a global basis.
    • Send Push Notifications - Keep users active by sending messages reliably.
    • Store Shared Data - Store and query NoSQL data across users and devices.
    • Stream Real-Time Data - Collect real-time clickstream logs and react quickly.

    Meeting the Challenge
    Today we are introducing three new AWS products and services to help you to meet these challenges.

    Amazon Cognito simplifies the task of authenticating users and storing, managing, and syncing their data across multiple devices, platforms, and applications. It works online or offline, and allows you to securely save user-specific data such as application preferences and game state. Cognito works with multiple existing identity providers and also supports unauthenticated guest users.

    Amazon Mobile Analytics will help you to collect, visualize, and understand app usage, engagement, and revenue at scale. Analytics can be collected via the AWS Mobile SDK or a set of REST APIs. Metrics are available through a series of reporting tabs in the AWS Management Console.

    The updated and enhanced AWS Mobile SDK is designed to help you build high quality mobile apps quickly and easily. It provides access to services specifically designed for building mobile apps, mobile-optimized connectors to popular AWS data streaming, storage and database services, and access to a full array of other AWS services. This SDK also includes a common authentication mechanism that works across all of the AWS services, client-side data caching, and intelligent conflict resolution. The SDK can be used to build apps for devices that run iOS, Android, and Fire OS.

    Taken as a whole, these services will help you to build, ship, run, monitor, optimize, and scale your next-generation mobile applications for use on iOS, Android, and Fire OS devices. The services are part of the full lineup of AWS compute, storage, database, networking, and analytics services, which are available to you and your users from AWS Regions located in the United States, South America, Europe, and Asia Pacific.

    Here is how the new and existing AWS services map to the challenges that I called out earlier:

    Let's take a closer look at each of the new services!

    Amazon Cognito
    Amazon Cognito helps you identify unique users and retrieve temporary, limited-privilege AWS credentials, and it also offers data synchronization services.

    As you might know, an Identity Provider is an online service that is responsible for issuing identification information for users that would like to interact with the service or with other cooperating services. Cognito is designed to interact with three major identity providers (Amazon, Facebook, and Google). You can take advantage of the identification and authorization features provided by these services instead of having to build and maintain your own. You no longer have to worry about recognizing users or storing and securing passwords when you use Cognito.

    Cognito also supports guest user access. In conjunction with AWS Identity and Access Management and with the aid of the AWS Security Token Service, mobile users can securely access AWS resources and app features, and even save data to the cloud, without having to create an account or log in. If the user ultimately decides to log in, Cognito will merge the guest data and identification information into the authenticated identity. Here's how it all fits together:

    Here's what you need to do to get started with Cognito:

    1. Sign up for an AWS Account.
    2. Register your app on the identity provider's console and get the app ID or token. This is an optional step; you can also choose to use only unauthenticated identities.
    3. Create a Cognito identity pool in the Management Console.
    4. Integrate the AWS Mobile SDK; store and sync data in a dataset.

    You can create and set up the identity pool in the Console:

    Once your application is published and in production, you can return to the Console and view the metrics related to the pool:

    Let's talk about Cognito's data synchronization facility! The client SDK manages a local SQLite store so that the application can work even when it is not connected. The store functions as a cache and is the target of all read and write operations. Cognito's sync facility compares the local version of the data to the cloud version, and pushes up or pulls down deltas as needed. By default, Cognito assumes that the last write wins. You can override this and implement your own conflict resolution algorithm if you'd like. There is a small charge for each sync operation.

    Each identity within a particular identity pool can store multiple datasets, each comprised of multiple key/value pairs:

    Each dataset can grow to 1 MB and each identity can grow up to 20 MB.

    You can create or open a dataset and add key/value pairs with a couple of lines of code:

    DataSet *dataset = [syncClient openOrCreateDataSet:@"myDataSet"];
    NSString *value = [dataset readStringForKey:@"myKey"];
    [dataset putString:@"my value" forKey:@"myKey"];
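    
    For Android and Fire OS, a roughly equivalent sketch with the Cognito sync client in the AWS Mobile SDK for Android might look like the following; the manager variable, dataset name, and key are illustrative, and the exact class and method names should be confirmed against the SDK documentation:

    // Assumed: an initialized CognitoSyncManager named syncClient
    Dataset dataset = syncClient.openOrCreateDataset("myDataSet");
    dataset.put("myKey", "my value");
    String value = dataset.get("myKey");
    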
    

    Charges for Cognito are based on the total amount of application data stored in the cloud and the number of sync operations performed on the data. The Amazon Cognito Free Tier provides you with 10 GB of sync store and 1,000,000 sync operations per month for the first 12 months of usage. Beyond that, sync store costs $0.15 per GB of storage per month and $0.15 for each 10,000 sync operations.

    Take a look at the Cognito documentation (Android and iOS) to learn more about this and other features.

    Mobile Analytics
    Once you have built your app, you need to track usage and user engagement, improving and fine-tuning the app and the user interaction in response to user feedback. The Amazon Mobile Analytics service will give you the information and the insights that you need to have in order to do this.

    Using the raw data ("events") collected and uploaded by your application, Amazon Mobile Analytics automatically calculates and updates the following metrics:

    • Daily Active Users (DAU), Monthly Active Users (MAU), and New Users
    • Sticky Factor (DAU divided by MAU)
    • Session Count and Average Sessions per Daily Active User
    • Average Revenue per Daily Active User (ARPDAU)
    • Average Revenue per Paying Daily Active User (ARPPDAU)
    • Day 1, 3, and 7 Retention
    • Week 1, 2, and 3 Retention
    • Custom Events

    In order for your application to be able to upload events, you create an identity pool and use the AWS Mobile SDK (or the REST API) to call the appropriate reporting functions. There are three types of events:

    • System - Start or end of a session
    • In-App Purchase - Transaction.
    • Custom - Specific actions within your application.

    When you use the AWS Mobile SDK, the system events denoting the start and end of each session are sent automatically. Your application code is responsible for sending the other types of events at the appropriate time.

    All of the metrics are available from within the AWS Management Console, broken down by tab:

    The main page includes plenty of top-level information about your app and your users:

    You can click on a tab to learn even more:

    You can also filter by application, date range, and/or platform, as needed:

    Pricing is based on the number of events that your app generates each month. The first 100 million events are free; beyond that, you will be charged $1.00 for each million events.

    AWS Mobile SDK
    Last, but definitely not least, I would like to say a few words about the updated and expanded AWS Mobile SDK! This SDK makes it easy for you to build applications for devices that run the iOS, Android, or Fire OS operating systems.

    Here are some of the new features:

    Object Mapper - A new DynamoDB Object Mapper for iOS makes it easy for you to access DynamoDB from your mobile apps. It enables you to map your client-side classes to Amazon DynamoDB tables without having to write the code to transform objects into tables and vice versa. The individual object instances map to items in a table, and they enable you to perform various create, read, update, and delete (CRUD) operations on items, and to execute queries.

    S3 Transfer Manager - Amazon S3 Transfer Manager makes it easy for you to upload and download files from Amazon S3 while optimizing for performance and reliability. Now you can pause, resume, and cancel file transfers using a simple API. We have rebuilt the iOS S3TransferManager to utilize BFTask. It has a clean interface, and all of the operations are now asynchronous.

    Android and Fire OS Enhancements - In addition to support for the services announced in this post, the SDK now supports Amazon Kinesis Recorder for reliable recording of data streams on mobile devices, along with support for the most recent SQS, SNS, and DynamoDB features. It also allows active requests to be terminated by interrupting the proper thread.

    iOS / Objective-C Enhancements - The SDK supports ARC and BFTask, and conforms to the best practices for the use of Objective-C. It also supports Cocoapods, and can be accessed from Apple's new Swift language.

    -- Jeff;

    PS - Special thanks to my friend and colleague Jinesh Varia for creating the screen shots and diagrams in this post!
