AWS Official Blog

  • Coming Soon – AWS SDK for Go

    by Jeff Barr | Developer Tools, Go |

    My colleague Peter Moon wrote the guest post below and asked me to get it out to the world ASAP!

    — Jeff;

    AWS currently offers SDKs for seven different programming languages – Java, C#, Ruby, Python, JavaScript, PHP, and Objective C (iOS). We closely follow the language trends among our customers and the general software community. Since its launch, the Go programming language has had a remarkable growth trajectory, and we have been hearing customer requests for an official AWS SDK with increasing frequency. We listened and decided to deliver a new AWS SDK to our Go-using customers.

    As we began our research, we came across aws-go, an SDK from Stripe. This SDK, principally authored by Coda Hale, was developed using model-based generation techniques very similar to how our other official AWS SDKs are developed. We reached out and began discussing possibly contributing to the project, and Stripe offered to transfer ownership of the project to AWS. We gladly agreed to take over the project and to turn it into an officially supported SDK product.

    The AWS SDK for Go will initially remain in its current experimental state while we gather the community’s feedback to harden the APIs, increase the test coverage, and add some key features, including request retries, checksum validation, and hooks into request lifecycle events. During this time, we will be developing the SDK in a public GitHub repository. We invite our customers to follow along with our progress and to join the development effort by submitting pull requests and sending us feedback and ideas via GitHub Issues.
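    As an aside, the request-retry behavior mentioned above generally follows an exponential-backoff-with-jitter pattern. Here is a minimal, generic sketch of that pattern (in Python, for compactness; this is an illustration of the technique, not the SDK's actual implementation):

```python
import random
import time

def with_retries(call, max_attempts=3, base_delay=0.1):
    """Invoke call(), retrying transient failures with exponential backoff plus jitter."""
    for attempt in range(max_attempts):
        try:
            return call()
        except ConnectionError:
            if attempt == max_attempts - 1:
                raise  # out of attempts; surface the error to the caller
            # Wait base_delay * 2^attempt, randomized to avoid synchronized retries.
            time.sleep(base_delay * (2 ** attempt) * random.uniform(0.5, 1.5))

# A flaky call that succeeds on its third attempt.
attempts = []
def flaky():
    attempts.append(1)
    if len(attempts) < 3:
        raise ConnectionError("transient network failure")
    return "ok"

result = with_retries(flaky, base_delay=0.01)  # → "ok", after two retries
```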

    We’d like to thank our friends at Stripe for doing an excellent job with starting this project and helping us bootstrap this new SDK.

    Peter Moon, Senior Product Manager

  • Zend Server 8 – New Monitoring and Performance Tools

    by Jeff Barr | Developer Tools, PHP |

    Late last week I met with Andi Gutmans and Michel Gerin of Zend Technologies. Because Andi is a self-described “coffee snob,” we headed directly to the nearby Dilettante Cafe for an in-depth chat. It was interesting to hear how they had grown from an organization focused on the PHP language to one that was taking on the broader mission of scalability, monitoring, and visibility into the run-time state of web and mobile applications that happened to be built using PHP. In fact, the only mention of PHP came when I remarked that we had spent no time discussing it. Andi spent more time telling me about his quest for the perfect microfoam than he did about language features!

    Zend Server Update
    We discussed their work on Zend Server, including the freshly released Version 8. As I described in a post that I wrote last year, Zend Server (and the crucial Z-Ray technology) gives developers access to in-context feedback on the behavior of the application that they are building, testing, or running. Z-Ray provides developers with information about page requests, execution time, peak memory usage, events, PHP errors & warnings, SQL query execution, and variables.

    I learned that Zend Server also has a number of other features that help applications run quickly and efficiently. These were not the subject of our chat and I didn’t take good notes, but we talked about code & data caching, job queues, and job scheduling. We also discussed cluster management and some new AWS integration. Zend Server can now be launched via AWS CloudFormation. It even includes a CloudFormation template generator to make this process simpler and totally repeatable:

    Zend Server knows how to deploy code from Amazon Simple Storage Service (S3) (it can also pull from Git or deploy Zend Packages, also known as ZPKs).

    Z-Ray Demo
    Andi was eager to demo the newest Z-Ray features for me. He fired up his laptop (a stylish MacBook Air), got the Wi-Fi password from the barista, and connected to his demo instance.  He explained to me that they created an extensibility model for Z-Ray and used it to create a series of extensions for popular applications and frameworks. Each extension has intimate knowledge of the programming model, data structures, and database queries built and referenced by the associated environment.  This intimacy allows each extension to display the most important elements of each environment in a manner that will be familiar and comfortable to developers who are already versed in the environment.

    Out of the box (to use that tired term left over from the bygone era of shrink-wrap software), Z-Ray includes extensions for the Magento, Drupal, and WordPress application platforms. It also includes extensions for the Zend Framework, Symfony, and Laravel application frameworks. These extensions are available from the Official Zend Server Extensions repo on GitHub. Here’s an example of Z-Ray in action. It is aware that it is accessing the database queries initiated by a WordPress application and the display is customized accordingly:

    The extension API is open and documented. Third-party (non-Zend) developers have already created extensions for other environments including Doctrine 2 (read more about the extension).

    Zend Server on AWS
    The Developer and Professional editions of Zend Server are available on the AWS Marketplace and you can launch a free trial of either one with a couple of clicks:


    Both editions include a bunch of features that are intended to make Zend Server mesh smoothly with existing AWS environments and applications. Here are some of the features:

    • A new, JSON-based format for the EC2 user data that is passed to each newly launched instance. This data is used to configure the Zend Server.
    • A Z-Ray extension for the AWS API.
    • Custom script actions on startup.
    • Control over the dissemination of AWS access and secret keys to instances.
    • Control over cluster membership.
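    To give a feel for the first item in the list, here is a sketch of launching an instance with JSON user data via boto3. The field names in the JSON are placeholders of my own (Zend documents the actual user-data schema), and the boto3 call is shown in comments since it requires AWS credentials:

```python
import json

# Hypothetical field names -- Zend Server's actual user-data schema is
# documented by Zend; these keys are placeholders for illustration only.
zend_user_data = json.dumps({
    "deploymentType": "cluster",                     # placeholder key
    "adminPassword": "change-me",                    # placeholder key
    "startupScript": "s3://my-bucket/bootstrap.sh",  # placeholder key
})

# With boto3 available and AWS credentials configured, the JSON would be
# passed as EC2 user data at launch time:
#   ec2 = boto3.client("ec2")
#   ec2.run_instances(ImageId="ami-12345678", MinCount=1, MaxCount=1,
#                     InstanceType="m3.medium", UserData=zend_user_data)
```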

    To learn more about these features, watch Zend’s new video, Getting Started with Zend Server and Z-Ray on AWS.

    We wrapped up our meeting, recycled our mugs (this is Seattle, after all), and they headed back to Sea-Tac airport for their flight back to Silicon Valley!


    PS – I love to learn and write about cool uses of AWS. Please track me down (a search for ‘contact jeff barr’ is a good start) and let me know when you are coming to Seattle!

  • Amazon WorkMail – Managed Email and Calendaring in the AWS Cloud

    by Jeff Barr | Amazon WorkDocs, Amazon WorkMail, AWS Directory Service, AWS IAM, Key Management Service, Zocalo |

    Have you ever had to set up, run, and scale an email server? While it has been a long time since I have done this on my own, I do know that it is a lot of work! Users expect to be able to access their email from the application, device, or browser of their choice. They want to be able to send and receive large files (multi-megabyte video attachments and presentations often find their way into my inbox). Email administrators and CSOs are looking for robust security measures.

    Paradoxically, email is both mission-critical and pedestrian. Everyone needs it to work, but hardly anyone truly understands what it takes to make this happen!

    Introducing WorkMail
    Today I would like to introduce Amazon WorkMail. This managed email and calendaring solution runs in the Cloud. It offers a unique set of security controls and works with your existing desktop and mobile clients (there’s also a browser-based interface). If your organization already has a directory of its own, WorkMail can make use of it via the recently introduced AWS Directory Service. If not, WorkMail will use Directory Service to create a directory for you as part of the setup process.

    WorkMail was designed to work with your existing PC and Mac-based Outlook clients including the prepackaged Click-to-Run versions. It also works with mobile clients that speak the Exchange ActiveSync protocol.

    Our 30-day free trial will give you the time and the resources to evaluate WorkMail in your own environment. As part of the trial, you can serve up to 25 users, with 50 gigabytes of email storage per user. In order to help you to move your organization to WorkMail, we also provide you with a mailbox migration tool.

    WorkMail makes use of a number of AWS services including Amazon WorkDocs (formerly known as Amazon Zocalo), the Directory Service, AWS Identity and Access Management (IAM), AWS Key Management Service (KMS), and Amazon Simple Email Service (SES).

    WorkMail Features
    You can set up WorkMail for a new organization in a matter of minutes. As I mentioned earlier, you can use your existing directory or you can have WorkMail set one up for you. You can send and receive email through your existing domain name by adding a TXT record (for verification of ownership) and an MX record (to route the mail to WorkMail) to your existing DNS configuration.
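    In Route 53 terms, those two records might be added with a change batch like the following sketch. The TXT value and MX target below are placeholders; the real values are shown in the WorkMail console when you add your domain:

```python
# A Route 53 change batch adding the two DNS records WorkMail needs:
# a TXT record to prove domain ownership and an MX record to route mail.
change_batch = {
    "Changes": [
        {
            "Action": "UPSERT",
            "ResourceRecordSet": {
                "Name": "example.com.",
                "Type": "TXT",
                "TTL": 300,
                # Placeholder -- the real token comes from the WorkMail console.
                "ResourceRecords": [{"Value": '"amazonses:verification-token"'}],
            },
        },
        {
            "Action": "UPSERT",
            "ResourceRecordSet": {
                "Name": "example.com.",
                "Type": "MX",
                "TTL": 300,
                # Placeholder target; the console shows the actual endpoint.
                "ResourceRecords": [{"Value": "10 inbound-smtp.us-east-1.amazonaws.com."}],
            },
        },
    ]
}

# With boto3 and credentials configured, the batch would be applied via:
#   route53 = boto3.client("route53")
#   route53.change_resource_record_sets(HostedZoneId="ZONEID",
#                                       ChangeBatch=change_batch)
```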

    As a WorkMail user, you have access to all of the usual email features including calendaring, calendar sharing, tasks, contact lists, distribution lists, resource booking, public folders, and out-of-office (OOF) messages.

    The browser-based interface has a full array of features. It works with a wide variety of browsers including Firefox, Chrome, Safari, and newer (IE 9 and higher) versions of Internet Explorer. The interface gives you access to email, calendars, contacts, and tasks. You can access shared calendars and public folders, book resources, and manage your OOF.

    WorkMail was designed to work in today’s data-rich, email-intensive environments. Each inbox has room for up to 50 gigabytes of messages and attachments. Messages can range in size all the way up to 30 megabytes.

    As part of this launch we are renaming Amazon Zocalo to Amazon WorkDocs! WorkMail can be used in conjunction with WorkDocs for simple, controlled distribution of documents that contain sensitive information.

    WorkMail Security Controls

    Let’s talk about security for a bit. WorkMail includes a number of security features and controls that will allow it to meet the needs of many types of organizations. Here’s an overview of some of the most important features and controls:

    • Location Control – The WorkMail administrator can choose to create mailboxes in any supported AWS region. All mail and other data will be stored within the region and will not be transferred to any other region. During the Preview, WorkMail will be supported in the US East (Northern Virginia) and Europe (Ireland) regions, with more to follow over time.
    • S/MIME – Data in transit to and from Outlook clients and certain iPhone and iPad apps is encrypted using S/MIME. Data in transit to other clients is encrypted using SSL.
    • Stored Data Encryption – Data at rest (messages, contacts, attachments, and metadata) is encrypted using keys supplied and managed by KMS.
    • Message Scanning – Incoming and outgoing email messages and attachments are scanned for malware, viruses, and spam.
    • Mobile Device Policies & Actions – The WorkMail administrator can selectively require encryption, password protection, and automatic screen locking for mobile devices. The administrator can also remotely wipe a lost or mislaid mobile device if necessary.

    Getting Started with WorkMail
    Let’s walk through WorkMail while wearing our email administrator hats! I need to create a WorkMail organization. In most cases, I would use a single organization for an entire company.

    I start by opening up the AWS Management Console and choosing WorkMail:

    I click the Get started button. At this point I can choose between a Quick setup (WorkMail will create a new directory for me)  or a Custom setup (WorkMail will use an existing directory that I configure):

    I’ll go for the quick setup today. I need to pick a unique name for my organization:

    This will automatically create a directory and then create and initialize my organization. It will also initiate the Amazon SES domain verification process for the organization’s domain and create a set of DKIM keys so that I can send DKIM-signed mail. The entire process takes 10 to 20 minutes and requires no additional work on my part. The organization’s status will start out as creating and will transition to active before too long:

    After the creation process completes I can begin to add WorkMail users to my organization (if I had used an existing directory in the previous step I could simply select them from a list at this point). I’ll begin by adding myself:

    Then I specify the email address and password. If I have associated one or more domain names with the organization, I can use any of those names as the basis for the email address:

    I can browse all of the organization’s users:

    I can also create groups, attach domains, and manage mobile device policies, all from the Console.

    The WorkMail Browser-Based Interface
    Let’s take a look at the browser-based interface to WorkMail. Here’s my inbox:

    And my calendar:

    This is just a sampling of the features that are available in WorkMail.

    Pricing and Availability
    We are launching a Preview of Amazon WorkMail in the US East (Northern Virginia) and Europe (Ireland) regions today and you can sign up for the Preview if you are interested in joining.

    After the 30-day free trial (25 users and 50 gigabytes of storage per user), pricing is on a per-user, pay-as-you-go basis. You will be charged $4 per month for a 50 gigabyte WorkMail mailbox, or $6 per month for a bundle that includes WorkMail and WorkDocs. There is no separate charge for the use of SES to send messages.


  • Amazon DynamoDB Update – Online Indexing & Reserved Capacity Improvements

    by Jeff Barr | Amazon DynamoDB |

    Developers all over the world are using Amazon DynamoDB to build applications that take advantage of its ability to provide consistent low-latency performance. The developers that I have talked to enjoy the flexibility provided by DynamoDB’s schemaless model, along with the ability to scale capacity up and down as needed.  They also benefit from the DynamoDB Reserved Capacity model in situations where they are able to forecast their need for read and write throughput ahead of time.

    A little over a year ago we made DynamoDB more flexible by adding support for Global Secondary Indexes. This important feature moved DynamoDB far beyond its roots as a key-value store by allowing lookups on attributes other than the primary key.

    Today we are making Global Secondary Indexes even more flexible by giving you the ability to add and delete them from existing tables on the fly.

    We are also making it easier for you to purchase Reserved Capacity directly from the AWS Management Console. As part of this change to a self-service model, you can now purchase more modest amounts of Reserved Capacity than ever before.

    Let’s zoom in for a closer look!

    Global Secondary Indexes on the Fly
    Up until now you had to define the Global Secondary Indexes for each of your DynamoDB tables at the time you created the table. This static model worked well in situations where you fully understood your data model and had a good sense of the kinds of queries that you needed to use to build your application.

    DynamoDB’s schemaless model means that you can add new attributes to an existing table by simply storing them. Perhaps your original table stored a first name, a last name, and an email address. Later, you decided to make your application location-aware by adding a zip code. With today’s release you can add a Global Secondary Index to the existing table. Even better, you can do this without taking the application offline or impacting the overall throughput of the table.

    Here’s how you add a new index using the AWS Management Console. First, select the table and click on Create Index:

    Then enter the details (you can use a hash key or a combination of a hash key and a range key):

    The index will be created and ready to go before too long (the exact time depends on the number of items in the table and the amount of provisioned capacity). You can also delete indexes that you no longer need. All of this functionality is also available through DynamoDB’s UpdateTable API.
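    For the API-minded, here is a sketch of the UpdateTable parameters that would add a zip-code index to an existing table. The table, attribute, and index names are illustrative, and the boto3 call itself is shown in comments since it requires AWS credentials:

```python
# Parameters for DynamoDB's UpdateTable API to add a Global Secondary Index
# to an existing table on the fly. Names below are illustrative placeholders.
params = {
    "TableName": "Customers",
    # Any new key attribute used by the index must be declared here.
    "AttributeDefinitions": [{"AttributeName": "ZipCode", "AttributeType": "S"}],
    "GlobalSecondaryIndexUpdates": [
        {
            "Create": {
                "IndexName": "ZipCode-index",
                "KeySchema": [{"AttributeName": "ZipCode", "KeyType": "HASH"}],
                "Projection": {"ProjectionType": "ALL"},
                "ProvisionedThroughput": {
                    "ReadCapacityUnits": 5,
                    "WriteCapacityUnits": 5,
                },
            }
        }
    ],
}

# With boto3 and credentials configured, the call would be:
#   dynamodb = boto3.client("dynamodb")
#   dynamodb.update_table(**params)
```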

    There is no extra charge for this feature. However, you may need to provision additional write throughput in order to allow for the needs of the index creation process. You’ll pay the usual DynamoDB price for storage of the Global Secondary Indexes that you create.

    Purchasing Reserved Capacity
    DynamoDB’s unique provisioned capacity model makes it easy for you to build applications that can scale to any desired level of throughput. Instead of having to worry about adding hardware, tuning software, or rearchitecting your application as traffic grows, you can simply provision additional read or write capacity. The provisioning model even allows you to add capacity in anticipation of high traffic (perhaps your application is busiest during local business hours) and to remove it when it is not needed. This model allows you to create a cost structure that closely mirrors actual usage of your application and avoids unnecessary charges for idle resources.

    In situations where you have enough confidence in your usage model and your predictions for growth over time, you can reduce your DynamoDB costs even more by purchasing Reserved Capacity for a one- or three-year term. After you pay the upfront fee, you will be billed monthly for the amount of capacity that you purchase. By purchasing capacity up front, you will save 53% (one-year term) or 76% (three-year term) over the regular hourly rates.
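    A quick bit of arithmetic makes the discounts concrete: saving 53% or 76% means paying roughly 47% or 24% of the regular rate for the same capacity:

```python
def reserved_cost_ratio(savings_pct):
    """Fraction of the regular hourly rate paid under a reservation
    with the stated percentage savings."""
    return 1 - savings_pct / 100

one_year = reserved_cost_ratio(53)    # roughly 0.47 of the regular rate
three_year = reserved_cost_ratio(76)  # roughly 0.24 of the regular rate
```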

    In order to make Reserved Capacity accessible to more DynamoDB users, we have made two important changes. First, we have simplified the purchase process and made it accessible from within the Console. Second, we have reduced the minimum purchase to just 100 read or write capacity units. To purchase Reserved Capacity within a particular AWS region, open up the Console, choose the region, and click on the Reserved Capacity button:

    Select the amount of read and/or write capacity that you need (in units of 100), choose a term, and fill in your email address:

    Your purchases are visible in the Console:

    You can read more about this feature in the recent post, On DynamoDB Provisioning: Simple, Flexible, and Affordable, in the AWS Startup Collection.

    From our Customers
    AWS customer Eddie Dingels (Lead Architect for Earth Networks) is already taking advantage of on-the-fly indexing and the new pricing model! In his words:

    With online indexing, we can re-index tables to run new queries whenever we want. DynamoDB handles consistently changing the index while taking live traffic without a performance impact even on large data sets.

    He’s also saving money:

    DynamoDB has a very simple and innovative approach to database provisioning, it is truly pay as you go. Reserved capacity ends up dropping DynamoDB throughput costs by up to 76%, and today’s announcement makes it easier than ever for us to perform incremental purchases as we grow.

    The new Reserved Capacity pricing model is available today in all regions. Online indexing is available today in the Asia Pacific (Tokyo), Asia Pacific (Singapore), Europe (Ireland), US East (Northern Virginia), US West (Oregon), and US West (Northern California) regions.  We expect to make it available in the Europe (Germany), South America (Brazil), Beijing (China), and AWS GovCloud (US) regions within a week or so.


    PS  – Some of our developers put together a new article to show you how to Build a Mars Rover Application With DynamoDB. The code in this article takes advantage of the new JSON support and is a great way to exercise DynamoDB’s expanded free tier.

  • AWS Week in Review – January 19, 2015

    by Jeff Barr | Week in Review |

    Let’s take a quick look at what happened in AWS-land last week:

    Monday, January 19
    Tuesday, January 20
    Wednesday, January 21
    Thursday, January 22
    Friday, January 23

    Coming Soon:

    Stay tuned for next week! In the meantime, follow me on Twitter and subscribe to the RSS feed.


  • New Action Links for AWS Trusted Advisor

    by Jeff Barr | AWS Trusted Advisor |

    AWS Trusted Advisor inspects your AWS environment and looks for opportunities to save money, increase performance & reliability, and close security gaps. Today we are enhancing Trusted Advisor with the addition of Action Links. You can now click on an item in a Trusted Advisor alert to navigate to the appropriate part of the AWS Management Console. For example, I ran Trusted Advisor on my own AWS account and it displayed the following alert:

    I decided to fix the problem and activated an Action Link to head on over to the RDS section of the Console. From there I right-clicked to add a Read Replica:

    These new links are available now and you can click on them today!

    For Tool Vendors
    If you build applications that link (or could link) to the Console, you can use the same URLs. Here are a few to get you started (all of the links are relative to the base URL of the console):

    • EC2 Reserved Instance Purchase – ec2/home?region={region}#ReservedInstances
    • EC2 Instances – ec2/home?region={region}#Instances:search={search_string}
    • Elastic Load Balancer – ec2/home?region={region}#LoadBalancers:search={search_string}
    • EBS Volumes – ec2/home?region={region}#Volumes:search={search_string}
    • Elastic IP Addresses – vpc/home?region={region}#eips:filter={filter_string}
    • RDS Database Instances – rds/home?#dbinstance:id=dbInstanceId
    • Auto Scaling Configuration – ec2/autoscaling/home?#LaunchConfigurations:id=LaunchConfigurationName

    There is a chance that these links will change in the future as the console continues to evolve. If you decide to make use of them, please plan for that eventuality in your application.
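    If you would like to assemble these links programmatically, here is a small Python sketch for the EC2 Instances pattern. The base console URL is an assumption on my part, and the pattern itself may change as the console evolves, so treat this as a convenience rather than a stable API:

```python
from urllib.parse import quote

# Assumed base URL for the AWS Management Console.
CONSOLE_BASE = "https://console.aws.amazon.com/"

def ec2_instances_link(region, search_string=""):
    """Build a deep link to the EC2 Instances view, filtered by a search string."""
    return (CONSOLE_BASE +
            "ec2/home?region={0}#Instances:search={1}".format(region, quote(search_string)))

link = ec2_instances_link("us-east-1", "web-server")
# → "https://console.aws.amazon.com/ec2/home?region=us-east-1#Instances:search=web-server"
```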


  • Deploy a Hybrid Storage Solution Using Avere’s Edge Filer and Amazon S3

    by Jeff Barr | Amazon S3, Enterprise |

    Enterprise-scale AWS customers often ask me for advice on how to connect their existing on-premises compute and storage infrastructure to the AWS cloud.  They are not interested in all-or-nothing solutions that render their existing IT model obsolete. Instead, they want to enhance the model by taking advantage of the security, scale, performance, and cost-effectiveness of the cloud.

    One of the most interesting connection points is storage. With enterprise storage requirements growing at a rapacious pace, the seemingly limitless capacity of the cloud, coupled with the pay-as-you-go cost model, becomes a very attractive option.

    In order to meet this need, we have worked with AWS storage competency partner Avere Systems to create a solution bundle. This bundle will enable enterprises to quickly deploy and evaluate an end-to-end hybrid storage “on-ramp” with a minimal investment.

    Special Offer from Avere and AWS
    As part of a limited-time offer, new Avere customers in the US and the UK who meet the qualifications can purchase a three-pack (for high availability) of Avere’s FXT 3200 Edge filer appliances (15 TB total capacity) for $60,000. The package includes unlimited-capacity NAS core software, the FlashMove data migration software, one year of FlashCloud capacity-based software, one year of hardware and software support, and Avere installation services.

    To make this offer even sweeter, qualified customers are also eligible for up to $10,000 in Amazon Simple Storage Service (S3) storage credits.

    If you are interested in learning more, click here.

    Avere & ITMI at re:Invent
    At last year’s AWS re:Invent conference, Avere, AWS, and the Inova Translational Medical Institute (the largest health care system in Northern Virginia) discussed their use of this system to bring about their vision of precise, personalized medicine using a hybrid cloud. Here’s the video:

    As part of their treatment model for at-risk newborns, they routinely sequence and analyze the genes of the infant and the parents (which they call trio-based data). This allows them to enhance and customize their treatment, while generating terabytes of data. Some of this data is archived and used to build predictive models and to inform longitudinal studies that can look back up to 18 years.


  • System Center Virtual Machine Manager Add-In Update – Import & Launch Instances

    by Jeff Barr | Amazon EC2, Windows |

    We launched the AWS Systems Manager for Microsoft System Center Virtual Machine Manager (SCVMM) last fall. This add-in allows you to monitor and manage your on-premises VMs (Virtual Machines), as well as your Amazon Elastic Compute Cloud (EC2) instances (running either Windows or Linux) from within Microsoft System Center Virtual Machine Manager. As a refresher, here’s the main screen:

    Today we are updating this add-in with new features that allow you to import existing virtual machines and to launch new EC2 instances without having to use the AWS Management Console.

    Import Virtual Machines
    Select an existing on-premises VM and choose Import to Amazon EC2 from the right-click menu. The VM must be running atop the Hyper-V hypervisor and it must be using a VHD (dynamically sized) disk no larger than 1 TB. These conditions, along with a couple of others, are verified as part of the import process. You will need to specify the architecture (32-bit or 64-bit) in order to proceed:

    Launch EC2 Instances
    Click on the Create Instance button to launch a new EC2 instance. Select the region and an AMI (Amazon Machine Image), an instance type, and a key pair:

    You can click on Advanced Settings to reveal additional options:

    Click on the Create button to launch the instance.

    Available Now
    This add-in is available now (download it at no charge) and you can start using it today!


  • Remembering AWS Evangelist Mike Culver

    by Jeff Barr | Personal |

    Earlier today, after a hard-fought battle with pancreatic cancer that lasted nearly two years, my friend and colleague Mike Culver passed away, leaving his wife and adult children behind.

    Mike was well known within the AWS community. He joined my team in the spring of 2006 and went to work right away. Using his business training and experience as a starting point, he decided to make sure that his audiences understood that the cloud was as much about business value as it was about mere bits and bytes. We shared an office during the early days. One day he drew a rough (yet functionally complete) draft of the following chart on our white board:

    As you can see, Mike captured the business value of cloud computing in a single diagram. I have used this diagram in hundreds of presentations since that time, as have many of my colleagues.

    Mike thoroughly enjoyed his role as an evangelist. To quote from his LinkedIn profile:

    There is nothing more exciting than telling the world about the amazing things that they can do with Amazon Web Services. So it was easy to travel the world, telling anyone who would listen, about this new thing known as “the cloud.”

    After almost four intense years as an AWS evangelist, Mike turned his attention to some new challenges that arose as the organization grew. He managed Strategic Alliances and Partner Training, and retired in the Spring of 2014 after serving as a Professional Services Consultant for over two years. It was difficult for Mike to retire but he had no choice due to his failing health. To quote from his farewell email:

    It’s the only job I’ve ever had where I set the alarm for 5:30 AM and still wake up early to get to the office. So it is going to be super tough to go to “you can’t work” cold turkey.

    As I noted earlier, Mike and I shared an office for several years. We found that we had a lot in common – a low tolerance for nonsense, a passion for evangelism, and a strong understanding of the value of good family ties. I learned a lot from listening to him and by watching him work. Even though I nominally managed him, I really did nothing more than sign his expense reports and take care of his annual reviews. He knew what had to be done, and he did it without bragging. End of story.

    In addition to his work at Amazon, Mike had the time and the energy to serve on the Advisory Board for the Cloud Computing program offered by University of Washington’s department of Professional and Continuing Education. Even as his health flagged and travel became difficult, Mike showed up for every meeting and forcefully (yet with unfailing politeness) argued for his position.

    Two weeks ago I was on a conference call with Mike and one of my AWS colleagues. Even though he was officially retired, heavily medicated, debilitated from his cancer, and near the end of his journey, he still refused to give up and continued to advocate for a stronger AWS presence in some important markets.

    Mike had a lifelong passion for aviation and owned a shiny silver 1947 Luscombe 8E for many years. Although I never had the opportunity to fly with him, it was clear from our conversations that he would be calm, cool, and collected as a pilot, regardless of the situation. Sadly, Mike’s health began to fail before he was able to finish assembling the kit plane (a Van’s RV-9) that he had started working on a couple of years earlier. Although the words “kit” and “plane” don’t always instill confidence when used together, Mike’s well-documented craftsmanship was clearly second to none and I would have been honored to sit beside him.

    To Mike’s wife and children, I can tell you that he loved you all very much, perhaps more than he ever told you. Your names often came up in our conversations and his affection for each and every one of you was obvious. He worried about you, he thought about you, he was proud of your accomplishments, and he wanted nothing but the best for you.

    I’m not sure what else I can tell you about Mike. He was an awesome guy and it was a privilege to be able to work side-by-side with him. Rest in peace, my good friend. You will be missed by everyone who knew you.



  • Amazon Cognito Update – Sync Store Access, Improved Console, More

    by Jeff Barr | Amazon Cognito |

    We’ve made some important updates to Amazon Cognito! As you may already know, this service makes it easy for you to save user data such as app preferences or game state in the AWS cloud without writing any backend code or managing any infrastructure.

    Here’s what’s new:

    1. Developer-oriented access to the sync store.
    2. Updated AWS console interface for developers.
    3. Identity pools role association.
    4. Simplified SDK initialization.

    Let’s dive in!

    Developer-Oriented Access to the Sync Store
    The Cognito sync store lets you save end-user data in key-value pairs. The data is associated with a Cognito identity so that it can be accessed across logins and devices.  The Cognito Sync client (available in the AWS Mobile SDK) uses temporary AWS credentials vended by the Security Token Service. The credentials give the client the ability to access and modify the data associated with a single Cognito identity.

    This level of access is perfect for client apps, since they are operating on behalf of a single user. It is, however, insufficiently permissive for certain interesting use cases. For example, game developers have told us that they would like to run backend processes to award certain users special prizes by modifying the data in the user’s Cognito profile.

    To enable this use case, we are introducing developer-oriented access to the Cognito sync store. Developers can now use their AWS credentials (including IAM user credentials) to gain read and write access to all identities in the sync store.
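    As a rough sketch of what such a backend write might look like with boto3 (identifiers are placeholders, and the calls that require AWS credentials are shown in comments; see the Mobile Development Blog post below for the official sample code):

```python
# A RecordPatch payload awarding a prize in a user's profile dataset.
# The SyncCount must match the value returned by a prior ListRecords call,
# and the SyncSessionToken comes from that same response.
patch = {
    "Op": "replace",
    "Key": "special_prize",
    "Value": "golden_sword",
    "SyncCount": 0,
}

# With boto3 and developer (IAM) credentials, the write would look like:
#   sync = boto3.client("cognito-sync")
#   listing = sync.list_records(IdentityPoolId=POOL_ID, IdentityId=USER_ID,
#                               DatasetName="profile")
#   sync.update_records(IdentityPoolId=POOL_ID, IdentityId=USER_ID,
#                       DatasetName="profile", RecordPatches=[patch],
#                       SyncSessionToken=listing["SyncSessionToken"])
```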

    The detailed post on the AWS Mobile Development Blog contains sample code that shows you how to make use of this new feature.

    Updated AWS Console Interface
    On a related note, the AWS Management Console now allows you to view and search (by Identity ID) all of the identities associated with any of your Cognito identity pools:

    You can also view and edit their profile data from within the Console:

    Identity Pool Role Association
    The updated console also simplifies the creation of IAM roles that are configured to access a particular identity pool. Simply choose Create a new IAM Role when you create a new identity pool (you can click on View Policy Document if you would like to see how the role will be configured):

    Cognito saves the selected roles and associates them with the pool. This gives Cognito the information that it needs to show you the “Getting Started” code at any time:
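    For reference, the trust policy on a pool-associated role generally takes the following shape. This is a sketch with a placeholder pool ID; the policy document that the console actually generates may include additional conditions:

```python
import json

# Trust policy restricting role assumption to identities from one Cognito
# identity pool. The pool ID below is a placeholder.
trust_policy = {
    "Version": "2012-10-17",
    "Statement": [
        {
            "Effect": "Allow",
            "Principal": {"Federated": "cognito-identity.amazonaws.com"},
            "Action": "sts:AssumeRoleWithWebIdentity",
            "Condition": {
                "StringEquals": {
                    "cognito-identity.amazonaws.com:aud": "us-east-1:pool-id-placeholder"
                }
            },
        }
    ],
}

policy_json = json.dumps(trust_policy)
```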

    Even better, it also simplifies SDK initialization!

    Simplified SDK Initialization
    Because Cognito now saves the roles associated with a pool, you can now initialize the SDK without passing in the role ARNs. Cognito will automatically use the roles associated with the pool. This simplifies the initialization process and also allows Cognito to call STS on your behalf, avoiding an additional network call from the device in the process.

    Available Now
    These new features are available now and you can start using them today! Read the Cognito documentation to learn more and to see how to get started.

    — Jeff;