A few months ago I wrote about the new Your User Pools feature for Amazon Cognito. As I wrote at the time, you can use this feature to easily add user sign-up and sign-in to your mobile and web apps. The fully managed user directories can scale to hundreds of millions of users and you can have multiple directories per AWS account. Creating a user pool takes just a few minutes and you can decide exactly which attributes (address, email, gender, phone number, and so forth, plus custom attributes) must be entered when a new user signs up for your app or service. On the security side, you can specify the desired password strength, require the use of Multi-Factor Authentication (MFA), and verify new users via phone number or email address.
Now Generally Available
We launched Your User Pools as a public beta and received lots of great feedback. Today we are making Your User Pools generally available and we are also adding a large collection of new features:
- Device Remembering – Cognito can remember the devices that each user signs in from.
- User Search – Search for users in a user pool based on an attribute.
- Customizable Email Addresses – Control the email addresses for emails to users in your user pool.
- Attribute Permissions – Set fine-grained permissions for each user attribute.
- Custom Authentication Flow – Use new APIs and Lambda triggers to customize the sign-in flow.
- Admin Sign-in – Your app can now sign in users from backend servers or Lambda functions.
- Global Sign-out – Allow a user to sign out from all signed-in devices or browsers.
- Custom Expiration Period – Set an expiration period for refresh tokens.
- API Gateway Integration – Use a user pool to authorize Amazon API Gateway requests.
- New Regions – Cognito Your User Pools are now available in additional AWS Regions.
Let’s take a closer look at each of these new features!
Device Remembering
Cognito can now remember the set of devices used by (signed in from) each user. You, as the creator of the user pool, have the option to allow your users to request this behavior. If you have enabled MFA for a user pool, you can also choose to eliminate the need for entry of an MFA code on a device that has been remembered. This simplifies and streamlines the login process on a remembered device, while still requiring entry of an MFA code for unrecognized devices. You can also list a user’s devices and allow them to sign out from a device remotely.
You can enable and customize this feature when you create a new user pool; you can also set it up for an existing pool. When creating a pool, you first enable the feature by clicking on Always or User Opt-in:
Then you indicate whether you would like to suppress MFA on remembered devices:
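Device state can also be managed from your backend. Here is a minimal sketch, assuming a boto3 cognito-idp client (created with boto3.client("cognito-idp")); the client is passed in so the helpers can be exercised with a stub, and the pool ID and username values are placeholders:

```python
# Sketch: list a user's remembered devices and forget one of them,
# using the Cognito admin device APIs via an injected client.

def list_user_devices(client, user_pool_id, username, limit=10):
    """Return the devices Cognito has remembered for a user."""
    resp = client.admin_list_devices(
        UserPoolId=user_pool_id, Username=username, Limit=limit)
    return resp.get("Devices", [])

def forget_device(client, user_pool_id, username, device_key):
    """Stop remembering one device -- a remote sign-out for that device."""
    client.admin_forget_device(
        UserPoolId=user_pool_id, Username=username, DeviceKey=device_key)
```

With a real client, you would wire these calls to a "manage my devices" screen in your app.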
User Search
You, as the creator of a user pool, can now search for users based on a user attribute such as username, email address, or phone number.
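This is done through the ListUsers API. A minimal sketch, assuming a boto3 cognito-idp client (the filter syntax supports exact match with = and prefix match with ^=; the pool ID and prefix below are placeholders):

```python
# Sketch: search a user pool for users whose email starts with a prefix.

def find_users_by_email_prefix(client, user_pool_id, prefix):
    """Return users whose email attribute begins with the given prefix."""
    filter_expr = 'email ^= "{0}"'.format(prefix)
    resp = client.list_users(UserPoolId=user_pool_id, Filter=filter_expr)
    return resp.get("Users", [])
```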
Customizable Email Addresses
You can now specify the From and the Reply-To email addresses that are used to communicate with your users. Here’s how you specify the addresses when you create a new pool:
Attribute Permissions
You can now set per-app read and write permissions for each user attribute. This gives you the ability to control which applications can see and/or modify each of the attributes that are stored for your users. For example, you could have a custom attribute that indicates whether a user is a paying customer or not. Your apps could see this attribute but could not modify it directly. Instead, you would update this attribute using an administrative tool or a background process. Permissions for user attributes can be set from the Console, the API, or the CLI.
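Via the API, this comes down to the read and write attribute lists on the app client. A minimal sketch, assuming a boto3 cognito-idp client (the pool ID, client ID, and custom attribute name are placeholders):

```python
# Sketch: grant an app client read access to some attributes and write
# access to a narrower set, via UpdateUserPoolClient.

def set_attribute_permissions(client, user_pool_id, client_id,
                              readable, writable):
    """Let the app read `readable` attributes but modify only `writable`."""
    return client.update_user_pool_client(
        UserPoolId=user_pool_id,
        ClientId=client_id,
        ReadAttributes=readable,
        WriteAttributes=writable)
```

For the paying-customer example above, "custom:paying_customer" would appear in the readable list but not the writable one.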
Custom Authentication Flow
You can now use a pair of new API functions (InitiateAuth and RespondToAuthChallenge) and three new Lambda triggers to create your own sign-in flow or to customize the existing one. You can, for example, customize the user flows for users with different levels of experience, different locations, or different security requirements. You could require the use of a CAPTCHA for some users or for all users, as your needs dictate.
The new Lambda triggers are:
- Define Auth Challenge – Invoked to initiate the custom authentication flow.
- Create Auth Challenge – Invoked if a custom authentication challenge has been defined.
- Verify Auth Challenge Response – Invoked to check the validity of a custom authentication challenge.
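To make the triggers concrete, here is a minimal sketch of a Define Auth Challenge handler. The policy shown (issue tokens after one correct custom challenge, fail after three attempts) is just one possible choice, and the event fields follow the trigger's request/response shape:

```python
def define_auth_challenge(event, context=None):
    """Define Auth Challenge trigger: decide what happens next in the flow."""
    session = event["request"].get("session", [])
    response = event["response"]
    if session and session[-1].get("challengeResult") is True:
        # The last challenge was answered correctly -- sign the user in.
        response["issueTokens"] = True
        response["failAuthentication"] = False
    elif len(session) >= 3:
        # Too many wrong answers -- fail the authentication attempt.
        response["issueTokens"] = False
        response["failAuthentication"] = True
    else:
        # Present (another) custom challenge.
        response["issueTokens"] = False
        response["failAuthentication"] = False
        response["challengeName"] = "CUSTOM_CHALLENGE"
    return event
```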
You can set up the triggers from the Console like this:
Global Sign-out
You can now give your users the option to sign out of all of the devices and browsers where they are signed in (this works by invalidating their tokens). Apps can call the GlobalSignOut function using a valid, non-expired, non-revoked access token. Developers can remotely sign out any user by calling the AdminUserGlobalSignOut function using a Pool ID and a username.
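Both paths are one call each. A minimal sketch, assuming a boto3 cognito-idp client (the token, pool ID, and username values are placeholders):

```python
# Sketch: the two global sign-out paths via an injected client.

def sign_out_everywhere(client, access_token):
    """User-initiated: invalidate the user's tokens on every device."""
    return client.global_sign_out(AccessToken=access_token)

def admin_sign_out(client, user_pool_id, username):
    """Developer-initiated: remotely sign out any user in the pool."""
    return client.admin_user_global_sign_out(
        UserPoolId=user_pool_id, Username=username)
```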
Custom Expiration Period
Cognito sign-in makes use of “refresh” tokens to eliminate the need to sign in every time an application is opened. By default, the token expires after 30 days. In order to give you more control over the balance between security and convenience, you can now set a custom expiration period for the refresh tokens generated by each of your user pools.
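The expiration period is a property of the app client. A minimal sketch, assuming a boto3 cognito-idp client (IDs are placeholders):

```python
# Sketch: set a custom refresh-token lifetime via UpdateUserPoolClient.

def set_refresh_token_validity(client, user_pool_id, client_id, days):
    """Set how long refresh tokens remain valid (the default is 30 days)."""
    return client.update_user_pool_client(
        UserPoolId=user_pool_id,
        ClientId=client_id,
        RefreshTokenValidity=days)
```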
API Gateway Integration
Cognito user pools can now work hand-in-hand with Amazon API Gateway to authorize API requests. You can configure API Gateway to accept identity (ID) tokens in order to authorize users based on their presence in a user pool.
To do this, you first create a Cognito User Pool Authorizer using the API Gateway Console, referencing the user pool and choosing the request header that will contain the identity token:
Navigate to the desired method and select the new Authorizer:
New Regions
As part of today’s launch we are making Cognito available in the US West (Oregon) Region.
In addition to the existing availability in the US East (Northern Virginia) Region, we are making Your User Pools available in the Europe (Ireland), US West (Oregon), and Asia Pacific (Tokyo) Regions.
These new features are available now and you can start using them today!
As I wrote earlier this year, AWS Application Discovery Service is designed to help you to dig in to your existing environment, identify what’s going on, and provide you with the information and visibility that you need to have in order to successfully migrate your systems and applications to the cloud (see my post, New – AWS Application Discovery Service – Plan Your Cloud Migration, for more information).
The discovery process described in my blog post makes use of a small, lightweight agent that runs on each existing host. The agent quietly and unobtrusively collects relevant system information, stores it locally for review, and then uploads it to Application Discovery Service across a secure connection on port 443. The information is processed, correlated, and stored in an encrypted repository that is protected by AWS Key Management Service (KMS).
In virtualized environments, installing the agent on each guest operating system may be impractical for logistical or other reasons. Although the agent runs on a fairly broad spectrum of Windows releases and Linux distributions, there’s always a chance that you still have older releases of Windows or exotic distributions of Linux in the mix.
New Agentless Discovery
In order to bring the benefits of AWS Application Discovery Service to even more AWS customers, we are introducing a new, agentless discovery option today.
If you have virtual machines (VMs) that are running in the VMware vCenter environment, you can use this new option to collect relevant system information without installing an agent on each guest. Instead, you load an on-premises appliance into vCenter and allow it to discover the guest VMs therein.
The vCenter appliance captures system performance information and resource utilization for each VM, regardless of what operating system is in use. However, it cannot “look inside” of the VM and as such cannot figure out what software is installed or what network dependencies exist. If you need to take a closer look at some of your existing VMs in order to plan your migration, you can install the Application Discovery agent on an as-needed basis.
Like the agent-based model, agentless discovery gathers information and stores it locally so that you can review it before it is sent to Application Discovery Service.
After the information has been uploaded, you can explore it using the AWS Command Line Interface (CLI). For example, you can use the describe-configurations command to learn more about the configuration of a particular guest:
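The command returns JSON, so it is straightforward to reduce the output to a migration inventory. A minimal sketch (the field names below are illustrative stand-ins, not the service's exact schema):

```python
import json

# Sketch: reduce a describe-configurations-style JSON payload to
# (hostname, OS) pairs for a quick inventory.

def summarize_configurations(payload):
    """Extract hostname/OS pairs from a JSON array of configurations."""
    return [(item.get("server.hostName"), item.get("server.osName"))
            for item in json.loads(payload)]

sample = json.dumps([
    {"server.hostName": "web-01", "server.osName": "Windows Server 2012"},
    {"server.hostName": "db-01", "server.osName": "Ubuntu 14.04"},
])
```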
You can also export the discovered data in CSV form and then use it to plan your migration. To learn more about this feature, read the AWS Application Discovery Service documentation.
Getting Started with Agentless Discovery
To get started, sign up here and we’ll provide you with a link to an installer for the vCenter appliance.
Regular readers of this blog will know that I am a big fan of Amazon Relational Database Service (RDS). As a managed database service, it takes care of the more routine aspects of setting up, running, and scaling a relational database.
We first launched support for SQL Server in 2012. Since then we have added features including SSL support, major version upgrades, transparent data encryption, enhanced monitoring, and Multi-AZ; today we are adding support for SQL Server native backup/restore.
SQL Server native backups include all database objects: tables, indexes, stored procedures and triggers. These backups are commonly used to migrate databases between different SQL Server instances running on-premises or in the cloud. They can be used for data ingestion, disaster recovery, and so forth. The native backups also simplify the process of importing data and schemas from on-premises SQL Server instances, and will be easy for SQL Server DBAs to understand and use.
Support for Native Backup/Restore
You can now take native SQL Server database backups from your RDS instances and store them in an Amazon S3 bucket. Those backups can be restored to an on-premises copy of SQL Server or to another RDS-powered SQL Server instance. You can also copy backups of your on-premises databases to S3 and then restore them to an RDS SQL Server instance. SQL Server Native Backup/Restore with Amazon S3 also supports backup encryption using AWS Key Management Service (KMS) across all SQL Server editions. Storing and transferring backups in and out of AWS through S3 provides you with another option for disaster recovery.
You can enable this feature by adding the SQL_SERVER_BACKUP_RESTORE option to an option group and associating the option group with your RDS SQL Server instance. This option must also be configured with your S3 bucket information and can include a KMS key to encrypt the backups.
Start by finding the desired option group:
Then add the SQL_SERVER_BACKUP_RESTORE option, specify (or create) an IAM role to allow RDS to access S3, point to a bucket, and (if you want) specify and configure encryption:
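The same setup can be done programmatically. A minimal sketch, assuming a boto3 rds client (the option group name and role ARN are placeholders, and the IAM_ROLE_ARN setting name is an assumption for this sketch; S3 access is granted through the role's policy):

```python
# Sketch: attach the SQL_SERVER_BACKUP_RESTORE option to an option group
# via ModifyOptionGroup, using an injected client.

def enable_native_backup_restore(client, option_group, role_arn):
    """Add the backup/restore option, pointing at an IAM role for S3 access."""
    return client.modify_option_group(
        OptionGroupName=option_group,
        OptionsToInclude=[{
            "OptionName": "SQL_SERVER_BACKUP_RESTORE",
            "OptionSettings": [
                {"Name": "IAM_ROLE_ARN", "Value": role_arn},
            ],
        }],
        ApplyImmediately=True)
```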
After you have set this up, you can use SQL Server Management Studio to connect to the database instance and invoke the following stored procedures (available within the msdb database) as needed:
- rds_backup_database – Back up a single database to an S3 bucket.
- rds_restore_database – Restore a single database from S3.
- rds_task_status – Track running backup and restore tasks.
- rds_cancel_task – Cancel a running backup or restore task.
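For example, a backup is kicked off with a single EXEC statement. Here is a sketch that composes that statement (the database name and S3 ARN are placeholders; parameter names follow the feature's documentation, and you would run the result with any SQL Server client, such as pyodbc, connected to the instance):

```python
# Sketch: build the T-SQL that starts a native backup to S3 via the
# rds_backup_database stored procedure in the msdb database.

def backup_database_sql(db_name, s3_arn, overwrite=True):
    """Compose the EXEC statement for a native backup to S3."""
    return (
        "exec msdb.dbo.rds_backup_database "
        "@source_db_name='{0}', "
        "@s3_arn_to_backup_to='{1}', "
        "@overwrite_S3_backup_file={2};"
    ).format(db_name, s3_arn, 1 if overwrite else 0)
```

You can then poll rds_task_status to track the backup's progress.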
To learn more, take a look at Importing and Exporting SQL Server Data.
SQL Server Native Backup/Restore is now available in the US East (Northern Virginia), US West (Oregon), Europe (Ireland), Europe (Frankfurt), Asia Pacific (Sydney), Asia Pacific (Tokyo), Asia Pacific (Singapore), Asia Pacific (Mumbai), and South America (Brazil) Regions. There are no additional charges for using this feature with Amazon RDS for SQL Server; however, Amazon S3 storage usage will be billed at the regular rates.
Today I would like to introduce a very special guest blogger! My daughter Tina is a Recruiting Coordinator for the AWS team and is making her professional blogging debut with today’s post.— Jeff;
It’s officially summer and it’s hot! Check out this month’s hot AWS-powered startups:
- Depop – a social mobile marketplace for artists and friends to buy and sell products.
- Nextdoor – building stronger and safer neighborhoods through technology.
- Branch – provides free deep linking technology for mobile app developers to gain and retain users.
In 2011, Simon Beckerman and his brother, Daniel, set out to create a social, mobile marketplace that would make buying and selling from mobile a fun and interactive experience. The Depop founders recognized that the rise of m-commerce was changing the way that consumers wanted to communicate and interact with each other. Simon, who already ran PIG Magazine and the luxury eyewear brand RetroSuperFuture, wanted to create a space where artists and creatives like himself could share, buy and sell their possessions. After launching organically in Italy, Depop moved to Shoreditch, London in 2012 to establish its headquarters and has since grown considerably with offices in London, New York, and Milan.
With over 4 million users worldwide, Depop is growing and building a community of shop owners with a passion for fashion, music, art, vintage, and lifestyle pieces. The familiar and user-friendly interface allows users to follow, like, comment, and private message with other users and shop owners. Simply download the app (Android or iOS) and you are suddenly connected to millions of unique items ready for purchase. It’s not just clothes either – you can find home décor, vintage furniture, jewelry, and more. Filtering by location allows you to personalize your feed and shop locally for even more convenience. Buyers can scroll through an endless stream of items ready for purchase and have the option to either pick up in-person or have their items shipped directly to them. Selling items is just as easy – upload a photo, write a short description, set a price, and then list your product.
Depop chose AWS in order to move fast without needing a large operations team, following a DevOps approach. They use 12 distinct AWS services including Amazon S3 and Amazon CloudFront for image hosting, and Auto Scaling to deal with the unpredictable and fairly large changes in traffic throughout the day. Depop’s developers are able to support their own services in production without needing to call on a dedicated operations team.
Check out Depop’s Blog to keep up with the artists using the app!
Nextdoor (San Francisco)
Based in San Francisco, Nextdoor has helped more than 100,000 neighborhoods across the United States bring their communities closer together. In 2010, the founders of this startup were surprised to learn from a Pew research study that the majority of American adults knew only some (29%) or none (28%) of their neighbors by name. Recognizing an opportunity to bring back a sense of community to neighborhoods across the country, the idea for Nextdoor was born. Neighbors are using Nextdoor to ask questions, get to know one another, and exchange local advice and recommendations. For example, neighbors are able to help one another to:
- Find trustworthy babysitters, plumbers, and dentists in the area.
- Organize neighborhood events, such as garage sales and block parties.
- Get assistance to find lost pets and missing packages.
- Sell or give away items, like an old kitchen table or bike.
- Report neighborhood crime and share safety concerns.
Nextdoor is also giving local agencies such as police and fire departments, and offices of emergency management the ability to connect with verified residents in their jurisdiction through a feature called Nextdoor for Public Agencies. This is incredibly beneficial for agencies to help residents with emergency preparedness, community engagement, crime prevention, and community policing. In his seminal work, Bowling Alone, Harvard Professor Robert Putnam learned that when social capital within a community is high, children do better in school, neighborhoods are safer, people prosper, the government is better, and people are happier and healthier overall. With a comprehensive list of helpful community guidelines, Nextdoor is creating stronger and safer neighborhoods with the power of technology. You can download the Nextdoor app for Android or iOS.
AWS is the foundational infrastructure for both the online services in Nextdoor’s technology stack, and all of their offline data processing and analytics systems. Nextdoor uses over 25 different AWS services (Amazon EC2, Elastic Load Balancing, Amazon CloudFront, Amazon S3, Amazon DynamoDB, Amazon Redshift, and Amazon Kinesis to name a few) to quickly prototype, develop, and deploy new features for community members. Supporting millions of users in the US, Nextdoor runs their services across four AWS Regions worldwide, and has also recently expanded to Europe. In their own words, “Amazon makes it easy for us to flexibly grow our technology footprint with predictable costs in an automated fashion.”
Branch (Palo Alto)
The idea for Branch came in May 2014 when a group of Stanford business school graduates began working together to build and launch their own mobile app. They soon realized how challenging it was to grow their app, and saw that many of their friends were running into the same difficulties. The graduates saw the potential to create a deep linking platform to help apps get discovered, retain users, and grow exponentially. Branch reached its first million users within several months after its inception, and a little over a year later had climbed to one billion users and 5,000 apps. Companies such as Pinterest, Instacart, Mint, and Redfin are partnering with Branch to improve their user experience worldwide. Over 11,000 apps use the platform today.
As the number of smartphone users continues to increase, mobile apps are providing better user experiences, higher conversions, and better retention rates than the mobile web. The issue comes when mobile developers want to link users to the content they worked so hard to create – the transition between emails, ads, referrals, and more can often lead to broken experiences.
Mobile deep links allow users to share content that is within an app. Normal web links don’t work unless apps are downloaded on a device, and even then there is no standard way to find and share content as it is specific to every app. Branch allows content within apps to be shared just as they would be on the web. For example, imagine you are shopping for a fresh pair of shoes on the mobile web. You are ready to check out, but are prompted to download the store’s app to complete your purchase. Now that you’ve downloaded the app, you are brought back to the store’s homepage and need to restart your search from the beginning. With a Branch deep link, you instead would be linked directly back to checkout once you’ve installed the app, saving time and creating an overall better user experience.
Branch has grown exponentially over the past two years, and relies heavily on AWS to scale its infrastructure. Anticipating continued growth, Branch builds and maintains most of its infrastructure services with open source tools running on Amazon EC2 instances (Amazon API Gateway, Apache Kafka, Apache Zookeeper, Kubernetes, Redis, and Aerospike), and also use AWS services such as Elastic Load Balancing, Amazon CloudFront, Amazon Route 53, and Amazon RDS for PostgreSQL. These services allow Branch to maintain a 99.999% success rate on links with a latency of only 60 ms in the 99th percentile. To learn more about how they did this, read their recent blog post, Scaling to Billions of Requests a Day with AWS.
The second annual Prime Day was another record-breaking success for Amazon, with global orders surpassing Black Friday, Cyber Monday, and Prime Day 2015.
According to a report published by Slice Intelligence, Amazon accounted for 74% of all US consumer e-commerce on Prime Day 2016. This one-day-only global shopping event, exclusively for Amazon Prime members, saw record-high levels of traffic including double the number of orders on the Amazon Mobile App compared to Prime Day 2015. Members around the world purchased more than 2 million toys, more than 1 million pairs of shoes and more than 90,000 TVs in one day (see Amazon’s Prime Day is the Biggest Day Ever for more stats). An event of this scale requires infrastructure that can easily scale up to match the surge in traffic.
The Amazon retail site uses a fleet of EC2 instances to handle web traffic. To serve the massive increase in customer traffic for Prime Day, the Amazon retail team increased the size of their EC2 fleet, adding capacity that was equal to all of AWS and Amazon.com back in 2009. Resources were drawn from multiple AWS regions around the world.
The morning of July 11th was cool and a few morning clouds blanketed Amazon’s Seattle headquarters. As 8 AM approached, the Amazon retail team was ready for the first of 10 global Prime Day launches. Across the Pacific, it was almost midnight. In Japan, mobile phones, tablets, and laptops glowed in anticipation of Prime Day deals. As traffic began to surge in Japan, CloudWatch metrics reflected the rising fleet utilization as CloudFront endpoints and ElastiCache nodes lit up with high-velocity mobile and web requests. This wave of traffic then circled the globe, arriving in Europe and the US over the course of 40 hours and generating 85 billion clickstream log entries. Orders surpassed Prime Day 2015 by more than 60% worldwide and more than 50% in the US alone. On the mobile side, more than one million customers downloaded and used the Amazon Mobile App for the first time.
As part of Prime Day, Amazon.com saw a significant uptick in their use of 38 different AWS services including:
- Analytics – Amazon Redshift, Amazon Machine Learning.
- Application Services – Amazon API Gateway, CloudSearch, Data Pipeline, Elastic Transcoder, SES, SNS, SQS, SWF.
- Compute – EC2, Auto Scaling, EBS, EMR, Lambda.
- Database – DynamoDB, ElastiCache, Kinesis, Kinesis Firehose, RDS.
- Management Tools – CloudTrail, CloudWatch, Trusted Advisor.
- Mobile – Mobile Analytics.
- Networking – Direct Connect, Directory Service, Virtual Private Cloud, Route 53.
- Security & Identity – CloudHSM, IAM, KMS.
- Storage & Content Delivery – CloudFront, S3, Amazon Glacier.
To further illustrate the scale of Prime Day and the opportunity for other AWS customers to host similar large-scale, single-day events, let’s look at Prime Day through the lens of several AWS services:
- Amazon Mobile Analytics events increased 1,661% compared to the same day the previous week.
- Amazon’s use of CloudWatch metrics increased 400% worldwide on Prime Day, compared to the same day the previous week.
- DynamoDB served over 56 billion extra requests worldwide on Prime Day compared to the same day the previous week.
Running on AWS
The AWS team treats Amazon.com just like any of our other big customers. The two organizations are business partners and communicate through defined support plans and channels. Sticking to this somewhat formal discipline helps the AWS team to improve the support plans and the communication processes for all AWS customers.
Running the Amazon website and mobile app on AWS makes short-term, large scale global events like Prime Day technically feasible and economically viable. When I joined Amazon.com back in 2002 (before the site moved to AWS), preparation for the holiday shopping season involved a lot of planning, budgeting, and expensive hardware acquisition. This hardware helped to accommodate the increased traffic, but the acquisition process meant that Amazon.com sat on unused and lightly utilized hardware after the traffic subsided. AWS enables customers to add the capacity required to power big events like Prime Day, and enables this capacity to be acquired in a much more elastic, cost-effective manner. All of the undifferentiated heavy lifting required to create an online event at this scale is now handled by AWS so the Amazon retail team can focus on delivering the best possible experience for its customers.
The Amazon retail team was happy that Prime Day was over, and ready for some rest, but they shared some of what they learned with me:
- Prepare – Planning and testing are essential. Use historical metrics to help forecast and model future traffic, and to estimate your resource needs accordingly. Prepare for failures with GameDay exercises – intentionally breaking various parts of the infrastructure and the site in order to simulate several failure scenarios (read Resilience Engineering – Learning to Embrace Failure to learn more about GameDay exercises at Amazon).
- Automate – Reduce manual effort and automate everything. Take advantage of services that can scale automatically in response to demand – Route 53 to automatically scale your DNS, Auto Scaling to scale your EC2 capacity according to demand, and Elastic Load Balancing for automatic failover and to balance traffic across multiple Availability Zones (AZs).
- Monitor – Use Amazon CloudWatch metrics and alarms liberally. CloudWatch monitoring helps you stay on top of your usage to ensure the best experience for your customers.
- Think Big – Using AWS gave the team the resources to create an event on the scale of another holiday season. Confidence in your infrastructure is what enables you to scale your big events.
As I mentioned before, nothing is stopping you from envisioning and implementing an event of this scale and scope!
I would encourage you to think big, and to make good use of our support plans and services. Our Solutions Architects and Technical Account Managers are ready to help, as are our APN Consulting Partners. If you are planning for a large-scale one-time event, give us a heads-up and we’ll work with you before and during the event.— Jeff;
PS – What did you buy on Prime Day?
We have some awesome webinars lined up for next week! As always, they are free but do often fill up, so go ahead and register. Here’s the lineup (all times are PT and each webinar runs for one hour):
- 9:00 AM – Mobile App Testing with AWS Device Farm.
- 10:30 AM – Amazon EC2 Masterclass.
- Noon – Getting Started with IoT.
- 9:00 AM – Intro to Elastic File System.
- 10:30 AM – Getting Started with Amazon Redshift.
- Noon – Running fast, interactive queries on petabyte datasets using Presto.
We launched EC2 Run Command late last year and have enjoyed seeing our customers put it to use in their cloud and on-premises environments. After the launch, we quickly added Support for Linux Instances, the power to Manage & Share Commands, and the ability to do Hybrid & Cross-Cloud Management. Earlier today we made EC2 Run Command available in the China (Beijing) and Asia Pacific (Seoul) Regions.
Our customers are using EC2 Run Command to automate and encapsulate routine system administration tasks. They are creating local users and groups, scanning for and then installing applicable Windows updates, managing services, checking log files, and the like. Because these customers are using EC2 Run Command as a building block, they have told us that they would like to have better visibility into the actual command execution process. They would like to know, quickly and often in detail, when each command and each code block in the command begins executing, when it completes, and how it completed (successfully or unsuccessfully).
In order to support this really important use case, you can now arrange to be notified when the status of a command or a code block within a command changes. In order to provide you with several different integration options, you can receive notifications via CloudWatch Events or via Amazon Simple Notification Service (SNS).
These notifications will allow you to use EC2 Run Command in true building block fashion. You can programmatically invoke commands and then process the results as they arrive. For example, you could create and run a command that captures the contents of important system files and metrics on each instance. When the command is run, EC2 Run Command will save the output in S3. Your notification handler can retrieve the object from S3, scan it for items of interest or concern, and then raise an alert if something appears to be amiss.
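The pattern just described can be sketched as a small handler. The fetch and alert hooks are injected so the scanning logic stands alone, and the notification field names used here ("status", "bucket", "key") are illustrative stand-ins, not the exact message schema:

```python
# Sketch: react to a Run Command status notification by retrieving the
# command output from S3 and raising an alert if anything looks wrong.

def handle_notification(notification, fetch_output, raise_alert):
    """Check the command's status, then scan its S3 output for problems."""
    if notification.get("status") != "Success":
        raise_alert("command did not succeed: %s" % notification.get("status"))
        return
    body = fetch_output(notification["bucket"], notification["key"])
    if "ERROR" in body:
        raise_alert("errors found in command output")
```

In production, fetch_output would wrap an S3 GetObject call and raise_alert would publish to SNS or a paging system.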
Monitoring Execution Using Amazon SNS
Let’s run a command on some EC2 instances and monitor its progress using SNS.
Following the directions (Monitoring Commands), I created an S3 bucket (jbarr-run-output), an SNS topic (command-status), and an IAM role (RunCommandNotifySNS) that allows the on-instance agent to send notifications on my behalf. I also subscribed my email address to the SNS topic, and entered the command:
And specified the bucket, topic, and role (further down on the Run a command page):
I chose All so that I would be notified of every possible status change (In Progress, Success, Timed Out, Cancelled, and Failed) and Invocation so that I would receive notifications as the status of each instance changes. I could have chosen to receive notifications at the command level (representing all of the instances) by selecting Command instead of Invocation.
I clicked on Run and received a sequence of emails as the commands were executed on each of the instances that I selected. Here’s a sample:
In a real-world environment you would receive and process these notifications programmatically.
Monitoring Execution Using CloudWatch Events
I can also monitor the execution of my commands using CloudWatch Events. I can send the notifications to an AWS Lambda function, an SQS queue, or an Amazon Kinesis stream.
For illustrative purposes, I used a very simple Lambda function:
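A function of that sort need only log the incoming event. A minimal sketch:

```python
import json

def lambda_handler(event, context=None):
    """Log each Run Command state-change notification as it arrives."""
    print("Run Command notification received:")
    print(json.dumps(event, indent=2))
    return event
```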
I created a rule that would invoke the function for all notifications issued by the Run Command (as you can see below, I could have been more specific if necessary):
I saved the rule and ran another command, and then checked the CloudWatch metrics a few seconds later:
I also checked the CloudWatch log and inspected the output from my code:
This feature is available now and you can start using it today.
Monitoring via SNS is available in all AWS Regions except Asia Pacific (Mumbai) and AWS GovCloud (US). Monitoring via CloudWatch Events is available in all AWS Regions except Asia Pacific (Mumbai), China (Beijing), and AWS GovCloud (US).— Jeff;
GameStop sells new and pre-owned video game hardware, software, and accessories, consumer electronics, and wireless services. With over 7,000 retail locations spread across 14 countries, the company sells to and interacts with millions of customers every day. In addition to their retail locations, they follow an omni-channel strategy and run a loyalty program with over 46 million members worldwide.
I spoke with Justin Newcom (Senior Director, International Technology Services & Support) and Jim March (Advanced Cloud Systems Engineer) of GameStop to learn how they moved their mission-critical multichannel marketing platform from traditional hosting to AWS. This is their story!
The Business Challenge
The story begins in March of 2015 when one of GameStop’s existing international hosting contracts was about to expire. The GameStop team decided to take a serious look at alternative hosting solutions. They sent out an RFP (Request For Proposal) to several traditional hosts and to some cloud vendors, including AWS. As the responses arrived, it became obvious, in Justin’s words, that “AWS was the clear winner.” Jim, after returning from a briefing held by another cloud vendor, dug in to AWS and found that it was far more mature and sophisticated than he had once thought.
They decided to move forward with AWS, basing their decision on the product, the pace of innovation, our reputation, and our pricing. However, even though they had picked the winner, they knew that they still had a lot to learn if they were going to have a successful journey.
The Journey Begins
The GameStop technology leaders decided to create a learning culture around AWS. They spoke with other AWS customers and partners, and ultimately brought in a prominent AWS Consulting Partner to accompany them on their cloud journey. They chose the mission-critical multichannel marketing platform as their first migration target. This platform goes beyond e-commerce and manages all in-store customer activities in Canada and Europe, as well as online customer interaction. It integrates in-store and online activity, allowing customers, for example, to complete an online purchase at the cash register.
The migration to AWS was completed in time for the 2015 holiday shopping season, and AWS performed flawlessly. The first Black Friday was a turning point for GameStop. Even though they were not yet using Auto Scaling, they were able to quickly launch new EC2 instances in order to meet demand. The site remained up and responsive.
Early in the journey, some other initial successes proved to be important turning points. For example, the team had just four hours to prepare for a “surprise” launch of Nintendo’s Amiibo in Canada. The launch went off without a hitch. Another time, they spun up new infrastructure on AWS to deal with a special sales promotion that was scheduled to last for just six hours. This went well and cost them just $300 in AWS charges. In light of these early successes, internal teams were empowered to think about other high-impact, short-term marketing programs, including “spot” sales that would last for an hour or two. Jim told me that events of this type, once traumatic and expensive, were now “fun.”
Time for a Transformation
With the first migration successfully completed, the next step was to transform the IT organization, acquiring cloud skills and experience along the way, as they became the organization’s cloud infrastructure team. As part of this modernization, they made sure that their team was gaining experience with Agile and DevOps practices, along with new technologies such as microservices and containers. They brought in modern tools like Jira and Confluence, sought executive buy-in to take new approaches and to run some experiments, and arranged for a series of in-house courses. I should note that this is turning out to be a very common model among companies that are taking a big leap into the future! In some cases the cloud begets the use of other modern practices; in others the use of modern practices begets the use of the cloud.
With the transformation well under way, the team is now looking at all of the ways that they can use AWS to improve efficiency and to save money. They anticipate becoming a different type of internal IT supplier, with the ability to form strong internal partnerships, provide better purchasing advice, and assist teams that have varying levels of IT expertise. Costs have gone down, predictability has gone up, and they are now positioned to build and deploy innovative solutions that were not feasible in the past.
GameStop is now looking to consolidate their international IT infrastructure resources, some of which are housed in “data rooms” (not quite data centers) in disparate non-US locations. They see AWS as a single platform to develop against, and have instituted a common model that can be replicated across locations, business units, and applications. They are no longer buying new hardware. Instead, as the hardware reaches the end of its useful life the functionality is moved to AWS and the data room is emptied out. At the present pace, all eight of the data rooms will be empty within three years.
Migrating to and Using AWS
Migration is generally a two-stage process for the GameStop international teams. In the first stage they lift-and-shift the current application to the cloud. In the second, they refactor and optimize in pursuit of additional efficiency and better maintainability. Before the migration, the multichannel team saw IT delivered through third-party partners as a bottleneck. After the move to AWS the relationship improved and the teams were able to work cooperatively toward solutions.
During the refactoring phase they take a look at every aspect of the existing operation and decide how they can replace existing functionality with a modern AWS alternative. This includes database logic, network architecture, security, backups, internal messaging, and monitoring.
The team is intrigued by the serverless processing model and plans to use AWS Lambda and Amazon API Gateway to rebuild their internal service architecture, replacing an older and less flexible technology stack in the process. They are also planning to route all logs and metrics to Amazon CloudWatch for storage and analysis, with a goal of making them fully searchable.
The migration is still a work in progress. Some of the EC2 instances are still treated as pets rather than as cattle; the goal is to get to a model where all of the infrastructure is dynamic and disposable, and where logging in to a server to check status or make a change is a rarity.
I asked Justin and Jim for advice and recommendations they could make to other organizations that are contemplating a move to the cloud. This is what they told me:
- Go all-in on automation. Expect it and build for it.
- Treat infrastructure as code. Take the migration as an opportunity to create a culture that embraces this practice.
- Do everything right, from the beginning. Do not move an application that will cause you grief simply for the sake of moving it to the cloud. Choose your low-hanging fruit and spend your initial budget on what you know. Treat the migration as a learning process, but save money where you can.
- Don’t cave to time pressure. Communicate with your business partners. The cloud is new for everyone and there will be bumps in the road. Be open and transparent and explain why things take time.
- Ensure that the leadership team is all-in with the IT team. Having top-down buy-in from your management team is a must.
Jim also told me that he likes to think of the AWS Management Console‘s Launch Instance button as a form of technical debt that must be repaid with future automation.
I would like to thank Justin and Jim for their insights and to congratulate them on their work to move GameStop’s IT environment into the future!

— Jeff;
After potential AWS customers see the benefits of moving to the cloud, they often ask about the best way to migrate their applications and their data, including large amounts of structured information stored in relational databases.
Today we are launching an important new feature for Amazon Aurora. If you are already making use of MySQL, either on-premises or on an Amazon EC2 instance, you can now create a snapshot backup of your existing database, upload it to Amazon S3, and use it to create an Amazon Aurora cluster. In conjunction with Amazon Aurora’s existing ability to replicate data from an existing MySQL database, you can easily migrate from MySQL to Amazon Aurora while keeping your application up and running.
This feature can be used to easily and efficiently migrate large (2 TB and more) MySQL databases to Amazon Aurora with minimal performance impact on the source database. Our testing has shown that this process can be up to 20 times faster than using the traditional mysqldump utility. The database can contain both InnoDB and MyISAM tables; however, any MyISAM tables will be converted to InnoDB as part of the cluster creation process.
Here’s an outline of the migration process:
- Source Database Preparation – Enable binary logging in the source MySQL database and ensure that the logs will be retained for the duration of the migration.
- Source Database Backup – Use Percona’s Xtrabackup tool to create a “hot” backup of the source database. This tool does not lock database tables or rows, does not block transactions, and produces compressed backups. You can direct the tool to create one backup file or multiple smaller files; Amazon Aurora can accommodate either option.
- S3 Upload – Upload the backup to S3. For backups of 5 TB or less, a direct upload via the AWS Management Console or the AWS Command Line Interface (CLI) is generally sufficient. For larger backups, consider using AWS Import/Export Snowball.
- IAM Role – Create an IAM role that allows Amazon Relational Database Service (RDS) to access the uploaded backup and the bucket it resides within. The role must allow RDS to perform the GetBucketLocation operation on the bucket and the GetObject operation on the backup (you can find a sample policy in the documentation).
- Create Cluster – Create a new Amazon Aurora cluster from the uploaded backup. Click on Restore Aurora DB Cluster from S3 in the RDS Console, enter the version number of the source database, point to the S3 bucket and choose the IAM role, then click on Next Step. Proceed through the remainder of the cluster creation pages (Specify DB Details and Configure Advanced Settings) in the usual way.
Amazon Aurora will process the backup files in alphabetical order.
- Migrate MySQL Schema – Migrate (as appropriate) the users, permissions, and configuration settings in the MySQL INFORMATION_SCHEMA.
- Migrate Related Items – Migrate the triggers, functions, and stored procedures from the source database to the new Amazon Aurora cluster.
- Initiate Replication – Begin replication from the source database to the new Amazon Aurora cluster and wait for the cluster to catch up.
- Switch to Cluster – Point all client applications at the Amazon Aurora cluster.
- Terminate Replication – End replication to the Amazon Aurora cluster.
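For readers who prefer the command line, the backup, upload, restore, and replication steps above can be sketched as a short script. This is only a sketch: the bucket name, role ARN, credentials, endpoints, engine version, and the binlog coordinates passed to mysql.rds_set_external_master are all hypothetical placeholders, and by default (DRY_RUN=1) the script prints each command instead of running it.

```shell
#!/usr/bin/env bash
# Minimal sketch of the MySQL-to-Aurora migration flow. All names and
# credentials below are hypothetical placeholders; set DRY_RUN=0 only
# after substituting real values.
set -eu

DRY_RUN=${DRY_RUN:-1}
run() {
  if [ "$DRY_RUN" -eq 1 ]; then echo "+ $*"; else "$@"; fi
}

BUCKET=my-aurora-migration                                # hypothetical
ROLE_ARN=arn:aws:iam::123456789012:role/aurora-s3-access  # hypothetical

# Step 2: take a "hot" backup with Percona XtraBackup, then compress it
run xtrabackup --backup --user=backup --password=secret --target-dir=/data/backup
run tar czf /data/backup.tar.gz -C /data backup

# Step 3: upload the backup to S3 (the CLI handles multipart upload)
run aws s3 cp /data/backup.tar.gz "s3://$BUCKET/backup.tar.gz"

# Step 5: create the Aurora cluster from the uploaded backup
run aws rds restore-db-cluster-from-s3 \
  --db-cluster-identifier my-aurora-cluster \
  --engine aurora \
  --source-engine mysql \
  --source-engine-version 5.6.29 \
  --s3-bucket-name "$BUCKET" \
  --s3-ingestion-role-arn "$ROLE_ARN" \
  --master-username admin \
  --master-user-password 'change-me'

# Step 7: point the cluster at the source's binary log and start replication
run mysql -h my-aurora-cluster.cluster-example.us-east-1.rds.amazonaws.com -u admin -p \
  -e "CALL mysql.rds_set_external_master('source-host', 3306, 'repl_user', 'repl_pass', 'mysql-bin-changelog.000001', 120, 0); CALL mysql.rds_start_replication;"
```

Running the script as-is simply echoes the commands, which makes it easy to review (and adapt) each step before the real dry run against a staging environment.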
Given the mission-critical nature of a production-level relational database, a dry run is always a good idea!
This feature is available now and you can start using it today in all public AWS regions with the exception of Asia Pacific (Mumbai). To learn more, read Migrating Data from an External MySQL Database to an Amazon Aurora DB Cluster in the Amazon Aurora User Guide.
My colleague Janna Pellegrino shared the guest post below to introduce you to a new set of AWS training bootcamps!

— Jeff;
We’ve made four of our most popular Technical Bootcamps from AWS re:Invent and Summits part of our broader AWS Training portfolio so you can now attend a class convenient to you.
- Taking AWS Operations to the Next Level teaches you how to leverage AWS CloudFormation, Chef, and AWS SDKs to automate provisioning and configuration of AWS infrastructure resources and applications. We also cover how to work with AWS Service Catalog. This course is designed for solutions architects and SysOps administrators.
- Securing Next-Gen Applications at Cloud Scale teaches you how to use a DevSecOps approach to design and build robust security controls at cloud scale for next-generation workloads. We cover design considerations of operating high-assurance workloads on the AWS platform. Labs teach you governance, configuration management, trust-decision automation, audit artifact generation, and native integration of tasks into custom software workloads. This course is for security engineers, developers, solutions architects, and other technical security practitioners.
- Running Container-Enabled Microservices on AWS teaches you how to manage and scale container-enabled applications by using Amazon ECS. Labs teach you to use Amazon ECS to handle long-running services, build and deploy container images, link services together, and scale capacity to meet demand. This course is for developers, solutions architects, and system administrators.
- Building a Recommendation Engine on AWS teaches you to build a real-time analytics and geospatial search application using Amazon Elasticsearch Service, Amazon DynamoDB, DynamoDB Streams, Amazon API Gateway, AWS Lambda, and Amazon S3. We discuss a real-world location-aware social application that displays information generated from a model created with Amazon Machine Learning. We also cover best practices for processing and analyzing data, such as the lambda data processing pattern and automating development process, using Swagger, Grunt, and the AWS SDK. This course is for developers, solutions architects, and data scientists.
— Janna Pellegrino, AWS Training and Certification