AWS Government, Education, & Nonprofits Blog

The Michael J. Fox Foundation Accelerates Research to Cure Parkinson’s with Intel and AWS


Parkinson’s disease (PD), a neurodegenerative disorder that impairs movement, cognition, mood, and autonomic function, affects an estimated 5 million people worldwide. Because symptoms vary from person to person, research into the disease is further complicated by a lack of objective data. As is typical of many applications used for clinical research, the collection, storage, and analysis of data is complex, expensive, and time-consuming.

The Michael J. Fox Foundation for Parkinson’s Research (MJFF) is dedicated to finding a cure for Parkinson’s disease through an aggressively funded research agenda and to ensuring the development of improved therapies for those living with Parkinson’s today. As part of a research initiative to evaluate the use of wearable technology to measure and track Parkinson’s symptoms, MJFF partnered with Intel and is using Intel’s big data analytics platform to run a number of research projects. The platform is hosted on AWS infrastructure and uses scalable big data and IoT technologies to collect, process, and store large streams of de-identified data from study participants’ smartphones and wearable devices.

“The Foundation is working in collaboration with AWS and Intel to ensure that we have a robust technology platform to run effective research studies. Partnering with AWS and Intel ensures that our data is stored securely and efficiently, and allows us to not have to worry about the IT components of the project and really focus on the objective at hand,” said Lauren Bataille, Senior Associate Director, Research Partnerships, The Michael J. Fox Foundation.

Research data is hosted on AWS and is made available to Parkinson’s researchers around the world via Intel’s platform. Through analysis, data may reveal new, useful insights about living with Parkinson’s disease.

“Today, the drug development pipeline for Parkinson’s is the best it has been in decades. And when you add the benefit of technology to enable us to complement that genetic information with phenotypic information, basically what it is like to live with the disease every day – that can be a game changer. These are the kinds of dovetailings that could catapult us into much faster progress,” said Deborah W. Brooks, Co-Founder and Executive Vice Chairman, The Michael J. Fox Foundation.

Watch this video to learn more about how the Foundation is using big data to gain new insights into Parkinson’s disease and accelerate a cure.

New AWS Training Bootcamps to Help You Build Technical Skills at the AWS Public Sector Summit


New to the AWS Public Sector Summit in Washington, DC this year: you can choose from four full-day bootcamps, available on Monday, June 12th.

AWS Training Bootcamps are full-day training sessions that offer you a chance to learn about AWS services and solutions through immersive exercises and hands-on labs. Delivered by experienced AWS instructors and Solutions Architects, these bootcamps let you work directly with AWS experts to get your questions answered.

Choose from one of the four below:

  • AWS Technical Essentials – Audience Level: Introductory – AWS Technical Essentials is a one-day, introductory-level bootcamp that introduces you to AWS products, services, and common solutions. It covers the fundamentals you need to identify AWS services, make informed decisions about IT solutions based on your business requirements, and get started working on AWS. Learn more.
  • Secrets to Successful Cloud Transformations – Audience Level: Introductory – Secrets to Successful Cloud Transformations is a one-day, introductory-level bootcamp that teaches you how to select the right strategy, people, migration plan, and financial management methodology when moving your workloads to the cloud. This course provides guidance on how to build a holistic cloud adoption plan and how to hire people who will execute that plan. You will learn best practices for choosing workloads to migrate from your on-premises environment to AWS, as well as for managing your AWS expenses and dealing with internal chargebacks. Learn more. Note: This course focuses on the business, rather than the technical, aspects of cloud transformation.
  • Building a Serverless Data Lake – Audience Level: Advanced – Building a Serverless Data Lake is a one-day, advanced-level bootcamp designed to teach you how to design, build, and operate a serverless data lake solution with AWS services. The bootcamp will include topics such as ingesting data from any data source at large scale, storing the data securely and durably, enabling the capability to use the right tool to process large volumes of data, and understanding the options available for analyzing the data in near-real time. Learn more.
  • Running Container-Enabled Microservices on AWS – Audience Level: Expert – Running Container-Enabled Microservices on AWS is a one-day, expert-level bootcamp that provides an in-depth, hands-on introduction to managing and scaling container-enabled applications. This full-day bootcamp provides an overview of container and microservice architectures. You will learn how to containerize an example application and architect it according to microservices best practices. Hands-on labs that feature the AWS container-focused services show you how to schedule long-running applications and services, set up a software delivery pipeline for your microservices application, and implement elastic scaling of the application based on customer load. Learn more.

All students must bring their own devices (a dual-core processor with 4GB of RAM is required). Each bootcamp is $600 and must be reserved in advance. Enter the code PSBOOT100 to get $100 off your ticket. Space is limited, so save your spot!

How to Achieve AWS Cloud Compliance with AWS, Allgress, and CloudCheckr


Assessing and measuring compliance requirements can be a full-time job. To mitigate risks, organizations must plan for cloud-based risk treatments, reporting and alerts, and automated responses to maintain security and compliance, as well as modernize their governance at scale.

AWS and its AWS Partner Network (APN) security partners are developing security and compliance tools that help customers build the capabilities and architectures needed to meet advanced security requirements on AWS. For example, Allgress and CloudCheckr are working together to solve security and compliance challenges and to provide greater transparency into which tools, services, and partner solutions should be used to manage security, continuously treat risk, and automate cloud services.

The Regulatory Product Mapping Tool (RPM) was developed to reduce complexity, increase speed, and shorten the timeframe needed to develop compliant architectures on AWS. The RPM tool interactively maps FedRAMP (NIST 800-53) controls to AWS services and APN solutions, presenting an interactive visual representation of all the FedRAMP Rev. 4 Moderate controls: the inner ring displays the domains and the outer ring displays the sub-domains. By clicking on the slices within the interactive RPM tool, customers can review the AWS inherited controls, shared controls, and the associated Technology and Consulting Partner controls. Try it using the guest login here.

You can also map and align AWS Technology Partner solutions to controls and provide detailed control treatments. This can be used to document, configure, and help automate security and compliance management. Additionally, partner solutions are directly linked to the AWS Marketplace.

AC-3 Access Enforcement – Control Treatment: CloudCheckr allows you to tag AWS accounts and create groups of AWS accounts. These groups are known in CloudCheckr as Multi-Account Views. You can also create a Multi-Account View for all AWS accounts in a single view. Follow the steps here to get your Multi-Account Views up and running. Once that is completed, best practice checks will be pulled from all of the tagged AWS accounts into a single best practices report.

AU-5 Response to Audit Processing Failures – Control Treatment: AWS CloudTrail provides activity monitoring capability for the AWS management plane. CloudTrail records every call into the AWS API. Any activity in AWS is recorded into the CloudTrail logs. CloudTrail logs are written into an S3 bucket as JSON files. A separate file is written every five minutes. Additionally, a different file is created for each AWS account and each region. The CloudTrail UI provides basic functionality to look up events for up to seven days. One of the easiest ways to keep track of your CloudTrail configuration is by using the CloudCheckr best practice checks.
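Because CloudTrail delivers its logs to S3 as gzipped JSON files, you can also scan them programmatically. Below is a minimal sketch in Python (boto3) that lists recent log files and flags failed API calls; the bucket name and key prefix are hypothetical placeholders, and the CloudCheckr best practice checks mentioned above remain the simpler option for ongoing monitoring.

```python
# Minimal sketch: scan CloudTrail log files delivered to S3 and surface failed API calls.
# The bucket name, account ID, and region prefix below are hypothetical placeholders.
import gzip
import json

import boto3

s3 = boto3.client("s3")
BUCKET = "my-cloudtrail-bucket"                                      # hypothetical bucket
PREFIX = "AWSLogs/111122223333/CloudTrail/us-gov-west-1/2017/06/"    # hypothetical prefix

paginator = s3.get_paginator("list_objects_v2")
for page in paginator.paginate(Bucket=BUCKET, Prefix=PREFIX):
    for obj in page.get("Contents", []):
        body = s3.get_object(Bucket=BUCKET, Key=obj["Key"])["Body"].read()
        records = json.loads(gzip.decompress(body))["Records"]
        for record in records:
            # Flag API calls that returned an error, a simple proxy for audit issues.
            if "errorCode" in record:
                print(record["eventTime"], record["eventName"], record["errorCode"])
```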

View the recorded webinar with AWS, Allgress, and CloudCheckr to learn how to achieve and demonstrate compliance in the cloud to satisfy the auditors, streamline reporting of technical and non-technical controls, and improve workflow across your key stakeholders.

A Guide to Backup and Recovery in the DoD


As the growth of Department of Defense (DoD) data accelerates, the task of protecting it becomes more challenging. Questions about the durability and scalability of backup methods are commonplace, including this one: How does the cloud help meet my backup and archival needs?

Given the mission-critical nature of data within the DoD, business continuity requires that technology infrastructure and systems continue to operate, or recover quickly, despite serious disasters. Currently, defense agencies may be backing up to tape, sending data to a base or contractor site, or sending it to a third party to distribute and store with little control and significant expense. Then, when it is time to do a restore, it can take weeks to recover the petabytes of data.

With the AWS Cloud, those weeks to recover the data can be reduced to hours by using Amazon Simple Storage Service (Amazon S3) or Amazon Glacier for long-term backup. DoD backup data can sit in any AWS Region in the US, not only reducing costs but also reducing the requirements to provide backup connectivity.
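As a simple illustration of this pattern, the following sketch (assuming a hypothetical backup bucket and file names) uploads a backup to Amazon S3 with server-side encryption and adds a lifecycle rule that transitions older backups to Amazon Glacier for low-cost, long-term retention.

```python
# Minimal sketch, assuming a hypothetical backup bucket in AWS GovCloud (US):
# upload a backup file to Amazon S3 with server-side encryption, then add a
# lifecycle rule that moves older backups to Amazon Glacier.
import boto3

s3 = boto3.client("s3", region_name="us-gov-west-1")
BUCKET = "dod-backup-archive"          # hypothetical bucket name

# Encrypt the backup at rest as it is written to S3.
s3.upload_file(
    "/backups/db-2017-06-01.bak",      # hypothetical local backup file
    BUCKET,
    "daily/db-2017-06-01.bak",
    ExtraArgs={"ServerSideEncryption": "AES256"},
)

# Keep 30 days of backups in S3, then transition them to Glacier.
s3.put_bucket_lifecycle_configuration(
    Bucket=BUCKET,
    LifecycleConfiguration={
        "Rules": [{
            "ID": "archive-old-backups",
            "Filter": {"Prefix": "daily/"},
            "Status": "Enabled",
            "Transitions": [{"Days": 30, "StorageClass": "GLACIER"}],
        }]
    },
)
```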

Public sector organizations are using the AWS Cloud to enable faster disaster recovery (DR) of their critical IT systems without incurring the infrastructure expense of a second physical site. The AWS Cloud supports many popular DR architectures, from “pilot light” environments that are ready to scale up at a moment’s notice to “hot standby” environments that enable rapid failover. Learn more about how to rapidly recover mission-critical systems in a disaster here.

Where to start?

When you develop a comprehensive strategy for backing up and restoring data, you must first identify the failure or disaster situations that may occur and their potential mission impact. Within the DoD, you must also consider regulatory requirements for data security, privacy, and records retention.

Read below for steps to get started with disaster recovery:

  • Start somewhere and scale up: Choose what needs to fail over and what does not. Some things may be more important than others, and some may still be working. A hybrid architecture approach can be an option based on who the mission owner is, the application, connectivity, and the Impact Level. Depending on the backup solution, you could archive to AWS while maintaining recent backups on-premises.
  • Increase your security posture in the cloud: AWS provides a number of options for access control and encrypting data in transit and at rest.
  • Meet compliance requirements: Data custody and integrity must be maintained. The Commercial Cloud Security Requirements Guide (CC SRG) lays the framework for data classification and how cloud providers and DoD agencies must work to control access. The AWS Cloud meets Impact Level 2 (IL-2) for all CONUS regions, has a PATO for IL-4, and waivers for IL-5 in the AWS GovCloud (US) Region. This allows DoD mission owners to continue to leverage AWS for their mission-critical production applications.
  • Test the system: DR plans often go untested until a major change to the system forces documentation updates. With AWS, you can test whether a backup was successful by spinning up the backup data, validating that it restored completely, and comparing it to the existing on-premises environment (see the sketch after this list).
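As one minimal way to exercise the “test the system” step, the sketch below downloads a backup object from S3 and confirms its SHA-256 digest matches the on-premises copy. The bucket, key, and file paths are hypothetical placeholders.

```python
# Minimal sketch of the "test the system" step: download a backup object from S3
# and confirm its SHA-256 digest matches the on-premises copy. Bucket, key, and
# file paths are hypothetical placeholders.
import hashlib

import boto3

def sha256_of_file(path, chunk_size=1024 * 1024):
    digest = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(chunk_size), b""):
            digest.update(chunk)
    return digest.hexdigest()

s3 = boto3.client("s3", region_name="us-gov-west-1")
s3.download_file("dod-backup-archive", "daily/db-2017-06-01.bak", "/tmp/restored.bak")

local = sha256_of_file("/backups/db-2017-06-01.bak")    # on-premises copy
restored = sha256_of_file("/tmp/restored.bak")          # copy pulled back from S3

print("Backup verified" if local == restored else "Backup mismatch: investigate")
```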

In the field, backing up to Amazon S3

AWS works with many of the industry-leading backup and recovery solution providers and backup storage manufacturers. This makes backing up to the cloud even easier by providing direct targeted access via API calls to AWS Cloud storage solutions. Many of these solutions can also help to instantiate backup data tests or entire DR environments in minutes.

For example, defense teams are leveraging CommVault media servers that point to a NetApp AltaVault appliance as an on-premises caching mechanism. The AltaVault uses S3 API calls to push the backups to S3 buckets in the AWS GovCloud (US) Region. The customer’s media servers were able to target multiple storage solutions to test the best-case scenario, pushing backups to their existing tape library and to the AltaVault appliance (and on to S3) simultaneously. S3 was determined to be the lowest-cost solution for long-term data storage. This solution eliminated the need for a tape library hardware refresh as well as for off-site tape set rotations, resulting in cost savings and operational improvements.

Download our “Backup and Recovery Approaches Using AWS” whitepaper here for the technical steps agencies take to get started today.


Whether you are interested in backup and recovery, security, or DevOps, there is something for everyone at the AWS Public Sector Summit June 12-14 in Washington, DC. Join Telos and AWS, and register today!

AWS Lambda Is Now Available in the AWS GovCloud (US) Region


Serverless Computing Tailored for Regulated IT Workloads and Sensitive Controlled Unclassified Information (CUI) Data

AWS Lambda, a serverless compute service, is now available in the AWS GovCloud (US) Region, Amazon’s isolated cloud region built for sensitive data and regulated workloads.

Lambda now enables developers to run code in AWS GovCloud (US) without provisioning or managing servers. Lambda executes your code only when needed and scales automatically, from a few requests per day to thousands per second. You pay only for the compute time you consume; there is no charge when your code is not running. With Lambda, you can run code for virtually any type of application or backend service, all with zero administration.

You can use Lambda to run your code in response to events, such as:

  • Changes to data in an Amazon S3 bucket or an Amazon DynamoDB table.
  • Invoking your code directly via API calls made with the AWS SDKs.

With these capabilities, you can use Lambda to easily build data processing triggers for AWS services such as S3 and DynamoDB, process streaming data stored in Amazon Kinesis, or create your own backend that operates at AWS scale, performance, and security.
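For instance, here is a minimal sketch of a Python Lambda handler wired to an S3 event source: it runs whenever an object lands in a bucket and records the object’s size in a DynamoDB table. The table name and attribute names are hypothetical placeholders.

```python
# Minimal sketch of a Lambda data-processing trigger: runs on each S3 object-created
# event and records the object's size in a DynamoDB table (hypothetical table name).
import boto3

dynamodb = boto3.resource("dynamodb")
table = dynamodb.Table("object-inventory")   # hypothetical table

def handler(event, context):
    # Each S3 event can carry one or more records.
    for record in event["Records"]:
        bucket = record["s3"]["bucket"]["name"]
        key = record["s3"]["object"]["key"]
        size = record["s3"]["object"]["size"]
        table.put_item(Item={"object_key": key, "bucket": bucket, "size_bytes": size})
    return {"processed": len(event["Records"])}
```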

How does Lambda work?

Upload your code as Lambda functions and Lambda takes care of everything required to run and scale your code with high availability. Lambda seamlessly deploys your code, runs your code on a high-availability compute infrastructure, and performs all of the administration of the compute resources. This includes server and operating system maintenance, capacity provisioning and automatic scaling, and code monitoring and logging through Amazon CloudWatch. All you need to do is supply your code in one of the languages that Lambda supports (currently Node.js, Java, C#, and Python).

Lambda allows your code to access other AWS services securely through its built-in AWS SDK and integration with AWS Identity and Access Management (IAM).

By default, Lambda runs your code inside an AWS-managed Virtual Private Cloud (VPC). You can optionally configure Lambda to access resources within your own VPC, allowing you to leverage custom security groups and network access control lists to give your Lambda functions access to resources in that VPC.
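Attaching a function to your own VPC is a single configuration change. The sketch below shows one way to do it with boto3; the function name, subnet IDs, and security group ID are hypothetical placeholders.

```python
# Minimal sketch, assuming an existing function plus subnet and security group IDs:
# attach the function to your own VPC so it can reach resources such as a private
# database. All identifiers shown are hypothetical.
import boto3

lambda_client = boto3.client("lambda", region_name="us-gov-west-1")

lambda_client.update_function_configuration(
    FunctionName="process-study-data",                             # hypothetical function
    VpcConfig={
        "SubnetIds": ["subnet-0a1b2c3d", "subnet-4e5f6a7b"],       # hypothetical subnets
        "SecurityGroupIds": ["sg-0123abcd"],                       # hypothetical security group
    },
)
```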

Examples of Lambda in Use – NASA’s Jet Propulsion Laboratory (JPL) & The Financial Industry Regulatory Authority (FINRA)

JPL is a known innovator in space exploration, and much of the data JPL uses is sensitive. Running Lambda in AWS GovCloud (US) will allow JPL to use AWS IoT and serverless computing on numerous mission workloads. This will enable JPL to save money and run effortlessly at huge scale as it searches for answers to the big questions regarding life in space, finding Earth 2.0, protecting Earth, and more.

FINRA used AWS Lambda to build a serverless data processing solution that enables them to perform half a trillion data validations on 37 billion stock market events daily. “We found that Lambda was going to provide us the best solution, for this serverless cloud solution. With Lambda, the system was faster, cheaper, and more scalable. So at the end of the day, we’ve reduced our costs by over 50% … and we can track it daily, even hourly,” said Tim Griesbach, Senior Director, FINRA, in his 2016 re:Invent talk. Regardless of data volume, any file is available in under one minute. And, they have less infrastructure to manage now.

How to get started

  1. Create an AWS account and select AWS GovCloud (US) as your region in the AWS Management Console.
  2. Choose Lambda in the AWS Management Console and create your function by uploading your code (or building it right in the Lambda console) and choosing the memory, timeout, and IAM role.
  3. Specify the AWS resource and event that trigger the function, such as a particular S3 bucket, DynamoDB table, or Kinesis stream (see the sketch after these steps).
  4. When the resource generates the appropriate event, Lambda runs your function and manages the necessary computing resources to keep up with incoming requests.
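If you prefer to script steps 2 and 3, the sketch below does the same thing with the AWS SDK for Python (boto3): it creates a function from a deployment package and then lets an S3 bucket invoke it on object creation. The role ARN, bucket name, account ID, and file names are hypothetical placeholders.

```python
# Minimal sketch of steps 2-3 above using boto3: create a Lambda function from a
# deployment package, then allow an S3 bucket to invoke it when objects are created.
# The role ARN, bucket name, account ID, and file names are hypothetical.
import boto3

REGION = "us-gov-west-1"
lambda_client = boto3.client("lambda", region_name=REGION)
s3 = boto3.client("s3", region_name=REGION)

# Step 2: create the function from a zipped deployment package.
with open("function.zip", "rb") as f:
    lambda_client.create_function(
        FunctionName="process-uploads",
        Runtime="python3.6",
        Role="arn:aws-us-gov:iam::111122223333:role/lambda-exec-role",  # hypothetical role
        Handler="app.handler",
        Code={"ZipFile": f.read()},
        MemorySize=128,
        Timeout=30,
    )

# Step 3: let S3 invoke the function, then register the bucket notification.
lambda_client.add_permission(
    FunctionName="process-uploads",
    StatementId="allow-s3-invoke",
    Action="lambda:InvokeFunction",
    Principal="s3.amazonaws.com",
    SourceArn="arn:aws-us-gov:s3:::study-uploads",                      # hypothetical bucket
)
s3.put_bucket_notification_configuration(
    Bucket="study-uploads",
    NotificationConfiguration={
        "LambdaFunctionConfigurations": [{
            "LambdaFunctionArn":
                "arn:aws-us-gov:lambda:us-gov-west-1:111122223333:function:process-uploads",
            "Events": ["s3:ObjectCreated:*"],
        }]
    },
)
```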

Learn more about AWS GovCloud (US) here or contact the AWS GovCloud (US) team with any questions.

The Importance and Necessity of Modernizing Government


Federal agencies are faced with increasing pressure to modernize their aging information technology systems and data centers. From global cybersecurity attacks to the constant pressure to do more with less, agencies must act to migrate to secure and modern IT systems. Today, the U.S. House of Representatives took an important step in providing agencies the necessary funding tools to modernize outdated federal systems by passing the Modernizing Government Technology (MGT) Act.

“Technology is evolving at a rapid pace and our citizens deserve a digital experience that keeps pace with innovation. We are pleased to see the bipartisan and bicameral Modernizing Government Technology Act, which enables the federal government to take advantage of commercial cloud services to lower IT costs, strengthen cybersecurity, and provide quick access to tremendous computing power and innovation,” said Teresa Carlson, Vice President, Worldwide Public Sector, AWS.

Cloud computing allows agencies to focus more on what matters most: their mission. Whether reducing wait times for veterans receiving healthcare, encouraging scientific research and space exploration, or providing first-rate education to the next generation, government has the ability to leverage technology to deliver better, faster, and more secure services to citizens.

“We applaud the numerous sponsors of the MGT Act for their leadership on this important piece of U.S. legislation and hope this will enable federal agencies to get the full benefit of commercial cloud services and emerging technologies,” said Teresa.

Learn more about how the cloud paves the way for innovation and supports world-changing projects in government here.

In Pursuit of a 1 Hour, $10 Genome Annotation


There are hundreds of scientists at the Smithsonian Institution who study just about every kind of life on earth, from animals and plants to fungi and bacteria. Since the initial publication of the human genome project in 2001, DNA sequencing technology has become more efficient and cost-effective, making it possible for individual biodiversity scientists to generate genome resources for their organisms of interest. These genomes can be the gateway to new research questions that were previously unanswerable.

Biodiversity genomics scientists face special challenges because they seek to understand genomes that range dramatically in size and complexity (for example, some plant genomes are more than 10 times larger than the human genome). These scientists need agile software and hardware solutions that can be frequently updated to reflect the ever-increasing data behind algorithms and models.

To tackle these challenges, the Smithsonian’s Office of the Chief Information Officer recently established a Data Science Team, including Dr. Rebecca Dikow and Dr. Paul Frandsen. Part of their mission is to implement solutions that will accelerate science and lower the barrier to entry for genomics research, not only for Smithsonian scientists but for biodiversity researchers in general. Although many large institutions have computing resources available to their researchers, there are queue limits and significant costs associated with operating a high-performance computing cluster. In addition, many smaller research institutions and universities may not have access to such resources.

Dikow and Frandsen are collaborating with AWS and Intel to improve a critical part of the genome analysis pipeline – annotation. Genome annotation is the process of identifying the locations of genes and other genomic features and determining their function, the first step in downstream applications of genomic data.

“Cloud technologies are a natural choice for annotation because different parts of a genome assembly (contigs or scaffolds) can be annotated in parallel, with the results being knitted together in a final step,” said Dikow. “The ability to scale up to many instances for brief periods will make annotation fast while remaining inexpensive.”

The Smithsonian’s Data Science Team is implementing existing annotation pipelines, such as MAKER (Cantarel et al., 2008) and WQ_MAKER (Thrasher et al., 2012), as well as developing their own using the workflow engine Toil. Toil uses Common Workflow Language (CWL), which will allow the tools developed to be modular, portable, and scalable across thousands of AWS instances.

What makes these pipelines complex is the need to process each genome scaffold with multiple software tools in turn and to keep track of thousands of intermediate files and any failed tasks. The team has successfully implemented the first step in the annotation pipeline in Toil, which includes masking genome repeats with RepeatMasker (Smit et al., 2015), across 10 c3.xlarge instances. As they continue to make progress in the coming months, their code will be available on GitHub.
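To illustrate the parallel pattern Dikow describes (and not the team’s actual pipeline), here is a minimal Toil sketch in Python: one child job runs RepeatMasker on each scaffold, and a follow-on job runs after all of them finish. The scaffold file names and RepeatMasker options are hypothetical.

```python
# Minimal sketch (not the Smithsonian team's actual pipeline) of the parallel
# annotation pattern: a child job masks repeats in each scaffold with RepeatMasker,
# and a follow-on job runs once all scaffolds are done. File names and options
# are hypothetical.
import subprocess

from toil.common import Toil
from toil.job import Job

def mask_scaffold(job, scaffold_path):
    # Scaffolds are independent, so Toil can schedule these jobs on many
    # instances in parallel.
    subprocess.check_call(["RepeatMasker", "-species", "eukaryota", scaffold_path])
    return scaffold_path + ".masked"

def gather(job, n):
    # Final "knitting together" step, reached only after all children complete.
    job.fileStore.logToMaster("Finished masking %d scaffolds" % n)

def workflow(job, scaffold_paths):
    for path in scaffold_paths:
        job.addChildJobFn(mask_scaffold, path)
    job.addFollowOnJobFn(gather, len(scaffold_paths))

if __name__ == "__main__":
    options = Job.Runner.getDefaultOptions("./annotation-jobstore")
    scaffolds = ["scaffold_%03d.fasta" % i for i in range(1, 11)]  # hypothetical inputs
    with Toil(options) as toil:
        toil.start(Job.wrapJobFn(workflow, scaffolds))
```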

Rebecca Dikow presented the team’s progress at the first Global Biodiversity Genomics “BioGenomics” conference held in Washington, DC, which was hosted by the Smithsonian Institution. This conference was a gathering of more than 300 genome and biodiversity scientists that focused on the methods and analysis of biodiverse genome data. There was great interest in the annotation pipeline currently under development. Check out the “Improving genome annotation strategies for biodiverse species using cloud technologies” slide deck presented at the conference here.

Achieve Total Cost of Operation Benefits Using Cloud


A core reason organizations adopt a cloud IT infrastructure is to save money. The traditional approach of analyzing Total Cost of Ownership no longer applies when you move to the cloud. Cloud services provide the opportunity for you to use only what you need and pay only for what you use. We refer to this new paradigm as the Total Cost of Operation in our latest white paper on “Maximizing Value with AWS.” You can use Total Cost of Operation (TCO) analysis methodologies to compare the costs of owning a traditional data center with the costs of operating your environment using AWS cloud services.

Get started with these cost-saving tips and download the whitepaper for more details:

  1. Create a culture of cost management: All teams can help manage costs, and cost optimization should be everyone’s responsibility. There are many variables that affect cost, with different levers that can be pulled to drive operational excellence.
  2. Start with an understanding of current costs: Having a clear understanding of your existing infrastructure and migration costs and then projecting your savings will help you calculate payback time, estimate ROI, and maximize the value your organization gains from migrating to AWS.
  3. Select the right plan for specific workloads: Moving business applications to the AWS Cloud helps organizations simplify infrastructure management, deploy new services faster, provide greater availability, and lower costs.
  4. Employ best practices: AWS delivers a robust set of services specifically designed for the unique security, compliance, privacy, and governance requirements of large organizations.

With a technology platform that is both broad and deep, professional services and support organizations, training programs, and an ecosystem that is tens of thousands of partners strong, AWS can help you move faster and do more.

Download the whitepaper to learn more.


Learn more about how to save money in the cloud and join CloudCheckr and AWS at the AWS Public Sector Summit June 12-14, 2017 in Washington, DC. Register today!

Virtual Learning and the Power of Technology for the Future of Learning


Breaking down barriers to opportunity is a top priority for many educational organizations. By expanding learning beyond the confines of a physical classroom, technology helps increase access to courses and level the playing field for students.

For schools and educators, the cloud offers not only cost savings and agility, but also the opportunity to develop breakthroughs in educational models and student engagement. For students, it means access to the resources they need, especially in geographically dispersed areas where access might be limited.

For example, Idaho Digital Learning (IDL), the state-sponsored online school serving K-12 students across all of Idaho, has been an early adopter of cloud in K-12 education. Idaho Digital Learning offers an assortment of courses including core curriculum, credit recovery, electives, dual credit, and Advanced Placement. These courses are offered in Cohort, Flex, and Hybrid formats to accommodate the nearly 27,000 enrolled students’ schedules and learning styles.

Through a strong collaboration between industry, school districts, and the state government, IDL accomplishes its mission to build the pipeline of talent to fill the growing number of STEM (science, technology, engineering and math) jobs, which can be a challenge in a rural state like Idaho.

Idaho Digital Learning and AWS

Idaho Digital Learning has been able to use the power of technology to flexibly, conveniently, reliably, and securely deliver content across school districts by providing virtual learning using AWS.

To provide the critical anytime, anywhere access to smaller schools, Idaho Digital Learning went all-in on AWS in 2014. The organization uses Amazon Elastic Compute Cloud (Amazon EC2), Amazon Route 53, Amazon Simple Storage Service (Amazon S3), and Amazon CloudFront to create access and opportunity for all Idaho students and educators through digital learning.

Some of the benefits realized by IDL include:

  • Continuity of Operations (no matter the location) – Moving physical locations to a virtualized model shared by districts with multiple failovers provides for always-on access and greater continuity of operations for all users. “Even when faced with incidents, such as a squirrel chewing a line or a snowstorm taking down the power, we were able to provide the services to our students,” said William Dembi, Infrastructure Specialist, Idaho Digital Learning. “The biggest benefit was business continuity. Previously, with on-premises, there were no failsafes. Now, because the solution is virtualized and backed up on AWS, we have peace of mind that we can deliver the content to students and educators, no matter what might pop up and no matter where they are.”
  • Security of Student Data – Protecting student data is a top priority with virtual learning, and AWS makes many common security practices simple and effective. A virtual private gateway creates tunnels for encrypted traffic between IDL’s AWS infrastructure and its on-premises network, and only SQL traffic is able to reach the SQL servers. NAT gateways protect the private subnets from inbound internet traffic: only responses to requests originating from within the private subnet, such as Windows updates, reach the private subnets and instances. “Hosting our sensitive data in the cloud eliminates any physical security needs, such as locks and access cards. IAM is utilized to enforce MFA on users with elevated permissions,” said William. “We currently have daily backups of our SIS database that are migrated directly to Amazon Glacier. These backups are encrypted and extremely cost effective, which allows for terabytes of backup uploaded at a minimal cost. The other added bonus has been ease of use. It’s super easy with AWS to just put in a credit card and start working.”
  • Data Accessibility and Efficiency – Currently, IDL uses more than one Learning Management System (LMS), depending on whether a student is taking a cohort, blended, or flex course. Although these formats vary significantly, especially when it comes to grading versus mastery-based outcomes, a lot of the learning content can be similar. “We had a situation where our teachers had to develop content twice for each LMS, which doubled their work. To solve this issue, we leveraged Amazon S3 as an easily accessible data store for our course content,” said William. “Because we uploaded our content to an LMS-agnostic data store (rather than uploading it to something like Blackboard’s content collection), we were able to develop the content once and just embed it twice. This saved a major amount of development time for our teachers.” A sketch of this develop-once, embed-twice pattern follows this list.
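The following is a minimal sketch of the develop-once, embed-twice pattern described above: upload a piece of course content to S3 one time, then generate an embeddable URL that either LMS can reference. The bucket, key, and file names are hypothetical placeholders.

```python
# Minimal sketch of an LMS-agnostic content store on S3: upload the content once,
# then generate a URL that can be embedded in each LMS course page.
# Bucket and key names are hypothetical.
import boto3

s3 = boto3.client("s3")
BUCKET = "idl-course-content"          # hypothetical LMS-agnostic content bucket

# Upload the content once.
s3.upload_file(
    "algebra-unit3-lesson2.html",
    BUCKET,
    "algebra/unit3/lesson2.html",
    ExtraArgs={"ContentType": "text/html"},
)

# Generate a time-limited URL that can be embedded in any LMS course page.
embed_url = s3.generate_presigned_url(
    "get_object",
    Params={"Bucket": BUCKET, "Key": "algebra/unit3/lesson2.html"},
    ExpiresIn=7 * 24 * 3600,
)
print(embed_url)
```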

Learn more about the AWS tools to help every student get the attention needed to thrive in and out of the classroom.

Announcing USAspending.gov on an Amazon RDS Snapshot


The Digital Accountability and Transparency Act of 2014 (DATA Act) aims to make government agency spending more transparent to citizens by making financial data easily accessible and by establishing common standards for the data all government agencies collect and share on the government website, USAspending.gov.

We are pleased to announce that the USAspending.gov database is now available for anyone to access via Amazon RDS. It includes data on all spending by the federal government, including contracts, grants, loans, employee salaries, and more.

The data is available via a PostgreSQL snapshot, which provides bulk access to the entire USAspending.gov database and is updated nightly. At this time, the database includes all USAspending.gov data for the second quarter of fiscal year 2017, and data going back to the year 2000 will be added over the summer. You can learn more about the database and how to access it on the AWS Public Dataset landing page.

Now that this data is available as a public snapshot on Amazon RDS, anyone can get a copy of USAspending.gov’s entire production database for their own use within minutes. Researchers and businesses who want to work with real data about US Government spending can quickly combine it with their own data or other data resources.
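As a minimal sketch of what that looks like with the AWS SDK for Python (boto3), the snippet below restores a new RDS instance from the public snapshot and waits for it to become available. The snapshot identifier and instance settings shown are hypothetical placeholders; check the AWS Public Dataset page for the actual snapshot name.

```python
# Minimal sketch: launch your own copy of the USAspending.gov database from the
# public snapshot. The snapshot identifier and instance settings are hypothetical
# placeholders; see the AWS Public Dataset page for the real snapshot name.
import boto3

rds = boto3.client("rds", region_name="us-east-1")

rds.restore_db_instance_from_db_snapshot(
    DBInstanceIdentifier="my-usaspending-copy",
    DBSnapshotIdentifier="arn:aws:rds:us-east-1:123456789012:snapshot:usaspending-db",  # hypothetical
    DBInstanceClass="db.m4.xlarge",
    PubliclyAccessible=False,
)

# Wait until the new PostgreSQL instance is ready, then connect with any SQL client.
waiter = rds.get_waiter("db_instance_available")
waiter.wait(DBInstanceIdentifier="my-usaspending-copy")
```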

When data is made publicly available on AWS, anyone can analyze any volume of it without needing to download or store it themselves, enabling more innovation, more quickly. Users can work with this data using the entire suite of AWS data analytics products and easily collaborate with other AWS users.

Learn more about how to launch your copy of the snapshot and how Amazon RDS can be used to share an entire relational database quickly and easily in the blog post here.