Secure Network Connections: An Evaluation of the US Trusted Internet Connections Program

Access the first guide in the new AWS Government Handbook Series: Secure Network Connections: An evaluation of the US Trusted Internet Connections program.

As a global first mover, the U.S. Government has invested considerable time in developing approaches to network perimeter security. However, these approaches were designed for traditional IT environments, and additional innovation and iteration are necessary to align them with newer, non-traditional technologies, such as cloud.

This document discusses the following:

  • A summary of lessons learned from AWS’s work with various government agencies, including the Department of Homeland Security (DHS).
  • The various federal-wide secure network connections programs, particularly the “Trusted Internet Connections” (TIC) initiative.
  • The AWS policy position and recommendations for how governments can consider establishing or enhancing their cloud-based network perimeter monitoring capabilities.

Download the AWS Government Handbook.

We are encouraged by the government’s evolution toward innovative, cloud-adaptive solutions for achieving network perimeter monitoring objectives in the cloud. We are committed to ongoing collaboration with governments worldwide as they evaluate the merits, best practices, and lessons learned from the TIC program.

Look for the next handbook in the series coming later this year.

Automatically Discover, Classify, and Protect Your Data

In our post, Building a Cloud-Specific Incident Response Plan, we walked through a hypothetical incident response (IR) managed on AWS with the Johns Hopkins University Applied Physics Laboratory (APL). With the recent launch of Amazon Macie, a new data classification and security service, you have additional controls for understanding the type of data stored in Amazon Simple Storage Service (Amazon S3). Amazon Macie can also help you meet your compliance objectives, with the ability to set up automated mechanisms to track and report security incidents.

Amazon Macie is a security service that uses machine learning to automatically discover, classify, and protect sensitive data in AWS. Amazon Macie recognizes sensitive data such as personally identifiable information (PII) or intellectual property, and provides you with dashboards and alerts that give visibility into how this data is being accessed or stored. The fully managed service continuously monitors data access activity for anomalies, and generates detailed alerts when it detects risk of unauthorized access or inadvertent data leaks.

Benefits of Amazon Macie for public sector organizations include:

  • Superior Visibility of Your Data – Amazon Macie makes it easy for security administrators to have management visibility into data storage environments, beginning with Amazon S3, with additional AWS data stores coming soon.
  • Simple to Set Up, Easy to Manage – Getting started with Amazon Macie is fast and easy. Log into the AWS console, select the Amazon Macie service, and provide the AWS accounts you would like to protect.
  • Data Security Automation Through Machine Learning – Amazon Macie uses machine learning to automate the process of discovering, classifying, and protecting data stored in AWS. This helps you better understand where sensitive information is stored and how it’s being accessed, including user authentications and access patterns.
  • Custom Alert Monitoring with CloudWatch – Amazon Macie can send all findings to Amazon CloudWatch Events, allowing you to build custom remediation and alert management that integrates with your existing security ticketing systems.
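As a minimal sketch of that last point, the snippet below builds a CloudWatch Events rule that matches Macie findings and attaches a target; the rule name, description, and SNS topic are placeholders, not values Macie requires.

```python
import json

def macie_event_pattern():
    """Match events emitted by Amazon Macie via CloudWatch Events."""
    return {"source": ["aws.macie"]}

def build_rule_request(rule_name):
    """Parameters for events.put_rule, kept as a pure dict so it is easy to test."""
    return {
        "Name": rule_name,
        "EventPattern": json.dumps(macie_event_pattern()),
        "State": "ENABLED",
        "Description": "Route Amazon Macie findings to custom remediation",
    }

def create_macie_rule(events_client, rule_name, target_arn):
    """Create the rule, then attach a target (e.g. an SNS topic that feeds
    your security ticketing system)."""
    events_client.put_rule(**build_rule_request(rule_name))
    events_client.put_targets(
        Rule=rule_name,
        Targets=[{"Id": "1", "Arn": target_arn}],
    )

# Usage (boto3 credentials and region assumed):
#   create_macie_rule(boto3.client("events"), "macie-findings",
#                     "arn:aws:sns:us-east-1:123456789012:security-alerts")
```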

Customers including Edmunds, Netflix, and Autodesk are using Amazon Macie to provide insights that will help them tackle security challenges. Learn more about how to get started with Amazon Macie. If you are a first-time user of Amazon Macie, we recommend that you begin by reading the Macie documentation.

Structuring the Cloud Deal: End-of-Year Buying Options

The shift from buying hardware to accessing cloud services makes technology faster, easier, and less expensive to acquire. It also allows you to buy using operating expenses (OpEx) instead of capital expenses (CapEx). But what if you have money that needs to be spent at the end of your budget year? Take a look at your options below.

On-Demand Instances: On-Demand Instances let you pay for compute capacity by the hour with no long-term commitments or upfront payments. You can increase or decrease your compute capacity depending on the demands of your application and only pay the specified hourly rate for the instances you use.

Reserved Instances: Reserved Instances (RIs) let you make a larger upfront payment in exchange for a greater discount. By using Reserved Instances, you can minimize risks, manage budgets more predictably, and comply with policies that require longer-term commitments. Additionally, you are assured that your Reserved Instance will always be available for the operating system and Availability Zone in which you purchased it.

Dedicated Instances: Dedicated Instances are Amazon EC2 instances that run in a VPC on hardware that’s dedicated to a single customer. Your Dedicated Instances are physically isolated at the host hardware level from instances that belong to other AWS accounts. Dedicated Instances may share hardware with other instances from the same AWS account that are not Dedicated Instances. Pay for Dedicated Instances On-Demand, save up to 70% by purchasing Reserved Instances, or save up to 90% by purchasing Spot Instances.

Spot Instances: Spot Instances let you purchase spare Amazon EC2 compute capacity with no upfront commitment and at hourly rates often below On-Demand prices. You specify the maximum hourly price you are willing to pay to run a particular instance type.
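To make the Spot price cap concrete, here is a sketch of a run_instances request that asks for Spot capacity; the AMI ID, instance type, and price are placeholders.

```python
def spot_market_options(max_hourly_price):
    """InstanceMarketOptions asking EC2 for Spot capacity with a price cap."""
    return {
        "MarketType": "spot",
        "SpotOptions": {
            "MaxPrice": str(max_hourly_price),  # the most you will pay per hour
            "SpotInstanceType": "one-time",
        },
    }

def build_spot_request(ami_id, instance_type, max_hourly_price):
    """Parameters for ec2.run_instances requesting a single Spot instance."""
    return {
        "ImageId": ami_id,
        "InstanceType": instance_type,
        "MinCount": 1,
        "MaxCount": 1,
        "InstanceMarketOptions": spot_market_options(max_hourly_price),
    }

# Usage (boto3 credentials assumed):
#   boto3.client("ec2").run_instances(
#       **build_spot_request("ami-12345678", "m5.large", 0.05))
```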

Learn what type of instance is right for you by visiting a special section of our site focused on How to Buy the Cloud.

AWS’s breadth of services and pricing options offer the flexibility to effectively manage your costs and still keep the performance and capacity your agency requires. With AWS, you can easily use the spot market, save when you reserve, and only pay for what you use.

For more information, join us for our “How to Buy Cloud Computing Services for Your Agency” webinar.

Login.gov on AWS: One Username and Password for Every Public User

Login.gov delivers an identity platform for public users interacting with government websites by combining maximum security standards, open source technologies, and the AWS Cloud.

The goal is simple: one username and one password for every public user who interacts with government websites. To accomplish this, login.gov merges a user-focused design with the highest security standards from the National Institute of Standards and Technology (NIST) and the Cybersecurity National Action Plan. The team also committed to making login.gov an open project that leverages key technologies—like Amazon Elastic Compute Cloud (Amazon EC2), Amazon Relational Database Service (Amazon RDS), and Amazon Simple Storage Service (Amazon S3)—to build a highly available and scalable platform.

While performance of the platform is critical, security is paramount to the platform’s success. Login.gov is leveraging AWS services to give the platform the strongest security disposition possible. This includes using AWS to rapidly iterate and keep up with the latest technologies and current cyber threats.

The team can quickly stand up new environments to validate patches, push changes, or test new solutions. Additionally, login.gov is leveraging key AWS services, like Amazon CloudWatch and AWS Key Management Service (KMS), to keep the platform secure. KMS is critical to deploying their ‘vault,’ allowing data to be encrypted with keys unique to every individual.
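A minimal sketch of per-individual encryption with KMS data keys follows; the encryption-context key name and CMK alias are illustrative assumptions, not login.gov’s actual configuration.

```python
def per_user_context(user_id):
    """Encryption context binding ciphertext to one user; KMS requires the
    same context at decrypt time and records it in CloudTrail."""
    return {"user-id": user_id}

def data_key_request(cmk_id, user_id):
    """Parameters for kms.generate_data_key: one unique data key per individual."""
    return {
        "KeyId": cmk_id,
        "KeySpec": "AES_256",
        "EncryptionContext": per_user_context(user_id),
    }

# Usage (boto3 credentials assumed):
#   kms = boto3.client("kms")
#   key = kms.generate_data_key(**data_key_request("alias/vault", "user-123"))
#   # Encrypt the user's record locally with key["Plaintext"], then discard it
#   # and store only key["CiphertextBlob"] alongside the ciphertext.
```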

Login.gov will not only make accessing government websites easier, but it will also aid federal agencies in deploying identity solutions. With the login.gov team managing the system, federal agencies no longer have to spend time, money, and resources developing and maintaining their own identity platform. This means federal agencies can spend fewer resources on developing identity solutions and more resources on their mission.

 

AWS Marketplace Powering GCTC Smart City Solutions

Amazon is collaborating with teams participating in the Global City Teams Challenge (GCTC) – a program led by the National Institute of Standards and Technology (NIST), a non-regulatory agency of the United States Department of Commerce – to give cities around the world access to smart city solutions directly from AWS.

GCTC helps communities partner with innovators that use networked technologies to solve problems, which range from mass transit improvement to energy management to disaster response. A goal of the GCTC is to promote the emergence of a robust marketplace of replicable, standards-based IoT solutions available to communities worldwide to meet their smart city needs. The Smart City Marketplace will host solutions in the AWS Marketplace, an online store that helps customers discover, purchase, migrate, and immediately start using the software and services they need. These solutions will include the Internet of Things (IoT), analytics, data platform, visualization, and open data.

“AWS Marketplace is a global digital catalog that helps customers find, subscribe to, and deploy the software and services they need to run on AWS. We recently expanded AWS Marketplace to address IoT solutions, and our catalog can now enable customers to build secure and cost effective IoT and smart city solutions. We are pleased to announce several new sellers available in the AWS Marketplace that are part of the Global City Teams Challenge,” said Dave McCann, Vice President, AWS Marketplace, Amazon Web Services. “We want to make it easy for communities around the world to quickly access proven solutions and improve services for their citizens.”

More than 100 teams from around the world are participating in this year’s GCTC, created by NIST and sponsored by the Department of Homeland Security (DHS), and they will experience firsthand this specialized marketplace, where GCTC teams have the opportunity to post their solutions on the AWS Marketplace.

International participants will include cities in Finland, France, Ireland, Italy, Japan, Korea, Nigeria, Portugal, Taiwan, and the United Kingdom.

Learn more about AWS for Smart, Connected and Sustainable Cities.

Artificial Intelligence for Startups

Artificial intelligence (AI) is everywhere in our day-to-day lives – whether it’s ordering household products, searching the Internet, playing music, providing weather forecasts, or many other tasks.

Entrepreneurs all over the world are also embracing solutions AI can offer by improving efficiency and making their products more user-friendly and accessible to consumers.

At Amazon, we’ve been making investments in artificial intelligence for over 20 years, and many of the capabilities customers experience are driven by machine learning. Within AWS, we’re focused on bringing that knowledge and capability to you through three layers of the AI stack: Frameworks and Infrastructure with tools like Apache MXNet and TensorFlow, API-driven Services to quickly add intelligence to applications, and Machine Learning Platforms for data scientists.

AWS experts joined the U.S. Department of State’s Global Innovation through Science and Technology (GIST) webinar on artificial intelligence to share how AI can be leveraged by startups to solve a range of problems in the public and private sectors.

Watch the on-demand webinar on AI for startups.

The U.S. Department of State launched the GIST Initiative in 2011 with the goal of empowering young innovators through in-country training, interactive online programming, direct connections to U.S. experts, and a global pitch competition to promote solutions that address economic and development challenges. GIST participants come from 135 countries. Learn more.

The Boss: A Petascale Database for Large-Scale Neuroscience Powered by Serverless Technologies

The Intelligence Advanced Research Projects Activity (IARPA) Machine Intelligence from Cortical Networks (MICrONS) program seeks to revolutionize machine learning by better understanding the representations, transformations, and learning rules employed by the brain.

We spoke with Dean Kleissas, Research Engineer working on the IARPA MICrONS Project at the Johns Hopkins University Applied Physics Laboratory (JHU/APL), and he shared more about the project, what makes it unique, and how the team leverages serverless technology.

Could you tell us about the IARPA MICrONS Project?

This project partners computer scientists, neuroscientists, biologists, and other researchers from over 30 different institutions to tackle problems in neuroscience and computer science towards improving artificial intelligence. These researchers are developing machine learning frameworks informed and constrained by large-scale neuroimaging and experimentation at a spatial size and resolution never before achieved.

Why is this program different from other attempts to build machine learning based on biological principles?

While current approaches to neural network-based machine learning algorithms are “neurally inspired,” they are not “biofidelic” or “neurally plausible,” meaning they could not be directly implemented using a biological system. Previous attempts to incorporate the brain’s inner workings into machine learning have used statistical summaries of properties of the brain, or measurements at low resolution (brain regions) or high resolution (individual neurons or populations of hundreds to thousands of neurons).

With MICrONS, researchers are attempting to inform machine learning frameworks by interrogating the brain at the “mesoscale,” the scale at which the hypothesized unit of computation, the cortical column, should exist. Teams will measure the functional (how a neuron fires) and structural (how neurons connect) properties of every neuron in a cubic millimeter of mammalian tissue. While a cubic millimeter may sound small, these datasets will be some of the largest ever collected and will contain about 50k-100k neurons and over 100 million synapses. On disk, this results in roughly 2-3 petabytes of image data to store and analyze per tissue sample.

To manage the challenges created by both the collaborative nature of this program and massive amounts of multi-dimensional imaging, the JHU/APL team developed and deployed a novel spatial database called the Boss.

What is the Boss and some of its key features?

The Boss is a multi-dimensional spatial database provided as a managed service on AWS. It stores image data of different modalities with associated annotation data, or the output of an analysis that has labeled source image data with unique 64-bit identifiers. The Boss leverages a storage hierarchy to balance cost with performance. Data is migrated using AWS Lambda from Amazon Simple Storage Service (Amazon S3) to a fast in-memory cache as needed. Image and annotation data is spatially indexed for efficient, arbitrary access to sub-regions of petascale datasets. The Boss provides Single Sign-On authentication for third-party integrations, a fine-grained access control system, built-in 2D and 3D web-based visualization, a rich REST API, and the ability to auto-scale with varying load.
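To illustrate the spatially indexed access the Boss exposes, the helper below builds a cutout URL for an arbitrary 3D sub-region. The route layout is modeled on the Boss’s public cutout API, but treat the exact path and host as assumptions for illustration.

```python
def cutout_url(base, collection, experiment, channel, resolution, x, y, z):
    """URL for a 3D cutout; x, y, z are (start, stop) voxel ranges."""
    span = lambda r: f"{r[0]}:{r[1]}"
    return (f"{base}/v1/cutout/{collection}/{experiment}/{channel}/"
            f"{resolution}/{span(x)}/{span(y)}/{span(z)}/")

# e.g. a 512x512x16 region at native resolution (resolution level 0):
#   cutout_url("https://api.example.org", "mouse1", "em", "raw",
#              0, (0, 512), (0, 512), (0, 16))
```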

The Boss is able to auto-scale by leveraging serverless components to provide on-demand capacity. Since users can choose to perform different high bandwidth operations, like data ingest or image downsampling, we needed the Boss to scale to meet each team’s needs and also remain affordable and operate within a fixed budget.

How did your team leverage serverless services when building the data ingest system for the Boss?

During ingest, we move large amounts of data (ranging from terabytes to petabytes) from on-premises temporary storage into the Boss. These data are image stacks in various formats stored locally in different ways. The job of the ingest service is to upload these image files while converting them into the Boss’ internal 3D data representation that allows for more efficient IO and storage.

Since these workflows can be spiky, driven both by researchers’ progress and program timelines, we use serverless services. We do not have to maintain running servers when ingest workflows are not executing, and we can massively scale processing for short periods of time, on demand.

We use Amazon S3 for both the temporary storage of image tiles as they are uploaded and the final storage of compressed, reformatted data. Amazon DynamoDB tracks upload progress and maintains indexes of reformatted data stored in the Boss. Amazon Simple Queue Service (SQS) provides scalable task queues so that our distributed upload client application can reliably transfer data into the Boss. AWS Step Functions manages high-level workflows during ingest, such as populating task queues and downsampling data after upload. After working with Step Functions and finding the native JSON schema challenging to maintain, we created an open source Python package called Heaviside to manage Step Function development and use. AWS Lambda provides scalable, on-demand compute to monitor and update ingest indexes, process and index image data as it is loaded into the Boss, and downsample data for visualization after ingest is complete. By leveraging these services, we have been able to achieve sustained ingest rates of over 4 Gbps from a single user while managing our overall monthly costs.
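The queue-driven ingest pattern described above can be sketched as follows; the message field names are hypothetical, not the Boss’s actual wire format.

```python
import json

def tile_task(job_id, chunk_key, tile_path):
    """Body of one SQS task message handed to the distributed upload client."""
    return json.dumps({
        "job_id": job_id,
        "chunk_key": chunk_key,  # spatial index of the 3D chunk this tile feeds
        "tile_path": tile_path,  # location of the source image tile
    })

def enqueue_tasks(sqs_client, queue_url, bodies):
    """Batch-send task messages, 10 per call (the SQS API maximum)."""
    for i in range(0, len(bodies), 10):
        entries = [{"Id": str(j), "MessageBody": b}
                   for j, b in enumerate(bodies[i:i + 10])]
        sqs_client.send_message_batch(QueueUrl=queue_url, Entries=entries)

# Usage (boto3 credentials assumed):
#   enqueue_tasks(boto3.client("sqs"), queue_url,
#                 [tile_task("job1", "c0", "/data/tile0.png")])
```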


Thanks for sharing, Dean! Learn more about the system by watching Dean’s session at the AWS Public Sector Summit here.

Prince George’s County Summer Youth Enrichment Program: Creating Apps for Students

The Prince George’s County internship program culminated with the four teams presenting their apps built on Amazon Alexa, Amazon Lex, Echo Dot, and Echo Show. The applications addressed challenges faced by some public school students, such as reading impairments and language barriers.

Learn about the different challenges and solutions below:

Team I – B.A.S.E (Building Amazing Students Efficiently)

Challenge: Within grades K-5, some students struggle with inadequate comprehension of fundamental math skills.

Solution: Using the Amazon Echo Show, this team created an interactive game that helps students learn the fundamentals of math. By creating skills using the Amazon Echo Dot, students are able to access and complete assignments and become motivated to study. The team also created a teacher dashboard, which allows teachers to track the real-time progress of their students.

Team II – T Cubed (Teachers Teaching for Tomorrow)

Challenge: On average, a school counselor is responsible for 350-420 students. On the student side, some students do not know how to apply for colleges, scholarships, or financial aid.

Solution: T Cubed created two skills on the Amazon Echo Dot and Amazon Echo Show that help students explore careers and colleges, and guide them through all aspects of the college application process.

Team III – A Square Education through Voice Automation – WINNING TEAM

Challenge: English for Speakers of Other Languages (ESOL) students often have trouble learning the prerequisites necessary to pass the class.

Solution: This team created four innovative games to help ESOL students better absorb information and build the language proficiency needed to pass the class.

Team IV – Simplexa: English. Foreign Language. Made Simple

Challenge: With many different languages spoken by students, there is a need to alleviate the language barrier in the classroom.

Solution: Using Amazon Alexa, this team created a flash card-style game to test the basic English proficiency of ESOL students. This application promotes a more interactive experience.

AWS worked with the teams throughout the five-week internship, providing onsite technical support, training, AWS Developer accounts, and funding. Through the program, mentors and interns used AWS Educate online education accounts to learn Alexa programming and use of AWS cloud services.

“By the end of five weeks, the student interns had successfully created functioning apps for Amazon Alexa, Dot, and Show, taking full advantage of the AWS Cloud to quickly learn, develop, and deploy new applications. The program met its goal to prepare Prince George’s County students as the next generation of the IT workforce,” said Sandra Longs Hasty, Program Director, Prince George’s County.

Congratulations to all participating interns!

 

Building a Cloud-Specific Incident Response Plan

AWS provides unique security, visibility, and automation controls that help your organization prepare before a security event occurs. Incident response does not have to be only reactive. With the cloud, your ability to proactively detect, react, and recover can be easier, faster, cheaper, and more effective.

What is an incident?

An incident is an unplanned interruption to an IT service or a reduction in the quality of an IT service. Through tools such as AWS CloudTrail, Amazon CloudWatch, AWS Config, and AWS Config Rules, we track, monitor, analyze, and audit events. If these tools identify an event that is analyzed and qualified as an incident, that “qualifying event” raises an incident and triggers the incident management process and any response actions necessary to mitigate it.
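For example, CloudTrail events can be pulled for analysis with the lookup_events API; the sketch below filters console logins over a time window so they can be qualified as incidents or not.

```python
from datetime import datetime, timedelta

def console_login_lookup(start, end):
    """Parameters for cloudtrail.lookup_events, filtered to ConsoleLogin events."""
    return {
        "LookupAttributes": [
            {"AttributeKey": "EventName", "AttributeValue": "ConsoleLogin"},
        ],
        "StartTime": start,
        "EndTime": end,
    }

# Usage (boto3 credentials assumed):
#   ct = boto3.client("cloudtrail")
#   window = console_login_lookup(datetime.utcnow() - timedelta(hours=24),
#                                 datetime.utcnow())
#   for event in ct.lookup_events(**window)["Events"]:
#       ...  # analyze and qualify: does this event constitute an incident?
```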

Set up your AWS environment to prevent a security event

We will walk you through a hypothetical incident response (IR) managed on AWS with the Johns Hopkins University Applied Physics Laboratory (APL).

APL’s scientists, engineers, and analysts serve as trusted advisors and technical experts to the government, ensuring the reliability of complex technologies that safeguard our nation’s security and advance the frontiers of space. APL’s mission requires reliable and elastic infrastructure with agility, while maintaining security, governance, and compliance. APL’s IT cloud team works closely with APL mission areas to provide cloud computing services and infrastructure, and they create the structure for security and incident response monitoring.

Whether it is IR-4 “Incident Handling” or IR-9 “Information Spillage Response,” the incident response approach below from APL applies to all types of IR.

  1. Preparation: The preparation step is critical. Train IR handlers to respond to cloud-specific events. Ensure logging is enabled across Amazon Elastic Compute Cloud (Amazon EC2), AWS CloudTrail, and VPC Flow Logs; collect and aggregate the logs centrally for correlation and analysis; and use AWS Key Management Service (KMS) to encrypt sensitive data at rest. Consider multiple AWS sub-accounts for isolation with AWS Organizations. With Organizations, you can create separate accounts along business lines or mission areas, which also limits the “blast radius” should a breach occur. For governance, you can apply policies to each of those sub-accounts from the AWS master account.
  2. Identification: Also known as detection, this step uses behavior-based rules to identify and detect breaches or spills; alternatively, you may be notified about which user accounts and systems need “cleaning up.” Open a case with AWS Support for cross-validation.
  3. Containment: Use the AWS Command Line Interface (CLI) or software development kits (SDKs) for quick containment using pre-defined restrictive security groups. Save the current security groups of the host or instance, then isolate the host using restrictive ingress and egress security group rules.
  4. Investigation: Once isolated, determine and analyze the correlation, threat, and timeline.
  5. Eradication: Securely wipe affected files; response times may be faster with automation. After the secure wipe, delete any KMS data keys, if used.
  6. Recovery: Restore network access to original state.
  7. Follow-up: Verify deletion of data keys (if KMS was used), cross-validate with AWS Support, and report findings and response actions.
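The Containment and Recovery steps above can be sketched with boto3; the quarantine security group is an assumption, an existing group with no ingress or egress rules.

```python
def isolate_instance(ec2_client, instance_id, quarantine_sg_id):
    """Swap an instance onto a restrictive quarantine security group and
    return its original group IDs so the Recovery step can restore them."""
    desc = ec2_client.describe_instances(InstanceIds=[instance_id])
    instance = desc["Reservations"][0]["Instances"][0]
    original = [g["GroupId"] for g in instance["SecurityGroups"]]  # save first
    ec2_client.modify_instance_attribute(
        InstanceId=instance_id, Groups=[quarantine_sg_id])
    return original

def restore_instance(ec2_client, instance_id, original_group_ids):
    """Recovery: put the saved security groups back."""
    ec2_client.modify_instance_attribute(
        InstanceId=instance_id, Groups=original_group_ids)

# Usage (boto3 credentials assumed):
#   saved = isolate_instance(boto3.client("ec2"), "i-0abc123", "sg-quarantine")
#   ...investigate, eradicate...
#   restore_instance(boto3.client("ec2"), "i-0abc123", saved)
```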

Watch the Incident Response in the Cloud session from the AWS Public Sector Summit in Washington, DC here for a more detailed discussion with Conrad Fernandes, Cloud Cyber Security Lead, Johns Hopkins University Applied Physics Lab (JHU APL).

AWS Marketplace Now Available in the AWS GovCloud (US) Region

AWS Marketplace now enables customers to discover and subscribe to software that supports regulated workloads through the AWS Marketplace for the AWS GovCloud (US) Region. AWS GovCloud (US) is an isolated AWS Region designed to host sensitive data and regulated workloads in the cloud, assisting customers who have U.S. federal, state, and local government compliance requirements.

With the release of AWS Marketplace for AWS GovCloud (US), Independent Software Vendors (ISVs) will be able to create custom releases for government customers that need regulatory compliance. ISVs will be able to maintain these offerings as part of their non-government product creation and update process. They can also leverage the AWS Marketplace Self-Service Listings portal to upload their latest software into the Region, allowing ISVs to maintain their compliance status and relieve that burden from their customers.

At launch, customers can deploy over 589 products into the AWS GovCloud (US) Region. AWS Marketplace simplifies procurement and pricing of software with a variety of pricing models, including pay-as-you-go, monthly, and annual subscription contract terms.

Many products offer free trials, allowing AWS GovCloud (US) users to “try before they buy.” In addition, AWS Marketplace for AWS GovCloud (US) supports Bring-Your-Own-License (BYOL) to help you easily migrate and consolidate existing software licenses and applications.

For government agencies or system integrators who deploy solutions into the AWS GovCloud (US) Region on behalf of a government entity, users can discover and procure software in various ways:

  • Users can select the AWS GovCloud (US) Region to view a listing of all products available in the Region and manually launch the selected AMI into the AWS GovCloud (US) Region once their AWS GovCloud (US) account has been authenticated.
  • Customers can also deploy directly from the AWS GovCloud (US) EC2 console and select the “AWS Marketplace” tab, just as they would in the commercial AWS Marketplace. Upon product selection and instance configuration, the customer launches the selected AMI into the AWS GovCloud (US) Region.
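Programmatic deployment follows the same pattern as the console flow: create an EC2 client against the GovCloud region and launch the product’s AMI. The AMI ID and instance type below are placeholders.

```python
def govcloud_launch_request(ami_id, instance_type):
    """Parameters for ec2.run_instances; the Region is set on the client,
    not in these parameters."""
    return {
        "ImageId": ami_id,  # the product's AMI ID as listed for GovCloud (US)
        "InstanceType": instance_type,
        "MinCount": 1,
        "MaxCount": 1,
    }

# Usage (AWS GovCloud (US) boto3 credentials assumed):
#   ec2 = boto3.client("ec2", region_name="us-gov-west-1")
#   ec2.run_instances(**govcloud_launch_request("ami-12345678", "m4.large"))
```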

Learn more about AWS Marketplace for AWS GovCloud (US) here.