AWS Public Sector Blog

Tag: compute

St. Louis University uses AWS to make big data accessible for researchers

The research team at SLU’s Sinquefield Center for Applied Economic Research (SCAER) required vast quantities of anonymized cell phone data in order to study the impacts of large-scale social problems. SCAER needed to store, clean, and process 450 terabytes of data, so it worked with Amazon Web Services (AWS) to create a fast, cost-effective solution for managing its growing quantities of data.

Building hybrid satellite imagery processing pipelines in AWS

In this blog post, learn how companies operating in AWS can design architectures that maximize flexibility, supporting both cloud and on-premises deployments of their satellite imagery processing workloads with minimal modifications.
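
As a sketch of that flexibility idea, the snippet below reads and writes imagery through fsspec-style URIs so the same pipeline step can run against Amazon S3 in the cloud or a local filesystem on premises. The URIs, bucket name, and process_scene function are illustrative assumptions, not details from the post.

```python
# Minimal sketch of a storage-agnostic imagery pipeline step.
# Swapping the URI scheme (s3:// vs. file://) is the only change
# needed to move between cloud and on-premises runs.
import fsspec  # resolves s3://, file://, and other backends uniformly

INPUT_URI = "s3://my-imagery-bucket/raw/scene-001.tif"    # cloud run
# INPUT_URI = "file:///data/raw/scene-001.tif"            # on-prem run
OUTPUT_URI = "s3://my-imagery-bucket/processed/scene-001.tif"

def process_scene(raw_bytes: bytes) -> bytes:
    """Placeholder for the real processing (calibration, tiling, etc.)."""
    return raw_bytes

with fsspec.open(INPUT_URI, "rb") as src:
    result = process_scene(src.read())

with fsspec.open(OUTPUT_URI, "wb") as dst:
    dst.write(result)
```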

Data is helping EdTechs shape the next generation of solutions

Forrester estimates that data-driven businesses are growing at an average of more than 30 percent annually, and education technology companies are no exception. New data sources are emerging, including real-time streaming data from virtual classrooms, mobile engagement, unique usage, and new learners, and these sources are shaping the next generation of EdTech products that engage learners meaningfully around the world. Learn how four AWS EdStart Members are utilizing data to power their solutions.

How to set up Galaxy for research on AWS using Amazon Lightsail

Galaxy is a scientific workflow, data integration, and digital preservation platform that aims to make computational biology accessible to research scientists who do not have computer programming or systems administration experience. Although it was initially developed for genomics research, it is largely domain agnostic and is now used as a general bioinformatics workflow management system, running on everything from academic mainframes to personal computers. Still, researchers and organizations with limited or restrictive budgets may worry about capacity and access to compute power. In this blog post, we explain how to implement Galaxy on the cloud at a predictable cost within your research or grant budget using Amazon Lightsail.
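
As a rough illustration of the Lightsail approach, the sketch below launches a fixed-price instance that Galaxy could be installed on. The instance name, Region, blueprint, and bundle values are assumptions for illustration, not the post's exact configuration.

```python
# Minimal sketch: launch an Amazon Lightsail instance to host Galaxy.
# Lightsail bundles are fixed-price, which is what makes the monthly
# cost predictable for a research or grant budget.
import boto3

lightsail = boto3.client("lightsail", region_name="us-east-1")

response = lightsail.create_instances(
    instanceNames=["galaxy-server"],   # illustrative name
    availabilityZone="us-east-1a",
    blueprintId="ubuntu_22_04",        # base OS; Galaxy is installed on top
    bundleId="large_3_0",              # fixed-price bundle (CPU/RAM/storage)
)
print(response["operations"][0]["status"])
```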

How to deploy HL7-based provider notifications on AWS Cloud

Electronic notifications of patient events are a vital mechanism for care providers to improve care coordination and promote appropriate follow-up care in a timely manner. This post shows how a combination of Amazon Web Services (AWS) technologies, such as AWS Lambda, Amazon Comprehend Medical, and AWS Fargate, can manage and deliver actionable data, helping healthcare customers send electronic notifications securely and efficiently.
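
To make the idea concrete, here is a minimal sketch of a Lambda handler that runs Amazon Comprehend Medical over the free-text portion of an HL7 message. The event shape (an "hl7_text" key) and the downstream handling are assumptions for illustration, not the architecture from the post.

```python
# Minimal sketch of a Lambda handler extracting medical entities
# from clinical free text pulled out of an HL7 message upstream.
import boto3

comprehend_medical = boto3.client("comprehendmedical")

def handler(event, context):
    # Assumption: an upstream step has parsed the HL7 feed and placed
    # the narrative text in event["hl7_text"].
    note_text = event["hl7_text"]

    # Detect conditions, medications, and other entities in the text.
    result = comprehend_medical.detect_entities_v2(Text=note_text)

    entities = [
        {"text": e["Text"], "category": e["Category"], "score": e["Score"]}
        for e in result["Entities"]
    ]
    # A real pipeline would route these entities to a notification service.
    return {"entities": entities}
```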

Analyze terabyte-scale geospatial datasets with Dask and Jupyter on AWS

Terabytes of Earth observation (EO) data are collected each day, quickly leading to petabyte-scale datasets. By bringing these datasets to the cloud, users can tap cloud compute and analytics resources to scale reliably with growing needs. In this post, we show you how to set up a Pangeo solution with Kubernetes, Dask, and Jupyter notebooks step by step on Amazon Web Services (AWS) to automatically scale cloud compute resources and parallelize workloads across multiple Dask worker nodes.
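
For a flavor of the pattern, the sketch below pairs an adaptive Dask cluster on Kubernetes with a parallel array computation driven from a notebook. It assumes the classic dask-kubernetes API and a worker pod spec in worker-spec.yml; both are illustrative rather than the post's exact steps.

```python
# Minimal sketch of the Pangeo-style setup: an adaptive Dask cluster
# on Kubernetes, with work parallelized across the worker nodes.
from dask_kubernetes import KubeCluster  # classic dask-kubernetes API
from dask.distributed import Client
import dask.array as da

cluster = KubeCluster.from_yaml("worker-spec.yml")  # worker pod template
cluster.adapt(minimum=1, maximum=20)  # scale worker count with the workload
client = Client(cluster)

# Stand-in for a large EO raster: a chunked array processed in parallel.
data = da.random.random((100_000, 100_000), chunks=(5_000, 5_000))
print(data.mean().compute())  # chunks are distributed across Dask workers
```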

Modeling clouds in the cloud for air pollution planning: 3 tips from LADCO on using HPC

In the spring of 2019, environmental modelers at the Lake Michigan Air Directors Consortium (LADCO) had a new problem to solve. Emerging research on air pollution along the shores of the Great Lakes in the United States showed that properly simulating pollution episodes in the region required applying models at a finer spatial granularity than the computational capacity of LADCO's in-house HPC cluster could handle. The LADCO modelers turned to AWS ParallelCluster to access the HPC resources needed to do this modeling faster and to scale for their member states.

pFaces targets heterogeneous hardware configurations (HWCs) combining compute nodes (CNs) of CPUs, GPUs, and hardware accelerators (HWAs). A web-based interface helps developers design parallel algorithms and run them on targeted HWCs.

TUM researcher finds new approach to safety-critical systems using parallelized algorithms on AWS

Mahmoud Khaled, a PhD student at TUM and a research assistant at LMU, researches how to improve safety-critical systems that require large amounts of compute power. Using AWS, Khaled’s research project, pFaces, accelerates parallelized algorithms and controls computational complexity to speed the time to science. His project findings introduce a new way to design and deploy verified control software for safety-critical systems, such as autonomous vehicles.

What’s New for AWS Compute Services from re:Invent 2016

We recently recapped the security and compliance updates announced at this year’s re:Invent that are important to our public sector customers. AWS also expanded upon its core foundational services – compute and storage – by announcing new game-changing services and special features. Check out the compute updates below and our follow-up post covering the storage […]