Forrester estimates that data-driven businesses grow at an average rate of more than 30 percent annually, and education technology companies are no exception. New data sources are emerging, including real-time streaming data from virtual classrooms, mobile engagement, unique usage, and new learners, and they are shaping the next generation of EdTech products that engage learners meaningfully around the world. Learn how four AWS EdStart Members are using data to power their solutions.
Galaxy is a scientific workflow, data integration, and digital preservation platform that aims to make computational biology accessible to research scientists who do not have computer programming or systems administration experience. Although it was initially developed for genomics research, it is largely domain agnostic and is now used as a general bioinformatics workflow management system, running on everything from academic mainframes to personal computers. Still, researchers and organizations with limited or restrictive budgets may worry about capacity and access to compute power. In this blog post, we explain how to implement Galaxy on the cloud at a predictable cost within your research or grant budget using Amazon Lightsail.
Electronic notifications of patient events are a vital mechanism for care providers to improve care coordination and promote timely, appropriate follow-up care. This post shows how a combination of Amazon Web Services (AWS) technologies, such as AWS Lambda, Amazon Comprehend Medical, and AWS Fargate, can manage and deliver actionable data, helping healthcare customers send electronic notifications securely and efficiently.
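As a rough illustration of the event-driven shape such a pipeline can take, here is a minimal Lambda-style handler sketch in Python. The event fields (`patient_id`, `note_text`) and the `extract_conditions` stub are hypothetical stand-ins invented for this sketch; in the architecture the post describes, entity extraction would call Amazon Comprehend Medical and delivery would run on AWS Fargate.

```python
# Minimal sketch of a Lambda-style handler for patient event notifications.
# The event shape and extract_conditions stub are illustrative only; a real
# pipeline would call Amazon Comprehend Medical for entity extraction and
# hand the result to a delivery service (e.g., a container on AWS Fargate).

def extract_conditions(note_text):
    """Stand-in for medical entity extraction (Comprehend Medical in the post)."""
    known = {"hypertension", "diabetes", "asthma"}
    words = {w.strip(".,").lower() for w in note_text.split()}
    return sorted(known & words)

def handler(event, context=None):
    """Turn a patient event into an actionable notification payload."""
    conditions = extract_conditions(event["note_text"])
    return {
        "patient_id": event["patient_id"],
        "conditions": conditions,
        "follow_up_needed": bool(conditions),
    }

if __name__ == "__main__":
    sample = {"patient_id": "p-001",
              "note_text": "Discharged today. History of hypertension and asthma."}
    print(handler(sample))
```

The handler stays stateless and returns a plain payload, which is the shape that lets a serverless runtime scale it per event.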
Terabytes of Earth Observation (EO) data are collected each day, quickly leading to petabyte-scale datasets. By bringing these datasets to the cloud, users can rely on the cloud's compute and analytics resources to scale with growing needs. In this post, we show you how to set up a Pangeo solution with Kubernetes, Dask, and Jupyter notebooks step by step on Amazon Web Services (AWS) to automatically scale cloud compute resources and parallelize workloads across multiple Dask worker nodes.
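The scatter/compute/gather pattern that Dask workers generalize across a cluster can be sketched on a single machine with Python's standard library. This is a stand-in, not the Pangeo stack itself: `concurrent.futures.ProcessPoolExecutor` plays the role of the Dask distributed scheduler, and the chunked-mean computation is a hypothetical per-chunk statistic.

```python
# Illustrative stand-in for the Dask worker pattern described above:
# split a large dataset into chunks, process the chunks in parallel,
# then reduce the partial results. Dask's distributed scheduler
# generalizes this across many worker nodes; here the standard
# library's process pool plays that role on one machine.
from concurrent.futures import ProcessPoolExecutor

def chunk_mean(chunk):
    """Compute the mean of one chunk (a hypothetical per-chunk statistic)."""
    return sum(chunk) / len(chunk)

def parallel_mean(values, n_chunks=4):
    """Scatter equal-sized chunks to workers, gather partial means, combine."""
    size = len(values) // n_chunks
    chunks = [values[i * size:(i + 1) * size] for i in range(n_chunks)]
    with ProcessPoolExecutor(max_workers=n_chunks) as pool:
        partial = list(pool.map(chunk_mean, chunks))
    # Chunks are equal length, so the overall mean is the mean of the means.
    return sum(partial) / len(partial)

if __name__ == "__main__":
    data = list(range(1_000_000))
    print(parallel_mean(data))  # prints 499999.5
```

With Dask, the same shape appears as `client.map` over chunks followed by a reduction, except the chunks can live on remote worker nodes and never pass through the driver.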
In the spring of 2019, environmental modelers at the Lake Michigan Air Directors Consortium (LADCO) had a new problem to solve. Emerging research on air pollution along the shores of the Great Lakes in the United States showed that properly simulating pollution episodes in the region required running their models at a finer spatial granularity than their in-house HPC cluster could handle. The LADCO modelers turned to AWS ParallelCluster to access the HPC resources needed to do this modeling faster and to scale for their member states.
Mahmoud Khaled, a PhD student at TUM and a research assistant at LMU, researches how to improve safety-critical systems that require large amounts of compute power. Using AWS, Khaled’s research project, pFaces, accelerates parallelized algorithms and controls computational complexity to speed the time to science. His project findings introduce a new way to design and deploy verified control software for safety-critical systems, such as autonomous vehicles.
We recently recapped the security and compliance updates announced at this year’s re:Invent that are important to our public sector customers. AWS also expanded on its core foundational services, compute and storage, by announcing new game-changing services and features. Check out the compute updates below and our follow-up post covering the storage […]