AWS Public Sector Blog

Tag: AWS CodeCommit

Citi Logik helps governments drive action on transportation insights with AWS

Citi Logik is a UK-based government technology (GovTech) company and an Amazon Web Services (AWS) Partner. Citi Logik uses AWS to enhance anonymised raw mobile network data (MND) so organisations can identify trends in the flow of people across a variety of transportation modes. Citi Logik provides its customers, including the West Yorkshire Combined Authority and Wiltshire County Council, with valuable insights that help them make informed decisions about future transportation and urban development planning.

One small team created a cloud-based predictive modeling solution to improve healthcare services in the UK

How do you predict and prepare for your citizens’ health and wellness needs during the COVID-19 pandemic? Healthier Lancashire and South Cumbria Integrated Care System (ICS) quickly scaled a platform on AWS to support the 1.8 million people in their region with Nexus Intelligence, an interactive health intelligence application with a suite of predictive models against various measures of need and health outcomes. Nexus Intelligence not only supported the ICS response to the pandemic, but is expected to help reconfigure and re-invest in services to improve the health and well-being of the population and reduce health inequalities.

How to manage Amazon SageMaker code with AWS CodeCommit

To help protect investments in ML, government organizations can securely store ML source code. Storing Amazon SageMaker Studio code in an AWS CodeCommit repository enables you to keep it as standalone documents to reuse in the future. SageMaker Studio provides a single, web-based visual interface where you can perform all the ML development steps required to prepare data and build, train, and deploy models. Read on to learn the steps to configure a Git-based repository on CodeCommit to manage ML code developed with SageMaker.
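The basic workflow can be sketched from the command line. This is a minimal setup sketch, not the full walkthrough from the post; the repository name `sagemaker-ml-code`, the `us-east-1` region, and the file names are illustrative assumptions.

```shell
# Create a CodeCommit repository to hold SageMaker Studio code
# (assumes the AWS CLI is configured with CodeCommit permissions).
aws codecommit create-repository \
    --repository-name sagemaker-ml-code \
    --repository-description "ML code developed in SageMaker Studio"

# In a SageMaker Studio terminal, use the CodeCommit credential helper
# so Git can authenticate over HTTPS with your AWS credentials:
git config --global credential.helper '!aws codecommit credential-helper $@'
git config --global credential.UseHttpPath true

# Clone the repository (region in the URL is illustrative):
git clone https://git-codecommit.us-east-1.amazonaws.com/v1/repos/sagemaker-ml-code

# Commit and push notebooks or scripts as with any Git repository:
cd sagemaker-ml-code
git add train_model.ipynb
git commit -m "Add training notebook"
git push origin main
```

SageMaker Studio also exposes a Git panel in its interface, so after the clone step the same commit-and-push flow can be done without the terminal.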


Modern data engineering in higher ed: Doing DataOps atop a data lake on AWS

Modern data engineering covers several key components of building a modern data lake. Most databases and data warehouses do not lend themselves well to a DevOps model. DataOps grew out of frustrations trying to build a scalable, reusable data pipeline in an automated fashion. DataOps was founded on applying DevOps principles on top of data lakes to help build automated solutions in a more agile manner. With DataOps, users apply principles of data processing on the data lake to curate and collect the transformed data for downstream processing. One reason that DevOps was hard on databases was that testing was hard to automate on such systems. At the California State University Chancellor's Office (CSUCO), we took a different approach by keeping most of our logic in a programming framework that allows us to build a testable platform. Learn how to apply DataOps in ten steps.
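The core idea above, that logic kept in code is testable where logic buried in a database is not, can be sketched in a few lines. This is an illustrative example, not CSUCO's actual code; the function name and the curation rule are assumptions.

```python
# DataOps sketch: a curation step written as a plain, testable function
# instead of a stored procedure, so it can run in an automated test suite.

def curate_enrollment(records):
    """Drop incomplete rows and normalize term codes for downstream use."""
    curated = []
    for rec in records:
        # Rows missing key fields are excluded from the curated zone.
        if not rec.get("student_id") or not rec.get("term"):
            continue
        curated.append({
            "student_id": rec["student_id"],
            "term": rec["term"].strip().upper(),
        })
    return curated

# Because the logic lives in code, a CI pipeline can verify it on every change:
raw = [
    {"student_id": "S1", "term": " fa2023 "},
    {"student_id": "", "term": "SP2024"},  # incomplete row, should be dropped
]
assert curate_enrollment(raw) == [{"student_id": "S1", "term": "FA2023"}]
```

The same function can then be wired into whatever orchestration the data lake uses, with the automated test gating each deployment, which is the DevOps-style loop the paragraph describes.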


T-Digital shares lessons learned about flexibility, agility, and cost savings using AWS

T-Digital, a division of Tshwane University of Technology Enterprise Holdings (TUTEH) in South Africa, built TRes, a digital platform for students living in student housing and for accommodation providers. TRes connects students with available housing from verified and approved property owners. It addresses student accommodation needs and helps those owners fully allocate their residences, while alleviating administrative burden. With help from AWS Professional Services, T-Digital gained flexibility and agility and realized cost savings.