AWS Public Sector Blog
UC Davis Health Cloud Innovation Center, powered by AWS, uses generative AI to fight health misinformation
Former National Institute of Allergy and Infectious Diseases (NIAID) Director Anthony Fauci called health misinformation “the enemy of pandemic control.” Around the US, public health officials are grappling with how to keep their communities safe from the potentially deadly effects of medical misinformation, disinformation, and malinformation.
Misinformation is false or inaccurate information spread without malicious intent. Disinformation is misinformation that is knowingly or intentionally spread by bad actors. Malinformation is information based on fact but purposely taken out of context to mislead others.
Exacerbated by social media and the COVID-19 pandemic, these types of false information threaten the well-being of patients and spark burnout among public health officials. Members of overburdened and understaffed health departments lack the tools to thwart dangerous misinformation threats before they become widespread. This proliferation of misinformation can also prey on the skepticism, fears, and disadvantaged positions of certain marginalized groups, contributing to health inequity.
In addition to harming public well-being, health misinformation has damaging effects on the US economy. When factoring in the costs of hospitalization, the valuation of lives lost, and the effects of long COVID, experts estimated in 2021 that COVID-19 vaccine misinformation was costing the US between $50 million and $300 million daily.
The University of Pittsburgh, the University of Illinois Urbana-Champaign (UIUC), the University of California Davis Health Cloud Innovation Center (UCDH CIC)—powered by Amazon Web Services (AWS)—and the AWS Digital Innovation (DI) team have seized on Dr. Fauci’s call-to-action to “flood the system with correct information.” The result is a prototype that uses machine learning (ML) and generative artificial intelligence (AI) to transform the public health communications landscape by giving officials the tools they need to keep their communities informed and safe.
Working backwards to transform the public health ecosystem
The UCDH CIC launched in November 2021 with the goal of solving real-world problems rooted in health equity and digital health innovation. The CIC began collaborating with the University of Pittsburgh and the AWS DI team in early 2023 after identifying an opportunity to use technology to reduce the burden on public health officials, improve patient outcomes, and address systemic inequities. The DI and CIC teams then came together with subject matter experts from the University of Pittsburgh, UIUC, and other global healthcare leaders to facilitate a two-day Working Backwards Workshop in Washington, DC.
A key artifact from the Working Backwards process is a visual customer journey, or storyboard, that details the problem and envisioned solution from the perspective of the designated customer. The following storyboard was produced to highlight the journey of a public health official.
The ensuing conversations laid the groundwork for Project Heal, an open source, AI/ML-based toolkit concept that will empower public health officials worldwide to protect their communities from health misinformation. The solution will use ML, generative AI, and predictive analytics to aggregate information on health misinformation trends and provide the ability for public health officials to generate communications addressing misinformation for their respective communities.
Project Heal will allow public health officials to manage workloads more efficiently and shift from a reactive posture to a proactive one. The resulting community education on health misinformation trends will also improve patient outcomes and empower individuals to make more informed decisions about their health.
Breaking down Project Heal
Manual fact-checkers struggle to keep up with the decentralized, rapid pace at which health misinformation is generated online. One of the key functionalities of Project Heal will be the ability to classify and detect emerging misinformation before it has an opportunity to proliferate. Ingested information will be evaluated using trained ML models, enabling the tool to classify the likelihood that a statement contains misleading content and allowing for categorization based on the statement's entities and context. Misleading statements will then be evaluated by a subsystem that helps score the severity of the threat to human health.
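To make the two-stage flow concrete, the sketch below shows how a statement might first be scored for misleading content and then weighted by reach to estimate severity. The classifier stub and the severity formula are illustrative assumptions only; Project Heal's actual models are trained ML systems, not keyword rules.

```python
import math
from dataclasses import dataclass


@dataclass
class Assessment:
    statement: str
    misleading_prob: float  # likelihood the statement is misleading
    severity: float         # estimated threat to human health


def classify(statement: str) -> float:
    """Stand-in for the trained classifier: return the probability
    that a statement contains misleading content.
    A real system would invoke a trained model; this stub flags a
    few sample false-claim keywords purely for illustration."""
    red_flags = ("cure", "microchip", "detox")
    hits = sum(1 for word in red_flags if word in statement.lower())
    return min(1.0, 0.3 * hits)


def score_severity(prob: float, reach: int) -> float:
    """Hypothetical severity subsystem: weight the classifier's
    probability by how widely the statement has already spread."""
    return prob * math.log10(max(reach, 10))


def assess(statement: str, reach: int) -> Assessment:
    prob = classify(statement)
    return Assessment(statement, prob, score_severity(prob, reach))
```

In this sketch, a widely shared statement containing a flagged claim receives a higher severity score than the same statement with little reach, which is the kind of signal that would let officials triage which threats to address first.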
The detection engine will be built using graph neural networks, supported by a large, scalable, fully managed graph database (Amazon Neptune). Amazon Comprehend will be used to support keyword and entity extraction from content. A human feedback loop will continually audit and improve the model using Amazon Augmented AI (Amazon A2I). Both detection and scoring will be supported by Amazon SageMaker, a fully managed service to prepare data and build, train, and deploy ML models for any use case. Additionally, generative AI will support the summarization and grouping of related misinformation content.
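As a minimal illustration of entity-based grouping, the records below mimic the shape of Amazon Comprehend's `detect_entities` output (an `Entities` list with `Text` and `Type` fields), while the sample statements and the grouping logic itself are assumptions for demonstration, not the production pipeline:

```python
from collections import defaultdict

# Sample records shaped like Amazon Comprehend detect_entities results:
# each ingested statement paired with its extracted entities.
records = [
    {"text": "Claim A about vaccine X",
     "Entities": [{"Text": "vaccine X", "Type": "OTHER"}]},
    {"text": "Claim B about vaccine X",
     "Entities": [{"Text": "vaccine X", "Type": "OTHER"}]},
    {"text": "Claim C about drug Y",
     "Entities": [{"Text": "drug Y", "Type": "OTHER"}]},
]


def group_by_entity(records):
    """Bucket statements under each extracted entity so related
    misinformation can be reviewed and summarized together."""
    groups = defaultdict(list)
    for record in records:
        for entity in record["Entities"]:
            groups[entity["Text"].lower()].append(record["text"])
    return dict(groups)
```

Grouping related claims this way is what would let a downstream generative model summarize an entire cluster of misinformation about, say, one vaccine, rather than handling each statement in isolation.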
The following conceptual architectural diagram highlights a potential deployment approach for the detection engine.
One of the most innovative features of Project Heal will be the platform's ability to generate tailored communication responses to misleading information. A large share of health inequity derives from public health officials' inability to account for the unique cultural, historical, and linguistic nuances that shape how various demographics respond to both misinformed rumors and corrective counter-messaging. Project Heal will give public health professionals the ability to generate, edit, and adapt counter-messaging of false claims for each population in their community. To achieve this, Project Heal will use generative AI foundation models (FMs), such as Amazon Titan, through Amazon Bedrock. Supporting evidence, built from trusted sources of information, will be available to users via Retrieval-Augmented Generation (RAG), an approach that grounds a large language model's (LLM's) responses in retrieved source material, reducing some of the shortcomings of purely generative queries. Through this technique, Amazon Bedrock can generate more personalized messaging by combining trusted information with user preferences. The following screenshot showcases what this functionality could look like.
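A simplified sketch of the RAG step follows. The retrieval here is naive keyword overlap and the prompt template is an invented placeholder; a production system would use Amazon Bedrock with an embedding-based retriever over the trusted corpus. The sketch shows the essential idea: retrieve trusted evidence relevant to the claim, then hand the model both the evidence and the audience description.

```python
def retrieve(claim: str, trusted_docs: list[str], k: int = 2) -> list[str]:
    """Rank trusted reference passages by word overlap with the claim.
    (Stand-in for a real retriever, e.g. vector similarity search.)"""
    claim_words = set(claim.lower().split())
    ranked = sorted(
        trusted_docs,
        key=lambda doc: len(claim_words & set(doc.lower().split())),
        reverse=True,
    )
    return ranked[:k]


def build_prompt(claim: str, audience: str, evidence: list[str]) -> str:
    """Assemble the grounding prompt a foundation model would receive,
    combining retrieved evidence with the official's target audience."""
    context = "\n".join(f"- {passage}" for passage in evidence)
    return (
        f'Using ONLY the trusted evidence below, draft a short message '
        f'for {audience} correcting this claim: "{claim}"\n'
        f"Trusted evidence:\n{context}"
    )
```

Because the model is instructed to draw only on the retrieved passages, the generated counter-messaging stays anchored to vetted sources while still being adapted to the cultural and linguistic profile of each audience.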
Real-life users respond to Project Heal
After collaborating with subject matter experts to inform a high-fidelity UI/UX mockup of Project Heal, the UCDH CIC and DI teams engaged five public health experts in the US and Chile to solicit their feedback. User response was overwhelmingly positive and highlighted the urgent need for this product in the public health ecosystem. One test user stated that "this platform would be like having an additional entire team of employees working for us," underscoring the tool's potential to alleviate the burden on understaffed health departments. Another user who regularly interfaces with their community to deliver presentations on health trends and threats praised the application's generative communication abilities.
Due to Project Heal’s intentional delineation between verified and non-verified data sources, all five users stated that they would trust the tool’s ability to provide them with accurate information and communication. This is particularly promising, as a lack of public trust in AI technologies has proven to be a significant hurdle in driving adoption for certain AI/ML solutions.
Conclusion
It is clear that health misinformation remains a major threat to patient wellness in the US and beyond. It is more important than ever to have a tool that can rapidly adapt to and counter the constantly shifting tactics of those who spread harmful information. Learn more about generative AI on AWS or submit a challenge rooted in digital health equity at the UCDH CIC website.
Contributing AWS authors: Elle Lindley, Chris Robinson, and Ellen Butters.