Executive Conversations: Using the cloud to develop accessible, interoperable healthcare data for cancer and beyond
Whether advancing evidence-based policy, value-based healthcare, or community wellness, access to longitudinal health data is a force multiplier for progress. Dr. Jay Schnitzer, Chief Technology Officer (CTO) and Chief Medical Officer (CMO) at MITRE, recently sat down with Valerie Delva, Worldwide Head, Data/AI ML Strategy and Solutions at Amazon Web Services (AWS) to discuss the importance and implications of aligning standards for health data.
This Executive Conversation is part of a series of discussions held with leaders who are pushing the frontiers of the healthcare and life sciences industry with cloud technology.
Valerie Delva (VD): Can you share background on MITRE’s mission and your role?
Jay Schnitzer (JS): MITRE’s mission is to solve problems for a safer world. As a nonprofit organization, we partner with government, industry, and academia to address big, complex issues, such as improving our nation’s public health and well-being. For more than two decades, we have provided our federal sponsors with the best science, technology, engineering, and mathematics expertise to solve America’s biggest healthcare challenges, including through the federally funded research and development centers we manage.
I work in a dual role—as both Chief Technology Officer (CTO) and Chief Medical Officer (CMO). As CTO, I oversee our internal, independent research and development efforts. And as CMO, I help drive our programs and projects related to health, healthcare, medicine, or life sciences.
VD: You are doing some ground-breaking work unlocking siloed healthcare data. What are your views on the latest government guidelines around data sharing in healthcare? What are potential implications of these changes?
JS: In 2022, an order from the White House’s Office of Science and Technology Policy introduced new data sharing requirements for federally funded research. Researchers across academic institutions, nonprofits, and federal agencies are mandated to make research publications and their underlying data accessible to the public at no cost, immediately on publication. This new model gives researchers direct and equitable access to the raw data behind scientific publications, maximizing data reusability as well as experimental reproducibility.
This is an enormous step. Yet, it is only the beginning and calls for much more work to continue this trajectory. The immediate step two, in my mind, should be to unlock the patient and clinical data that resides in electronic health records (EHRs) and within healthcare delivery systems. Access to these treasure troves of population-scale data will unlock opportunities not imagined before.
VD: From your perspective, what are some challenges in unlocking these data?
JS: It’s mainly the lack of a standard data model. EHR vendors create systems and data standards which are largely inconsistent with each other. The difference in data types, fields, and formats adds complexity and hinders interoperability. Uniform, shared data standards created by the medical community will ensure data access for everyone in the ecosystem so they can use it to solve health problems for the country in the future.
VD: What is needed to bring about this change in how health data standards and common data models are approached?
JS: The key to success will be aligning incentives across the system to benefit all stakeholders—patients, providers, payers, regulators, for-profit manufacturers, academic institutions, and scientists. People will adopt something if it helps them do things significantly better. If we arrange the structures such that all the members of the ecosystem see that outcomes are better with a new approach, then we can get people to come on board.
More specifically, we should set the expectation that no health system will likely need to change or replace its existing EHRs or systems of record. Instead, we need a layer of interoperability, sitting on top of these systems, to digitize, normalize, and extract data, along with the right inferences and metadata, so that different systems can talk to each other.
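At its simplest, such an interoperability layer maps each vendor’s proprietary field names onto one shared data model. A minimal sketch in Python, where the vendor names, field names, and common model are all hypothetical and chosen only to illustrate the idea:

```python
# Per-vendor mappings from proprietary field names to a shared common model.
# Both the vendor exports and the common model here are invented examples.
VENDOR_FIELD_MAPS = {
    "vendor_a": {"pt_id": "patient_id", "dx_code": "diagnosis_code", "dx_dt": "diagnosis_date"},
    "vendor_b": {"patientIdentifier": "patient_id", "icd10": "diagnosis_code", "onsetDate": "diagnosis_date"},
}

def normalize_record(vendor: str, record: dict) -> dict:
    """Translate a vendor-specific record into the shared common model."""
    field_map = VENDOR_FIELD_MAPS[vendor]
    return {common: record[native] for native, common in field_map.items() if native in record}

# Two systems of record that cannot talk to each other directly...
a = normalize_record("vendor_a", {"pt_id": "123", "dx_code": "C50.9", "dx_dt": "2021-04-01"})
b = normalize_record("vendor_b", {"patientIdentifier": "456", "icd10": "C50.9", "onsetDate": "2022-07-15"})

# ...now share one schema, so downstream tools can query both uniformly.
print(a["diagnosis_code"] == b["diagnosis_code"])
```

Because the translation happens in this layer, neither underlying EHR has to change how it stores its data.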
VD: What do you see as the big opportunity here?
JS: The quantity of data is huge. Today, even a large clinical trial will have at most around 2,000 patients, because it’s too expensive and cumbersome to run beyond that. Imagine if you could tap into data for millions of patients. This would be a game changer. You could measure large segments of the population with different conditions. It’s fundamentally about scale, and this approach scales faster and better than anything else we’ve ever done with human biology.
VD: Could you elaborate on the work that you are doing to drive data standards and common data models for cancer?
JS: Most of the roughly 15 million individuals living with cancer in the U.S. have EHRs of some kind. But, critical data for developing new therapies or designing personalized treatment plans are locked in systems that do not communicate with each other.
mCODE, or Minimal Common Oncology Data Elements, is a data standard for cancer that aims to make high-quality data for all cancer types available to all stakeholders. It’s built on the core cancer data elements that clinical oncologists and clinical research oncologists have determined are critical to analyzing patient characteristics, treatments, and outcomes across patients and practices. Data from EHRs or the point of care are collected and made accessible through standard Fast Healthcare Interoperability Resources (FHIR) interfaces, the standard on which mCODE is built. This interoperability opens up real-world data from millions of cancer patients for everyone to analyze.
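In practice, mCODE works by profiling standard FHIR resources. A minimal sketch of what an mCODE-shaped FHIR Condition resource might look like, built as a plain Python dict; the field names follow the base FHIR Condition resource, but the profile URL, codes, and identifiers shown are illustrative assumptions rather than authoritative mCODE content:

```python
import json

def primary_cancer_condition(patient_ref: str, icd10_code: str, display: str) -> dict:
    """Build a minimal, illustrative FHIR Condition resource for a cancer diagnosis."""
    return {
        "resourceType": "Condition",
        "meta": {
            # mCODE constrains standard FHIR resources via profiles like this one.
            "profile": ["http://hl7.org/fhir/us/mcode/StructureDefinition/mcode-primary-cancer-condition"]
        },
        "subject": {"reference": patient_ref},
        "code": {
            "coding": [{"system": "http://hl7.org/fhir/sid/icd-10-cm",
                        "code": icd10_code, "display": display}]
        },
    }

condition = primary_cancer_condition("Patient/example", "C50.911",
                                     "Malignant neoplasm of right female breast")
print(json.dumps(condition, indent=2))
```

Because every participating system exchanges resources in this common shape, an analysis tool can query cancer records the same way regardless of which EHR produced them.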
VD: What are some steps you’re taking for wider usage of mCODE?
JS: We are bringing together and building a community of organizations wanting to apply mCODE for specific use cases, through our CODEX (Common Oncology Data Elements Extensions) initiative. Already, over 200 organizations have signed up to expand the adoption of mCODE, and we are growing fast.
VD: This is massive work, and incredible progress. What role do you see for the cloud in enabling all of this?
JS: I think the cloud is essential. I don’t see any alternative to it. The variety and complexity of data, the sheer scale of it, and the computing power required to enable this for hundreds of millions of individuals is something that can only be addressed through a cloud-based approach.
VD: What are some challenges, or roadblocks, in this journey that keep you awake at night?
JS: It’s the issues around permissions, consent, ownership, and data control. Should patients provide consent for data sharing, when some patients might not fully grasp the gravity of how their data will be used? Who should own the data—is it the community, the providers, the patients, or private payers? This is important, especially with patient privacy laws and HIPAA guidelines, and when there are for-profit private organizations who are a part of the ecosystem and need this data to fuel their R&D.
Next, to make healthcare and life sciences data truly useful, we need context for it in the form of additional data, such as quality of life and social determinants of health, including data on food deserts, transportation, and so on. Layering this over existing health data adds diversity and usefulness but also additional complexity. Adding these layers will make the data extremely challenging to use, unless we have shared standards that permit interoperability.
VD: What is the role of the patient in all of this?
JS: Each patient is different. We need to individualize the benefits for them. For some, it is about explaining how sharing their data can help their trusted clinicians give them the best care. And, on the other end of the spectrum, we have patients with terminal cancer who want others to learn from their journey. The dynamics of who they are as individuals are critical in getting this right and making it sustainable.
VD: What does the future look like?
JS: I think it’s bright and exciting, with tremendous opportunity. Imagine you’re an individual with a clinical condition you’ve just heard about, and you and your clinician are trying to decide the best course of action. If you had access to time-series data from everybody who’s ever been in a similar situation, you could interrogate data from patients like you, and look at their treatment plans and subsequent outcomes to determine the best way forward. And you could use cutting-edge technology like generative AI to reveal new insights.
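The “patients like me” query described above can be sketched in a few lines of Python. Everything here is hypothetical: an invented flat dataset of de-identified records, with made-up condition labels, treatments, and outcome scores, used only to show the shape of the question:

```python
# Invented, de-identified example records; real population-scale data would
# live behind the kind of interoperable, cloud-based system discussed above.
records = [
    {"condition": "X", "age_band": "60-69", "treatment": "A", "outcome_score": 0.82},
    {"condition": "X", "age_band": "60-69", "treatment": "B", "outcome_score": 0.64},
    {"condition": "X", "age_band": "60-69", "treatment": "A", "outcome_score": 0.78},
    {"condition": "Y", "age_band": "60-69", "treatment": "A", "outcome_score": 0.40},
]

def outcomes_for_patients_like(condition: str, age_band: str) -> dict:
    """Average outcome per treatment among patients similar to the query profile."""
    by_treatment: dict = {}
    for r in records:
        if r["condition"] == condition and r["age_band"] == age_band:
            by_treatment.setdefault(r["treatment"], []).append(r["outcome_score"])
    return {t: sum(scores) / len(scores) for t, scores in by_treatment.items()}

print(outcomes_for_patients_like("X", "60-69"))
```

At population scale, the same query pattern, run over millions of standardized records rather than four invented ones, is what would let a patient and clinician compare treatment options informed by everyone who came before.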
We need to remember that for technology to enable healthcare and life sciences in this manner, everyone needs to participate in some way because it affects all of us. Everybody who’s got some stake in this needs to be aligned and participatory. The most common issues can be solved by will and alignment.
VD: Thank you for this inspiring conversation. AWS is excited about the opportunities and positive impact for patients in aligning health data standards, and shares your commitment to help accelerate innovation across healthcare and life sciences.
To learn more about how AWS is helping customers innovate across healthcare and life sciences, visit https://aws.amazon.com/health/.