AWS for Industries

Executive Conversations: The promise of generative AI for the commercial pharma value chain

Merck & Co., Inc., Rahway, NJ, USA, one of the largest pharmaceutical companies in the world, has been using generative artificial intelligence (generative AI) to push the borders of data science and analytics in life sciences. Harish Nankani, Associate Vice President, Data and Analytics, Merck, and Suman Giri, Executive Director, Data Science, Merck, recently spoke to Ujjwal Ratan, Data Science and ML Leader for Healthcare and Life Sciences at AWS, on the promise and potential of this groundbreaking technology for commercial teams in life sciences organizations.

This Executive Conversation is part of a series of discussions held with leaders who are pushing the frontiers of the healthcare and life sciences industry with AWS technology.


Ujjwal Ratan (UR): Welcome. To get us started, could you share more about your role at Merck and your current priorities?

Harish Nankani (HN): As the AVP of Data Analytics for IT, I am a part of Merck’s Information Technology team supporting the Human Health Global Commercial organization. My team is focused on building the technology platform, as well as managing critical data functions to drive insights and decision-making for Merck globally.

Suman Giri (SG): I lead global commercial data science for Merck and my team drives data science and machine learning (ML) use cases to support the full commercial pharma value chain. This includes everything from launching a new medicine or vaccine, to recommending the right actions for our sales force, to determining the right patient and provider targets for our therapies.

UR: We’ve heard about the potential of AI/ML in pharma R&D. I’m curious: how are commercial teams at Merck using it?

HN: In the past, we were using ML techniques to validate things we already knew, through one-off experiments. That is shifting now, though we are still in our early days. Today, we are using AI/ML for mining insights from structured or unstructured data—like analyst reports or drug filing data—to better shape our strategy and guide our roadmap. The other area where we are using AI/ML is data governance.

SG: We use a lot of analytics for decision support and to generate insights for our sales and marketing teams. ML comes in when we need to produce recommendations or predictions, for example, to guide our sales force on what the next optimal action should be. ML also helps us expand inferences to situations where data is inadequate—such as identifying target patient populations for rare diseases or understanding patient journeys when a lot of clinical data is missing. In the future, I see ML playing a big role in measuring marketing impact and doing closed-loop marketing more effectively.

UR: Fascinating. Let’s talk about the buzzword we all hear today, generative AI. What are the most promising applications of the technology for commercial teams at Merck?

SG: Knowledge mining and management are the lowest-hanging fruit. Generative AI can help us mine information from primary market research reports, for example. Other promising use cases include: 1/ competitive intelligence; 2/ streamlining medical, legal and regulatory review processes; and 3/ understanding the approval likelihood of regulatory filings. One use case that excites me the most is hyper-personalizing content for health care providers by understanding their preferences and following through with the right engagement strategy. There are other exciting use cases around explainability and transparency.

HN: We’re seeing increased demand for large language models (LLMs) and generative AI. Suman described many of these use cases, and these capabilities will help make our commercial and technology organizations more productive. While these applications look appealing, the real challenge lies in integrating them into our business processes. This technology can only be a game-changer if it offers a differentiated outcome and helps us become faster and more efficient—not if we use it to prove something we already know.

UR: What are your thoughts on training smaller LLMs versus using a very large LLM like Falcon 40B or one of Anthropic’s models?

SG: It depends on your use case. To answer the “what” questions, like those we ask in primary research, we create an architecture that takes a base model and prompts it smartly through a conversational interface. That’s 30-40 percent of our use cases, and we’re using Amazon Bedrock to build these capabilities quickly.
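Prompting a base model over retrieved report excerpts, as described above, can be sketched roughly as follows. This is a minimal illustration using the shape of the Amazon Bedrock Converse API via boto3; the model ID, prompt wording, and helper function are assumptions for illustration, not Merck’s actual implementation.

```python
def build_converse_request(model_id: str, question: str, context_docs: list[str]) -> dict:
    """Assemble a Converse-API-style request: ground the base model in
    retrieved market-research excerpts, then ask the "what" question."""
    context = "\n\n".join(context_docs)
    prompt = (
        "Answer using only the market-research excerpts below.\n\n"
        f"Excerpts:\n{context}\n\n"
        f"Question: {question}"
    )
    return {
        "modelId": model_id,
        "messages": [{"role": "user", "content": [{"text": prompt}]}],
        "inferenceConfig": {"maxTokens": 512, "temperature": 0.2},
    }

# Sending the request requires AWS credentials and Bedrock model access:
# import boto3
# client = boto3.client("bedrock-runtime")
# response = client.converse(**build_converse_request(
#     "anthropic.claude-3-haiku-20240307-v1:0",
#     "What themes appear in these reports?",
#     excerpts))
# answer = response["output"]["message"]["content"][0]["text"]
```

Keeping retrieval and prompting separate from the model call, as here, makes it easier to swap base models without changing the surrounding workflow.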

On the other hand, for answering “how” questions—like “How have things evolved in the past 10 years?”—we need to do supervised fine-tuning and update the weights on a base model. Architectural considerations become a lot more important at this point. You need graphics processing units (GPUs) at the backend, constant updating of model weights in response to new information, and built-in prompting. That is phase two for us and it will be the goldmine.

HN: Ultimately, it all depends on the training data available and the model’s capability to adapt and learn. Machines are not yet at a point where they can do this on their own. The bottleneck I see is good-quality training data, which requires knowledgeable people to curate it.

UR: This is a great segue to my next question—how should organizations design their data systems to take advantage of these technologies?

HN: I like my rule of three. First, give data users a configurable environment to get started—personalize the experience for different data users. Second, democratize the data, but with guardrails. It’s important to have processes for quickly onboarding data and making it easily accessible and searchable. But, at the same time, we must consider controlled access, as we may be dealing with sensitive data. Third, determine a strategy to generate value from unstructured data by effectively storing, indexing, and qualifying it to train models. This is going to be critical as the generative AI and LLM journey picks up.

SG: Historically, health care as an industry has put off managing unstructured data at scale, but the imperative to address it is now much stronger. Unstructured data—even in electronic health records (EHRs)—has progressed from text-only to include video, audio and images. Tomorrow, we might want to build models on recorded customer calls, so we need to prepare for that future. This means understanding the technology, privacy and legal implications for health care companies.

UR: What are some guardrails that you have in place for leveraging generative AI?

SG: First, we’re aligning technology to our business goals, because the risks are too high if anything goes wrong. We analyze every use case objectively to ensure the benefit is significant enough to justify our risk exposure.

Second, it is important to track experiments and maintain a record of all tuned parameters and the resulting outputs. This makes the process repeatable with a different model if the first attempt doesn’t work.
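One lightweight way to keep such a record is an append-only log of each run’s model, parameters, prompt, and output. The sketch below is purely illustrative of that idea, not Merck’s actual tooling; the field names are assumptions.

```python
import json
import time
from pathlib import Path

def log_experiment(log_path, model_id, params, prompt, output):
    """Append one experiment record to a JSON Lines file so any run
    can be replayed later, possibly against a different model."""
    record = {
        "timestamp": time.time(),  # when the run happened
        "model_id": model_id,      # which base model was used
        "params": params,          # all tuned parameters for this run
        "prompt": prompt,          # exact input sent to the model
        "output": output,          # what the model returned
    }
    with Path(log_path).open("a", encoding="utf-8") as f:
        f.write(json.dumps(record) + "\n")
    return record
```

Because each line is a complete record, rerunning an experiment against a new model is just a matter of reading a line back and substituting a different `model_id`.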

HN: Auditability and explainability will be critical. For example, as part of our commitment to using this technology safely and responsibly, we need to be able to demonstrate that the 100,000 personalized emails for health care providers we created using generative AI do not share any patient information or state an unapproved claim. As models grow to trillions of parameters, it is going to get harder to explain their results.
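To make the auditability idea concrete, an automated pre-screen over generated drafts might look like the naive sketch below. The patterns and phrases are placeholder examples invented for illustration; a real medical, legal and regulatory review process is far more involved and would not reduce to keyword matching.

```python
import re

# Placeholder screening rules, invented for illustration only.
PII_PATTERNS = [r"\b\d{3}-\d{2}-\d{4}\b"]  # e.g., an SSN-like string
UNAPPROVED_CLAIMS = ["cures", "guaranteed", "100% effective"]

def screen_email(text: str) -> list[str]:
    """Return a list of audit flags for a generated draft; an empty
    list means it passes this first automated check and proceeds to
    human review."""
    flags = []
    for pattern in PII_PATTERNS:
        if re.search(pattern, text):
            flags.append(f"possible PII match: {pattern}")
    lowered = text.lower()
    for phrase in UNAPPROVED_CLAIMS:
        if phrase in lowered:
            flags.append(f"unapproved claim language: '{phrase}'")
    return flags
```

Logging the flags (and the clean passes) for every generated email is what produces the audit trail: evidence, per message, of what was checked and why it was allowed to go out.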

UR: How will you measure the success of your generative AI projects for the commercial organization?

HN: First, we’re going to measure the impact on business outcomes, customers and patients reached, customer experience, etc. Second, we’re going to measure activity—checking if customers feel more connected to us as a company. Finally, we will measure efficiency—checking if it’s making our processes more efficient and getting our medicines and vaccines to the market faster.

SG: It’s still early days and we don’t yet have clear benchmarks for success in place—and it’s too early to constrain it by metrics. As long as there is a business rationale and it adheres to our standards for ethical use of AI, we should be good. For example, using generative AI during our review of marketing content to flag potential risks can help us shrink the process from weeks to a few hours. If that’s possible, we’re talking real benefit, both in terms of efficiency and sales.

UR: What’s the collaboration with AWS been like around AI/ML?

HN: AWS is our organizational standard for core infrastructure and tooling capabilities, and we rely on AWS for most of our data systems. We leverage foundational AI/ML capabilities from AWS, including Amazon SageMaker, Amazon Textract, and now, Amazon Bedrock.

For example, we are building generative AI capabilities in our U.S. patient-level analytics workflows. In other divisions of our company, using AWS services, we can predict patient journeys and understand disease progression by integrating anonymized internal and third-party data, subject to the conditions of the patients’ consent at the time the data was collected. Not only will this help accelerate R&D, but it will also optimize our go-to-market strategies.

We are also working with AWS to bring us closer to our data partners, close gaps in data sharing, and build our data governance ecosystem for responsible generative AI.

SG: I love the way our relationship with AWS has evolved. A key highlight for my team was an Amazon SageMaker training session we did. The degree of collaboration made me feel like we were operating as a single team. That level of give and take is valuable to both Merck and AWS, and we have the potential to deepen it.

UR: What is the one aspect of generative AI that you have high hopes for?

HN: For me, it’s faster experimentation to drive value—whether that looks like better patient care or faster access.

SG: Drug discovery, particularly if we can solve the problem of protein binding. More broadly speaking, I’m most hopeful for the agent construct in generative AI so that autonomous agents can plan a set of activities and launch a series of actions, but it feels a bit futuristic given our current regulations.

UR: Thank you for this fascinating conversation today. AWS is striving to drive value through generative AI for organizations of all sizes, and enable breakthroughs for health care and life sciences.


See how Merck is innovating with generative AI on AWS across other areas of their value chain.

To learn more about how AWS is helping customers innovate across healthcare and life sciences, visit https://aws.amazon.com/health/

Ujjwal Ratan


Ujjwal Ratan is a Principal Machine Learning Specialist in the Global Healthcare and Life Sciences team at Amazon Web Services. He works on the application of machine learning and deep learning to real-world industry problems like medical imaging, unstructured clinical text, genomics, precision medicine, clinical trials and quality of care improvement. He has expertise in scaling machine learning/deep learning algorithms on the AWS cloud for accelerated training and inference. In his free time, he enjoys listening to (and playing) music and taking unplanned road trips with his family.

Harish Nankani


Harish Nankani is the Associate Vice President, Commercial Data & Analytics, at Merck. He is a high-energy data & analytics leader with a deep understanding of life sciences commercial processes, data, metrics, and outcomes.

Suman Giri


Suman Giri is the global head of commercial data science at Merck. As part of his role, he leads a globally distributed team that works on use cases and capabilities across the commercial pharma value chain for all of Merck’s therapeutic areas and markets. He is passionate about designing responsible ML systems and solutions that are built for scale and with the end-user experience in mind. He holds a PhD in Advanced Infrastructure Systems from Carnegie Mellon University and degrees in Physics and Mathematics from Oberlin College.