AWS for Industries
Choice: Keeping pace with emerging models for generative AI in Life Sciences
Embracing generative AI is imperative for life sciences organizations to stay competitive. However, the rapid evolution of models and data strategies can be overwhelming. While this diverse array of choices can enhance business outcomes, life sciences leaders face the daunting task of planning for a future in which new AI breakthroughs emerge every week.
At the recent AWS Life Sciences Leaders Symposium, Brian Loyal of AWS and Joshua Batson of Anthropic took the stage to share guidance on these key AI ‘choices’. They outlined how to select the right AI/ML services for various use cases and how to tailor AI workloads to proprietary data, covering techniques like fine-tuning models for domain-specific tasks and prompt engineering, and explaining the advantages and tradeoffs of each.
This conversation, led by Lisa McFerrin of AWS, highlights the key insights from the session. Watch the recording here and read the full event recap here.
Lisa McFerrin: Thank you both for taking the time to share your expertise. For context, could you give our readers a quick overview of your roles and current priorities?
Brian: I’m a principal solutions architect at AWS, focused on AI/ML solutions for the healthcare and life science industries. I’ve worked in life science for more than 20 years, doing everything from pipetting on a lab bench to leading ML Engineering teams. I spend a lot of time thinking about how to improve drug development with AI, which I believe has tremendous potential to advance human health.
Joshua: I am a research scientist at Anthropic, studying how neural networks process information and what that means for performance and safety. My training was in math, and then I worked in biomedical research (genomics, microscopy, virology) for 7 years before working on AI. Large AI models are grown (through training) more than they are programmed, and so I’m taking a biological approach to understanding how they do what they do.
Lisa: Brian, in your symposium presentation, you shared a strategy for selecting the right generative AI service. How can life sciences organizations determine the best fit for their use case and teams?
Brian: The choice depends on the use case complexity, the team’s ML proficiency, and cost. That’s why AWS offers over 40 AI/ML services, organized into tiers.
The top layer — the application layer — includes services like Amazon Q Business, a generative AI–powered assistant for answering questions, generating content, and completing tasks securely with your enterprise data. It also includes Amazon Q Apps, a new way for users to create generative AI–powered apps that simplify daily tasks with just a few clicks.
The middle layer — the tooling layer — includes services like Amazon Bedrock, which offers a variety of foundation models via a single API. Customers can use these components to build secure and responsible generative AI applications to process their own data.
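For readers who want to see that “single API” idea in practice, here is a minimal sketch using boto3 and the Amazon Bedrock Converse API. It assumes AWS credentials and Bedrock model access are already configured in your account, and the model ID is illustrative; swapping models is largely a matter of changing that ID.

```python
import boto3

# Bedrock Runtime client; assumes AWS credentials and model access are configured.
bedrock = boto3.client("bedrock-runtime", region_name="us-east-1")

# The Converse API uses the same request shape across supported foundation models.
response = bedrock.converse(
    modelId="anthropic.claude-3-haiku-20240307-v1:0",  # illustrative model ID
    messages=[
        {
            "role": "user",
            "content": [{"text": "Summarize the mechanism of action of monoclonal antibodies."}],
        }
    ],
    inferenceConfig={"maxTokens": 512, "temperature": 0.2},
)

print(response["output"]["message"]["content"][0]["text"])
```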
The bottom layer — the infrastructure layer — offers tools for cost-efficient, scalable model training for ML engineers and scientists. Amazon SageMaker JumpStart provides a hub with prebuilt foundation models, algorithms, and ML solutions that you can deploy with a few clicks. This enables quick evaluation, customization, and sharing of ML artifacts within your organization.
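As a rough sketch of that JumpStart workflow in code rather than the console, the snippet below uses the SageMaker Python SDK to deploy a prebuilt model to a managed endpoint. It assumes an environment with a SageMaker execution role; the model ID is a placeholder, and instance defaults or quotas may differ in your account.

```python
from sagemaker.jumpstart.model import JumpStartModel

# Pick a prebuilt foundation model from the JumpStart hub.
# The model ID below is a placeholder; browse the JumpStart catalog for current IDs.
model = JumpStartModel(model_id="huggingface-llm-falcon-7b-instruct-bf16")

# Deploy to a managed SageMaker endpoint (instance type comes from the model's
# defaults unless overridden).
predictor = model.deploy()

# Query the endpoint, then clean up when finished.
result = predictor.predict({"inputs": "List three common protein expression systems."})
print(result)

predictor.delete_endpoint()
```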
To avoid being overwhelmed by options, I advise teams to: 1) Use managed services wherever possible, to save time and costs; 2) Choose services based on specific use cases rather than a one-size-fits-all approach; and 3) Develop team skills so builders can make informed decisions for each use case.
Lisa: That sounds like a logical approach not only to test generative AI capabilities but also to seamlessly integrate them into business workflows. You mentioned customizing generative AI workloads with proprietary data. What are the current options available to life sciences developers for that?
Brian: There are a few ways to do this.
- The first option is training, meaning modifying the weights of the model for your use case based on new data. Training can be important for cutting-edge projects like developing biological foundation models (bFMs) or understanding very technical language. This is a powerful approach, but you should carefully consider your training configuration and look at parameter-efficient training methods to manage costs (see the fine-tuning sketch after this list). We shared an example of this at re:Invent 2023, fine-tuning the ESM protein language model for a drug development task.
- If you prefer not to train the model, you can also customize your workflow by providing new information at runtime. Techniques like retrieval augmented generation (RAG) allow users to include relevant information from a document or database in model requests. This is a great way to keep the model output current, which is useful for tasks like literature reviews.
- The third option is prompt engineering. This involves tweaking how information is presented to the model to influence its responses. Off-the-shelf models like Anthropic’s Claude 3 are increasingly sophisticated, and prompt engineering can effectively guide their outputs.
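To make the parameter-efficient training mentioned in the first option more concrete, here is a minimal sketch, assuming the open-source Hugging Face transformers, peft, and datasets libraries, that attaches LoRA adapters to a small ESM-2 protein language model for a toy classification task. The data, labels, and hyperparameters are placeholders, and this is not the exact re:Invent 2023 workflow.

```python
# Minimal parameter-efficient fine-tuning (LoRA) sketch for an ESM-2 protein
# language model. Dataset, labels, and hyperparameters are illustrative only.
from transformers import AutoTokenizer, AutoModelForSequenceClassification, TrainingArguments, Trainer
from peft import LoraConfig, get_peft_model
from datasets import Dataset

model_name = "facebook/esm2_t12_35M_UR50D"  # a small public ESM-2 checkpoint
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForSequenceClassification.from_pretrained(model_name, num_labels=2)

# LoRA trains small adapter matrices instead of all model weights,
# which keeps fine-tuning costs down.
peft_config = LoraConfig(task_type="SEQ_CLS", r=8, lora_alpha=16, target_modules=["query", "value"])
model = get_peft_model(model, peft_config)
model.print_trainable_parameters()

# Toy dataset of protein sequences with binary labels (placeholder data).
data = Dataset.from_dict({
    "sequence": ["MKTAYIAKQRQISFVKSHFSRQLEERLGLIEVQ", "MGSSHHHHHHSSGLVPRGSHM"],
    "label": [1, 0],
})
data = data.map(lambda x: tokenizer(x["sequence"], truncation=True, padding="max_length", max_length=64))

trainer = Trainer(
    model=model,
    args=TrainingArguments(output_dir="esm2-lora", num_train_epochs=1, per_device_train_batch_size=2),
    train_dataset=data,
)
trainer.train()
```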
Lisa: How should teams be making the choice between these three strategies?
Brian: The right strategy is one that balances desired outcomes, engineering complexity, and costs — and is tailored to your specific use case. My advice is:
- Start with prompt engineering using frontier models like Anthropic’s Claude 3 Haiku, for low-cost, accurate responses. It’s ideal for proofs of concept (POCs) and testing out ideas at very low risk.
- Use information retrieval tools (like RAG) to introduce new information or proprietary data into your model, or to build QC checks (a minimal sketch follows this list).
- Opt for training when exploring novel use cases that push the boundaries of what’s possible with generative AI.
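As a minimal illustration of the retrieval approach in the second point, the sketch below embeds a few placeholder documents with an Amazon Titan embeddings model, retrieves the most relevant one by cosine similarity, and includes it in a prompt sent to a model on Amazon Bedrock. The model IDs and documents are illustrative; a production system would typically use a vector store or a managed option such as Knowledge Bases for Amazon Bedrock.

```python
import json
import boto3
import numpy as np

bedrock = boto3.client("bedrock-runtime", region_name="us-east-1")

def embed(text):
    """Embed text with an Amazon Titan embeddings model (illustrative model ID)."""
    resp = bedrock.invoke_model(
        modelId="amazon.titan-embed-text-v2:0",
        body=json.dumps({"inputText": text}),
    )
    return np.array(json.loads(resp["body"].read())["embedding"])

# Placeholder "knowledge base": a few internal documents.
docs = [
    "SOP-101: Plasmid preps must be quantified by UV absorbance before sequencing.",
    "SOP-202: Cell cultures are passaged at 80 percent confluence.",
]
doc_vectors = [embed(d) for d in docs]

question = "When should cell cultures be passaged?"
q_vec = embed(question)

# Retrieve the most similar document by cosine similarity.
scores = [float(np.dot(q_vec, v) / (np.linalg.norm(q_vec) * np.linalg.norm(v))) for v in doc_vectors]
context = docs[int(np.argmax(scores))]

# Augment the prompt with the retrieved context and ask the model.
response = bedrock.converse(
    modelId="anthropic.claude-3-haiku-20240307-v1:0",  # illustrative model ID
    messages=[{"role": "user", "content": [{"text": f"Context:\n{context}\n\nQuestion: {question}"}]}],
)
print(response["output"]["message"]["content"][0]["text"])
```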
Lisa: That’s great advice, and a fantastic segue to Joshua. What’s possible today with prompt engineering on Claude for life sciences?
Joshua: Three top ways of using Claude that I’m seeing are:
- AI collaboration: Claude acts as a partner, bringing expertise an individual doesn’t personally have. It can serve as a programmer, research assistant, regulatory affairs support, or language translator, assisting biologists with tasks beyond their areas of expertise or with undifferentiated heavy lifting.
- Data transformation: Claude converts structured data into unstructured formats and unstructured data into structured data, turns image data into text, and facilitates cross-language data integration. Models are good at recognizing and synthesizing different styles, syntax, and tones, across domains and modalities (a brief sketch of this pattern appears below).
- Knowledge synthesis: Claude can read long documents very quickly and locate patterns or errors that would be quite laborious for a human to find. Recognizing inconsistencies in a 100-page text or identifying patterns among thousands of genes is hard for a person (you can’t hold it all in your head at once), but Claude’s context window can fit it all.
These ways of using Claude apply in many domains, from doctors summarizing EHR data, to clinical trial managers correcting documentation inconsistencies, to biologists processing single-cell genomics data.
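As a small illustration of the data transformation pattern described above, the sketch below asks Claude on Amazon Bedrock to turn a short, fictional clinical note into structured JSON. The note, field names, and model ID are illustrative, and real clinical use would require appropriate validation and data-handling controls.

```python
import boto3

bedrock = boto3.client("bedrock-runtime", region_name="us-east-1")

# A fictional, unstructured clinical note (placeholder text).
note = (
    "Patient reports intermittent headaches for two weeks, started on "
    "ibuprofen 400 mg as needed; no known drug allergies."
)

# Ask Claude to convert the unstructured note into a structured record.
response = bedrock.converse(
    modelId="anthropic.claude-3-haiku-20240307-v1:0",  # illustrative model ID
    system=[{"text": "Extract the requested fields and respond with JSON only."}],
    messages=[
        {
            "role": "user",
            "content": [
                {
                    "text": (
                        "From the clinical note below, return JSON with the keys "
                        "'symptoms', 'medications', and 'allergies'.\n\n" + note
                    )
                }
            ],
        }
    ],
    inferenceConfig={"maxTokens": 512, "temperature": 0.0},
)

print(response["output"]["message"]["content"][0]["text"])
```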
Lisa: What are some of the fundamental reasons that make Claude great for life sciences?
Joshua: We’ve prioritized several key features to make Claude the leading and most trustworthy frontier model for life sciences, including:
- Comprehensive breadth of knowledge across science, health, law, languages, code, finance and more, with multimodal capabilities (such as image and graph analysis).
- A large context window and strong working memory, generating responses with near-perfect recall and industry-leading accuracy (vetted by third parties), which is crucial in life sciences where precision is essential.
- The fastest and most cost-effective model in its intelligence class.
- Safety by design. We pioneered “constitutional AI” to uphold principles of ethics, increase reliability and utility, and promote fairness, earning recognition from TIME magazine as one of the 3 Most Important AI Innovations of 2023.
- Top-tier security that prevents misuse of AI models and safeguards sensitive health data to minimize enterprise risk. Claude consistently ranks high in safety evaluations, proving over 10 times more resistant to jailbreak techniques than competitors and topping safety leaderboards.
These capabilities are non-negotiable for life sciences, meeting the industry’s demands for high precision, domain-specific expertise, and adherence to strict regulatory standards.
Lisa: Amazing. As we close, how do you envision AWS and Anthropic working together to shape the future of AI in life sciences?
Joshua: With Anthropic’s Claude models available on Amazon Bedrock, life sciences organizations can create tailored generative AI solutions with the right balance of intelligence, speed, and cost. This integration allows them to co-locate models and data, centralizing AI development where the data resides. Moreover, Bedrock’s supplementary services for customizing Claude further simplify the building process.
Second, AWS’s top-tier security aligns with our core commitment to AI safety, ensuring your AI operations are well-protected.
We are strong partners, and together we equip teams with models, engineering best practices, and implementation guidelines, from POCs to production.
Lisa: Thanks Joshua and Brian.
Generative AI is revolutionizing life sciences organizations, delivering an impact on par with historical breakthroughs like electricity and the internet. With 2024 dubbed the “Year of Production” for generative AI in life sciences, our team at AWS is prepared to help you swiftly deploy your high-impact use cases, while maintaining a commitment to ethical and responsible AI practices, alongside our partners like Anthropic.
Explore the path to production by visiting our website today.