Ground truth generation and review best practices for evaluating generative AI question-answering with FMEval

Author: Samantha Stuart

A diagram showing a generation chain followed by a judge chain, which intelligently routes requests back for re-ranking when required


In this post, we discuss best practices for applying LLMs to generate ground truth for evaluating question-answering assistants with FMEval at enterprise scale. FMEval, the Foundation Model Evaluations Library, is a comprehensive evaluation suite from Amazon SageMaker Clarify that provides standardized implementations of metrics to assess quality and responsibility. To learn more about FMEval, see Evaluate large language models for quality and responsibility.

Ground truth curation and metric interpretation best practices for evaluating generative AI question answering using FMEval

We also cover ground truth curation and metric interpretation best practices for using FMEval to evaluate question-answering applications for factual knowledge and quality.
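
As a concrete starting point, the following is a minimal sketch of running an FMEval factual knowledge evaluation against a curated ground truth dataset. The dataset file, its field names, and the Amazon Bedrock model ID are illustrative placeholders rather than values from this post; adapt them to your own data and the model under test.

```python
# Minimal sketch: FMEval factual knowledge evaluation over a curated
# question-answering ground truth dataset (illustrative names throughout).
from fmeval.constants import MIME_TYPE_JSONLINES
from fmeval.data_loaders.data_config import DataConfig
from fmeval.eval_algorithms.factual_knowledge import FactualKnowledge, FactualKnowledgeConfig
from fmeval.model_runners.bedrock_model_runner import BedrockModelRunner

# Point FMEval at a JSON Lines dataset with one question and its ground truth answer(s) per record.
data_config = DataConfig(
    dataset_name="qa_ground_truth",
    dataset_uri="qa_ground_truth.jsonl",   # hypothetical local dataset file
    dataset_mime_type=MIME_TYPE_JSONLINES,
    model_input_location="question",       # field holding the question
    target_output_location="answers",      # field holding the ground truth answer(s)
)

# Wrap the model under evaluation; here an Amazon Bedrock-hosted model is assumed.
model_runner = BedrockModelRunner(
    model_id="anthropic.claude-v2",
    output="completion",
    content_template='{"prompt": $prompt, "max_tokens_to_sample": 500}',
)

# Run the factual knowledge evaluation; multiple acceptable answers in the
# target field can be separated by the configured delimiter.
eval_algo = FactualKnowledge(FactualKnowledgeConfig(target_output_delimiter="<OR>"))
eval_output = eval_algo.evaluate(
    model=model_runner,
    dataset_config=data_config,
    prompt_template="$model_input",
    save=True,   # also write per-record results to the local output path
)
print(eval_output)
```

The factual knowledge score for each record reflects whether a ground truth answer appears in the model's response, which is why careful ground truth curation directly shapes how interpretable the aggregate metric is.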