Artificial Intelligence
Track LLM model evaluation using Amazon SageMaker managed MLflow and FMEval
In this post, we show how to use FMEval and Amazon SageMaker to programmatically evaluate LLMs. FMEval is an open source LLM evaluation library designed to give data scientists and machine learning (ML) engineers a code-first experience for evaluating LLMs along dimensions such as accuracy, toxicity, fairness, robustness, and efficiency.
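As a taste of that code-first experience, here is a minimal sketch based on FMEval's factual-knowledge evaluation; module paths may differ across library versions, and the sample inputs are illustrative assumptions.

```python
# Minimal FMEval sketch: score a single model response for factual accuracy.
from fmeval.eval_algorithms.factual_knowledge import (
    FactualKnowledge,
    FactualKnowledgeConfig,
)

# "<OR>" separates the acceptable answers inside target_output.
eval_algo = FactualKnowledge(FactualKnowledgeConfig(target_output_delimiter="<OR>"))

# evaluate_sample() scores one prompt/response pair; evaluate() runs a dataset.
scores = eval_algo.evaluate_sample(
    target_output="London<OR>Londres",
    model_output="The capital of England is London.",
)
for score in scores:
    print(score.name, score.value)  # e.g. factual_knowledge 1.0
```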
Securing MLflow in AWS: Fine-grained access control with AWS native services
June 2024: The contents of this post are out of date. We recommend you refer to Announcing the general availability of fully managed MLflow on Amazon SageMaker for the latest. With Amazon SageMaker, you can manage the whole end-to-end machine learning (ML) lifecycle. It offers many native capabilities to help manage aspects of ML workflows, such […]
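Following that recommendation, a hedged sketch of logging to a fully managed SageMaker MLflow tracking server might look like the following; the tracking server ARN, experiment name, and logged values are hypothetical placeholders, and the sagemaker-mlflow plugin is assumed to be installed alongside mlflow.

```python
import mlflow

# Hypothetical tracking server ARN; with the sagemaker-mlflow plugin installed,
# MLflow signs requests to this server with your AWS credentials (SigV4), so
# access is governed by IAM policies rather than separate MLflow users.
mlflow.set_tracking_uri(
    "arn:aws:sagemaker:us-east-1:111122223333:mlflow-tracking-server/my-server"
)
mlflow.set_experiment("llm-evaluation")

with mlflow.start_run():
    mlflow.log_param("model_id", "my-llm")
    mlflow.log_metric("factual_knowledge", 1.0)
```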
Organize your machine learning journey with Amazon SageMaker Experiments and Amazon SageMaker Pipelines
Building a machine learning (ML) model is an iterative process: you keep refining until you find a candidate model that performs well and is ready to be deployed. As data scientists iterate through that process, they need a reliable way to easily track experiments so they can understand how each model version was built and how it performed. […]
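A minimal sketch of that tracking pattern with the SageMaker Experiments SDK follows; the experiment name, run name, and logged values are illustrative placeholders.

```python
# Track one training iteration as a run inside a named experiment.
from sagemaker.experiments.run import Run

with Run(experiment_name="churn-model", run_name="xgboost-depth-5") as run:
    # Record the hyperparameters that define this candidate model.
    run.log_parameter("max_depth", 5)
    run.log_parameter("eta", 0.2)
    # ... train and evaluate the candidate model here ...
    run.log_metric(name="validation:auc", value=0.91, step=0)
```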
Track your ML experiments end to end with Data Version Control and Amazon SageMaker Experiments
Data scientists often work towards understanding the effects of various data preprocessing and feature engineering strategies in combination with different model architectures and hyperparameters. Doing so requires you to cover large parameter spaces iteratively, and it can be overwhelming to keep track of previously run configurations and results while keeping experiments reproducible. This post walks […]
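One way to combine the two tools, sketched under assumed file paths and names, is to resolve the DVC-tracked dataset version and record it on the SageMaker Experiments run, so every result maps back to the exact data it was trained on.

```python
import dvc.api
from sagemaker.experiments.run import Run

# Resolve the storage URL of the dataset version currently checked out with DVC.
data_url = dvc.api.get_url("data/train.csv")

with Run(experiment_name="feature-engineering", run_name="v2-standardized") as run:
    # Tie this run to the exact data version for reproducibility.
    run.log_parameter("train_data", data_url)
    # ... preprocess, train, and evaluate this configuration here ...
    run.log_metric(name="test:rmse", value=3.14, step=0)
```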