Posted On: Dec 19, 2022

We are excited to announce the general availability of Fortuna, an open-source library for uncertainty quantification of ML models. Fortuna provides calibration methods, such as conformal prediction, that can be applied to any trained neural network to obtain calibrated uncertainty estimates. The library further supports a number of Bayesian inference methods that can be applied to deep neural networks written in Flax.
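
Fortuna exposes conformal prediction through its own interfaces (see the documentation linked below). As an illustration of the underlying idea only, here is a minimal sketch of split conformal prediction for classification in plain NumPy; the function name and the random arrays standing in for real model outputs are assumptions for illustration, not Fortuna's API.

```python
import numpy as np

def conformal_prediction_sets(val_probs, val_labels, test_probs, alpha=0.1):
    """Split conformal prediction for classification (illustrative sketch).

    val_probs:  (n_val, n_classes) softmax outputs on a held-out calibration set
    val_labels: (n_val,) integer labels for the calibration set
    test_probs: (n_test, n_classes) softmax outputs on the test set
    alpha:      target miscoverage rate (e.g. 0.1 for ~90% coverage)
    """
    n = len(val_labels)
    # Nonconformity score: one minus the probability assigned to the true class.
    scores = 1.0 - val_probs[np.arange(n), val_labels]
    # Conformal quantile with the finite-sample correction ceil((n+1)(1-alpha))/n.
    q_level = min(np.ceil((n + 1) * (1 - alpha)) / n, 1.0)
    q_hat = np.quantile(scores, q_level, method="higher")
    # A label enters the prediction set if its nonconformity score is at most q_hat.
    return [np.where(1.0 - p <= q_hat)[0] for p in test_probs]

# Random inputs stand in for the outputs of a trained model.
rng = np.random.default_rng(0)
val_probs = rng.dirichlet(np.ones(10), size=500)
val_labels = rng.integers(0, 10, size=500)
test_probs = rng.dirichlet(np.ones(10), size=5)
for prediction_set in conformal_prediction_sets(val_probs, val_labels, test_probs):
    print(prediction_set)
```

The returned sets are guaranteed, on average, to contain the true label with probability at least 1 - alpha, which is the kind of calibrated uncertainty estimate the library provides on top of any trained network.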

Accurate estimation of predictive uncertainty is crucial for applications that involve critical decisions. Uncertainty estimates allow us to evaluate the reliability of model predictions, defer to human decision makers, or determine whether a model can be safely deployed. Fortuna makes it easy to run benchmarks and enables practitioners to build robust and reliable AI solutions by taking advantage of advanced uncertainty quantification techniques.

To learn more about the library, check out our blog post. To get started with Fortuna, you can consult the following resources:

GitHub repository
Official documentation
Examples of Fortuna’s usage