Bayesian Neural Networks Predicting Aleatoric and Epistemic Uncertainties

In machine learning, uncertainties pose a challenge throughout the pipeline, from data generation to model training. These uncertainties fall into two categories: aleatoric uncertainty, which stems from the data and cannot be eliminated, and epistemic uncertainty, which originates from the model and can be reduced. Bayesian neural networks (BNNs) have emerged as a promising approach to characterizing these uncertainties, although the computational cost of training reliable BNNs can be substantial. A further challenge when performing inference with BNNs is distinguishing aleatoric from epistemic uncertainty, because the two are usually entangled and difficult to separate. In this talk, we give an overview of the essential theory on how to train BNNs and how to determine uncertainties from them. We also share our insights on how to reasonably disentangle aleatoric and epistemic uncertainties. To evaluate the performance of selected BNNs, we conduct a series of comparative studies on different numerical problems. These studies explore the strengths and limitations of different BNN methods in predicting Quantities of Interest and various types of uncertainties, as well as their extrapolation capabilities. Based on the comparative studies, we report our findings, highlighting where the considered methods are applicable and proposing a potential strategy for predicting heteroscedastic uncertainties.
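The abstract does not specify how the speakers disentangle the two uncertainty types, but a common baseline, used here only as an illustrative sketch and not necessarily their approach, applies the law of total variance to Monte Carlo samples from the posterior: averaging the per-sample predicted variances approximates the aleatoric part, while the variance of the per-sample predicted means approximates the epistemic part. The array names `mu` and `sigma2` below are hypothetical placeholders for predictions drawn from posterior weight samples of a heteroscedastic BNN.

```python
# Minimal sketch (assumption, not the speakers' implementation):
# decomposing predictive uncertainty from posterior samples of a
# heteroscedastic BNN via the law of total variance.
import numpy as np

def decompose_uncertainty(mu, sigma2):
    """mu, sigma2: arrays of shape (num_posterior_samples, num_inputs)
    holding the predicted mean and variance from each posterior sample."""
    aleatoric = sigma2.mean(axis=0)   # E_theta[ sigma^2(x; theta) ]
    epistemic = mu.var(axis=0)        # Var_theta[ mu(x; theta) ]
    total = aleatoric + epistemic     # total predictive variance
    return aleatoric, epistemic, total

# Toy usage with random numbers standing in for 100 posterior samples
# evaluated at 5 test inputs.
rng = np.random.default_rng(0)
mu = rng.normal(size=(100, 5))
sigma2 = rng.uniform(0.1, 0.5, size=(100, 5))
aleatoric, epistemic, total = decompose_uncertainty(mu, sigma2)
print(aleatoric, epistemic, total)
```

Under this decomposition, the aleatoric term persists even with infinite data, whereas the epistemic term shrinks as the posterior concentrates, which is one reason the two are worth separating in practice.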