Single- to multi-fidelity history-dependent learning with uncertainty quantification and disentanglement

Published in arXiv preprint, 2025

Data-driven learning is generalized to consider history-dependent multi-fidelity data, while quantifying epistemic uncertainty and disentangling it from data noise (aleatoric uncertainty). This generalization is hierarchical and adapts to different learning scenarios: from training the simplest single-fidelity deterministic neural networks up to the proposed multi-fidelity variance estimation Bayesian recurrent neural networks. The proposed methodology is demonstrated on different data-driven constitutive modeling scenarios for history-dependent plasticity of elastoplastic biphasic materials, including multiple fidelities with and without aleatoric uncertainty (noise). The method accurately predicts the response and quantifies model error, while also discovering the noise distribution when present. The versatility and generality of the proposed method open opportunities for future real-world applications in diverse scientific and engineering domains, especially the most challenging cases involving design and analysis under uncertainty.
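To illustrate one rung of this hierarchy, the sketch below shows a single-fidelity variance-estimation recurrent network: a GRU maps a strain history to a predicted stress mean and variance, and training with a Gaussian negative log-likelihood lets the variance head capture aleatoric noise. This is a minimal illustration under assumed names and dimensions (e.g. `VarianceEstimationRNN`, six strain/stress components), not the paper's Bayesian multi-fidelity implementation.

```python
# Minimal sketch (assumed, not the paper's code): a recurrent network that
# predicts both a mean response and an aleatoric variance for each time step.
import torch
import torch.nn as nn

class VarianceEstimationRNN(nn.Module):
    """GRU mapping a strain history to a stress mean and variance per step."""
    def __init__(self, n_inputs: int, n_outputs: int, hidden: int = 64):
        super().__init__()
        self.rnn = nn.GRU(n_inputs, hidden, batch_first=True)
        self.mean_head = nn.Linear(hidden, n_outputs)     # predicted response
        self.logvar_head = nn.Linear(hidden, n_outputs)   # log of aleatoric variance

    def forward(self, x):
        h, _ = self.rnn(x)                     # (batch, time, hidden)
        mean = self.mean_head(h)
        var = torch.exp(self.logvar_head(h))   # ensure positive variance
        return mean, var

# Training with a Gaussian negative log-likelihood lets the variance output
# absorb the data noise while the mean output fits the underlying response.
model = VarianceEstimationRNN(n_inputs=6, n_outputs=6)  # e.g. strain -> stress components
loss_fn = nn.GaussianNLLLoss()
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)

strain_paths = torch.randn(32, 100, 6)   # synthetic stand-in data
stress_paths = torch.randn(32, 100, 6)

for _ in range(10):
    mean, var = model(strain_paths)
    loss = loss_fn(mean, stress_paths, var)
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
```

In the full method described above, the deterministic weights of such a network would be replaced by Bayesian ones (to quantify epistemic uncertainty) and the training data would span multiple fidelities; this sketch only conveys the variance-estimation idea used to disentangle aleatoric noise.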

Recommended citation: Yi, J., Ferreira, B. P., & Bessa, M. A. (2025). “Single- to multi-fidelity history-dependent learning with uncertainty quantification and disentanglement.” arXiv preprint arXiv:2507.13416.
Download Paper | Download Bibtex