Sitemap

A list of all the posts and pages found on the site. For you robots out there, there is an XML version available for digesting as well.

Pages

Posts

Future Blog Post

less than 1 minute read

Published:

This post will show up by default. To keep future-dated posts from appearing until their publish date, edit _config.yml and set future: false.

Blog Post number 4

less than 1 minute read

Published:

This is a sample blog post. Lorem ipsum... I can't remember the rest of lorem ipsum and don't have an internet connection right now. Testing, testing, testing this blog post. Blog posts are cool.

Blog Post number 3

less than 1 minute read

Published:

This is a sample blog post. Lorem ipsum... I can't remember the rest of lorem ipsum and don't have an internet connection right now. Testing, testing, testing this blog post. Blog posts are cool.

Blog Post number 2

less than 1 minute read

Published:

This is a sample blog post. Lorem ipsum... I can't remember the rest of lorem ipsum and don't have an internet connection right now. Testing, testing, testing this blog post. Blog posts are cool.

Blog Post number 1

less than 1 minute read

Published:

This is a sample blog post. Lorem ipsum... I can't remember the rest of lorem ipsum and don't have an internet connection right now. Testing, testing, testing this blog post. Blog posts are cool.

Portfolio

Publications

Hierarchize Pareto Dominance in Multi-Objective Stochastic Linear Bandits

Published in Proceedings of the 38th AAAI Conference on Artificial Intelligence (AAAI-24), 2024

This paper introduces mixed Pareto-lexicographic orders for multi-objective stochastic linear bandits, allowing algorithms to capture both Pareto and lexicographic preferences via the Grossone methodology. A toy sketch of plain Pareto dominance follows the citation below.

Recommended citation: Cheng, J., Xue, B., Yi, J., & Zhang, Q. (2024). “Hierarchize Pareto Dominance in Multi-Objective Stochastic Linear Bandits.” Proceedings of the 38th AAAI Conference on Artificial Intelligence (AAAI-24), pp. 11489–11497.
Download Paper | Download BibTeX
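For readers new to multi-objective bandits, the snippet below sketches the plain Pareto-dominance test that underlies Pareto-front identification. It is a toy NumPy illustration only: the paper's mixed Pareto-lexicographic orders and the Grossone machinery are not reproduced, and the pareto_front helper and example reward matrix are invented for this sketch.

```python
import numpy as np

def pareto_front(rewards: np.ndarray) -> np.ndarray:
    """Indices of arms whose mean-reward vectors are not Pareto-dominated.

    rewards: (n_arms, n_objectives) array of estimated mean rewards.
    Arm i dominates arm j if it is >= on every objective and > on at least one.
    """
    n = rewards.shape[0]
    dominated = np.zeros(n, dtype=bool)
    for i in range(n):
        for j in range(n):
            if i != j and np.all(rewards[i] >= rewards[j]) and np.any(rewards[i] > rewards[j]):
                dominated[j] = True
    return np.flatnonzero(~dominated)

# Example: 4 arms, 2 objectives.
means = np.array([[0.9, 0.1],
                  [0.5, 0.5],
                  [0.4, 0.4],
                  [0.1, 0.9]])
print(pareto_front(means))  # [0 1 3] -- arm 2 is dominated by arm 1
```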

Practical Multi-fidelity Machine Learning: Fusion of Deterministic and Bayesian Models

Published in arXiv preprint, 2024

A practical multi-fidelity strategy for problems spanning low- and high-dimensional domains, integrating a non-probabilistic regression model for the low-fidelity data with a Bayesian model for the high-fidelity data. A generic illustration of this kind of fusion follows the citation below.

Recommended citation: Yi, J., Cheng, J., & Bessa, M. (2024). “Practical multi-fidelity machine learning: fusion of deterministic and Bayesian models.” arXiv preprint arXiv:2407.15110.
Download Paper | Download BibTeX
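A hedged sketch of the general fusion idea follows, assuming a deterministic random-forest surrogate for the plentiful low-fidelity data whose prediction is passed as an extra feature to a Gaussian-process model of the scarce high-fidelity data. This is a common multi-fidelity pattern rather than the paper's specific models; the toy functions, data sizes, and variable names are invented for illustration.

```python
import numpy as np
from sklearn.ensemble import RandomForestRegressor
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF, WhiteKernel

rng = np.random.default_rng(0)

# Cheap low-fidelity data (plentiful) and expensive high-fidelity data (scarce).
x_lf = rng.uniform(0, 1, (200, 1))
y_lf = np.sin(8 * x_lf[:, 0]) + 0.3 * x_lf[:, 0]            # biased low-fidelity model
x_hf = rng.uniform(0, 1, (15, 1))
y_hf = np.sin(8 * x_hf[:, 0]) + 0.05 * rng.normal(size=15)   # noisy high-fidelity truth

# Step 1: deterministic (non-probabilistic) low-fidelity regressor.
lf_model = RandomForestRegressor(n_estimators=100, random_state=0).fit(x_lf, y_lf)

# Step 2: Bayesian high-fidelity model (here a GP) that takes the
# low-fidelity prediction as an extra input feature.
features_hf = np.column_stack([x_hf, lf_model.predict(x_hf)])
gp = GaussianProcessRegressor(kernel=RBF() + WhiteKernel(), normalize_y=True).fit(features_hf, y_hf)

# Predict with uncertainty at new points.
x_new = np.linspace(0, 1, 5).reshape(-1, 1)
features_new = np.column_stack([x_new, lf_model.predict(x_new)])
mean, std = gp.predict(features_new, return_std=True)
print(mean, std)
```

Placing the Bayesian model at the high-fidelity level is what gives the fused prediction a usable uncertainty estimate, which is the practical point of this arrangement.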

Cooperative Bayesian and Variance Networks Disentangle Aleatoric and Epistemic Uncertainties

Published in arXiv preprint, 2025

A simple yet effective cooperative training strategy that integrates a variance-estimation network with a Bayesian neural network, achieving accurate mean prediction while disentangling aleatoric and epistemic uncertainties. A generic aleatoric/epistemic illustration follows the citation below.

Recommended citation: Yi, J. & Bessa, M. (2025). “Cooperative Bayesian and Variance Networks Disentangle Aleatoric and Epistemic Uncertainties.” arXiv preprint arXiv:2505.02743.
Download Paper | Download BibTeX
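To make the aleatoric/epistemic split concrete, here is a minimal PyTorch sketch assuming MC dropout as a stand-in for the Bayesian network: a separate variance network models input-dependent noise (aleatoric), while the spread of dropout samples gives a crude epistemic proxy. The authors' cooperative training scheme is not reproduced; the network sizes, training loop, and toy data are invented for the example.

```python
import torch
import torch.nn as nn

# Toy 1-D regression data with input-dependent noise.
torch.manual_seed(0)
x = torch.rand(256, 1)
y = torch.sin(6 * x) + 0.3 * x * torch.randn_like(x)   # noise grows with x (aleatoric)

mean_net = nn.Sequential(nn.Linear(1, 64), nn.ReLU(), nn.Dropout(0.1), nn.Linear(64, 1))
var_net = nn.Sequential(nn.Linear(1, 64), nn.ReLU(), nn.Linear(64, 1), nn.Softplus())  # variance > 0

opt = torch.optim.Adam(list(mean_net.parameters()) + list(var_net.parameters()), lr=1e-3)
nll = nn.GaussianNLLLoss()

for _ in range(2000):
    opt.zero_grad()
    loss = nll(mean_net(x), y, var_net(x) + 1e-6)  # heteroscedastic Gaussian NLL
    loss.backward()
    opt.step()

# Aleatoric uncertainty: predicted noise variance from the variance network.
# Epistemic uncertainty (crude proxy): spread of MC-dropout mean predictions.
x_test = torch.linspace(0, 1, 5).reshape(-1, 1)
mean_net.train()                                     # keep dropout active at test time
with torch.no_grad():
    mc = torch.stack([mean_net(x_test) for _ in range(50)])
    print("aleatoric var:", var_net(x_test).squeeze())
    print("epistemic var:", mc.var(dim=0).squeeze())
```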

Single-to-multi-fidelity history-dependent learning with uncertainty quantification and disentanglement

Published in arXiv preprint, 2025

This work generalizes data-driven learning to history-dependent multi-fidelity settings, enabling uncertainty quantification and the disentanglement of model (epistemic) from noise (aleatoric) uncertainty. A toy history-dependent surrogate is sketched after the citation below.

Recommended citation: Yi, J., Ferreira, B. P., & Bessa, M. A. (2025). “Single- to multi-fidelity history-dependent learning with uncertainty quantification and disentanglement.” arXiv preprint arXiv:2507.13416.
Download Paper | Download BibTeX
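As a minimal picture of what "history-dependent" means here, the sketch below trains a GRU surrogate whose prediction at each step depends on the entire input path, again using MC dropout as a rough epistemic proxy. It is a generic recurrent-regression example, not the paper's single-to-multi-fidelity method; the HistorySurrogate class and the cumulative-sum toy target are made up for illustration.

```python
import torch
import torch.nn as nn

torch.manual_seed(0)

class HistorySurrogate(nn.Module):
    """Per-step prediction that depends on the whole input path so far (via a GRU)."""
    def __init__(self, hidden=32):
        super().__init__()
        self.rnn = nn.GRU(input_size=1, hidden_size=hidden, batch_first=True)
        self.head = nn.Sequential(nn.Dropout(0.1), nn.Linear(hidden, 1))

    def forward(self, path):                  # path: (batch, steps, 1)
        states, _ = self.rnn(path)
        return self.head(states)              # (batch, steps, 1)

# Toy path-dependent target: running cumulative sum of the input signal.
paths = torch.randn(64, 20, 1)
targets = torch.cumsum(paths, dim=1)

model = HistorySurrogate()
opt = torch.optim.Adam(model.parameters(), lr=1e-3)
for _ in range(500):
    opt.zero_grad()
    nn.functional.mse_loss(model(paths), targets).backward()
    opt.step()

# Crude epistemic-uncertainty proxy: spread of MC-dropout predictions.
model.train()                                 # keep dropout active for sampling
with torch.no_grad():
    samples = torch.stack([model(paths[:1]) for _ in range(30)])
print("mean:", samples.mean(dim=0).squeeze()[-1].item(),
      "epistemic std:", samples.std(dim=0).squeeze()[-1].item())
```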

Talks

Teaching

Teaching experience 1

Undergraduate course, University 1, Department, 2014

This is a description of a teaching experience. You can use markdown like any other post.

Teaching experience 2

Workshop, University 1, Department, 2015

This is a description of a teaching experience. You can use markdown like any other post.