2024-09-12-dong24b.md

File metadata and controls

56 lines (56 loc) · 2.1 KB
---
title: Learning Distributionally Robust Tractable Probabilistic Models in Continuous Domains
abstract: Tractable probabilistic models (TPMs) have attracted substantial research
  interest in recent years, particularly because of their ability to answer various
  reasoning queries in polynomial time. In this study, we focus on the distributionally
  robust learning of continuous TPMs and address the challenge of distribution shift
  at test time by tackling the adversarial risk minimization problem within the framework
  of distributionally robust learning. Specifically, we demonstrate that the adversarial
  risk minimization problem can be solved efficiently when the model permits exact
  log-likelihood evaluation and efficient learning on weighted data. Our experimental
  results on several real-world datasets show that our approach achieves significantly
  higher log-likelihoods on adversarial test sets. Remarkably, the model learned via
  distributionally robust learning can at times achieve a higher average log-likelihood
  even on the original, uncorrupted test set.
openreview: SlxO1NpLiE
software:
section: Papers
layout: inproceedings
series: Proceedings of Machine Learning Research
publisher: PMLR
issn: 2640-3498
id: dong24b
month: 0
tex_title: Learning Distributionally Robust Tractable Probabilistic Models in Continuous Domains
firstpage: 1176
lastpage: 1188
page: 1176-1188
order: 1176
cycles: false
bibtex_author: Dong, Hailiang and Amato, James and Gogate, Vibhav and Ruozzi, Nicholas
author:
- given: Hailiang
  family: Dong
- given: James
  family: Amato
- given: Vibhav
  family: Gogate
- given: Nicholas
  family: Ruozzi
date: 2024-09-12
address:
container-title: Proceedings of the Fortieth Conference on Uncertainty in Artificial
  Intelligence
volume: '244'
genre: inproceedings
issued:
  date-parts:
  - 2024
  - 9
  - 12
pdf:
extras: []
---