Machine Learning Lab PhD Student Forum (Session 8): Out-of-Distribution Generalization via Invariant Learning
Speaker: Boya Zhang (PKU)
Venue: First-floor conference room, Jingyuan Courtyard 6, Peking University & Tencent Meeting 498 2865 6467
Abstract: Distributional shifts between training and testing data are usually inevitable, so improving the out-of-distribution (OOD) generalization ability and stable performance of machine learning algorithms is of key importance. In many real problems, the difficulty of generalizing out of distribution lies in figuring out which correlations in the data are spurious and unreliable, and which are invariant across different environments. Causality has often been described as having the core property of invariance under intervention; we can therefore leverage tools from causal inference to define the invariant correlations. Invariant learning methods propose to exploit the causally invariant correlations across multiple training environments, yielding OOD-optimal predictors. In this talk, we will first briefly introduce the relationship between causality, invariance, and generalization, and then give an overview of how different invariant learning methods improve OOD generalization performance.
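To make the idea of exploiting invariance across training environments concrete, here is a minimal NumPy sketch in the spirit of the IRMv1 penalty (Arjovsky et al.'s Invariant Risk Minimization). It is an illustrative toy, not the speaker's method: the environment generator, the feature names, and the linear predictors below are all assumptions made for the example. One feature (`x1`) drives the label by the same mechanism in every environment, while a second feature (`x2`) correlates with the label in an environment-dependent way; the penalty, the squared gradient of each environment's risk with respect to a dummy scalar classifier at 1.0, is small for the invariant predictor and large for the spurious one.

```python
import numpy as np

rng = np.random.default_rng(0)

def make_env(n, spurious_corr):
    """Toy environment: y depends on x1 invariantly; x2's link to y varies."""
    x1 = rng.normal(size=n)
    y = x1 + 0.1 * rng.normal(size=n)                   # invariant mechanism
    x2 = spurious_corr * y + 0.5 * rng.normal(size=n)   # environment-dependent
    return np.stack([x1, x2], axis=1), y

# Two training environments; the spurious correlation flips sign between them.
envs = [make_env(1000, 2.0), make_env(1000, -2.0)]

def risk_and_penalty(theta, X, y):
    """Squared-error risk plus an IRMv1-style invariance penalty."""
    pred = X @ theta
    risk = np.mean((pred - y) ** 2)
    # Penalty: squared gradient of the risk w.r.t. a scalar "dummy
    # classifier" w, evaluated at w = 1:  d/dw mean((w*pred - y)^2) |_{w=1}
    grad_w = np.mean(2.0 * (pred - y) * pred)
    return risk, grad_w ** 2

def total_objective(theta, lam):
    """Sum of per-environment risks plus lam times the invariance penalties."""
    risks, pens = zip(*(risk_and_penalty(theta, X, y) for X, y in envs))
    return sum(risks) + lam * sum(pens)

# A predictor using only the invariant feature vs. one leaning on x2.
theta_inv = np.array([1.0, 0.0])
theta_spur = np.array([0.0, 0.3])
```

With a sizable penalty weight (e.g. `lam = 100.0`), `total_objective(theta_spur, 100.0)` greatly exceeds `total_objective(theta_inv, 100.0)`: the spurious predictor's per-environment risks change under rescaling, so its gradients (and penalty) are nonzero, whereas the invariant predictor is simultaneously near-optimal in both environments.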