Speaker: Pengfei Yang (Institute of Software, Chinese Academy of Sciences)
Venue: Room 1304, Sciences Building No. 1
To analyse local robustness properties of deep neural networks (DNNs), we present a practical framework from a model learning perspective. Based on black-box model learning with scenario optimisation, we abstract the local behaviour of a DNN via an affine model with a probably approximately correct (PAC) guarantee. From the learned model, we can infer the corresponding PAC-model robustness property. The innovation of our work is the integration of model learning into PAC robustness analysis: that is, we construct a PAC guarantee at the model level instead of over the sample distribution, which yields a more faithful and accurate robustness evaluation. This is in contrast to existing statistical methods without model learning. In the experimental evaluation, our method outperforms the state-of-the-art statistical method PROVERO, and it achieves more practical robustness analysis than the formal verification tool ERAN.
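The core idea of the abstract can be sketched as follows: sample perturbed inputs from a local neighbourhood, query the network as a black box, fit an affine surrogate, and record the worst sample error as a margin; scenario optimisation then gives a PAC guarantee that the margin bounds the error on unseen perturbations with high probability. The sketch below is purely illustrative (the function names, least-squares fitting choice, and sampling scheme are assumptions, not the actual DeepPAC implementation).

```python
import numpy as np

def learn_affine_pac_model(f, x0, radius, n_samples=500, seed=0):
    """Fit an affine surrogate g(x) = w.x + b of a black-box scalar
    function f on the L-infinity ball of the given radius around x0.

    Returns (w, b, margin), where margin bounds |f(x) - g(x)| on every
    drawn sample.  With enough i.i.d. samples, scenario optimisation
    lets this margin carry over to unseen points with a PAC guarantee.
    NOTE: an illustrative sketch, not the DeepPAC algorithm itself.
    """
    rng = np.random.default_rng(seed)
    d = x0.size
    # Draw perturbed inputs uniformly from the L-infinity ball.
    X = x0 + rng.uniform(-radius, radius, size=(n_samples, d))
    y = np.array([f(x) for x in X])              # black-box queries
    # Least-squares fit of y ~ X w + b (DeepPAC instead solves a
    # linear programme minimising the maximum error).
    A = np.hstack([X, np.ones((n_samples, 1))])
    coef, *_ = np.linalg.lstsq(A, y, rcond=None)
    w, b = coef[:-1], coef[-1]
    margin = float(np.max(np.abs(A @ coef - y))) # worst sample error
    return w, b, margin
```

Given the learned model, a robustness query reduces to reasoning about the affine surrogate: if the surrogate's classification-score gap stays above the margin over the whole ball, the PAC-model robustness property follows.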
Pengfei Yang is a postdoctoral researcher at the Institute of Software, Chinese Academy of Sciences. He mainly works on AI safety and probabilistic model checking. In the domain of AI safety, he has proposed a variety of methods including DeepSymbol, DeepLip, DeepSRGR, and DeepPAC, which correspond, respectively, to symbolic propagation in abstract interpretation, verification through Lipschitz constants, spurious-region-guided refinement, and a PAC-model-learning-based verification technique; he also participated in developing the first Chinese platform for DNN verification, PRODeep. Besides these, Pengfei Yang is also interested in probabilistic model checking, probabilistic programs, quantum computation, and reinforcement learning.