The Fall 2021 seminar "Trustworthy Intelligent Software" (可信智能软件) is offered to undergraduate students of the School of Mathematical Sciences, Peking University.

Time: Mondays, 8:10-9:50 am.

Place: Room 1304, Science Building No. 1 (理科一号楼).

In recent years, intelligent software systems such as those based on deep learning have been increasingly deployed in safety-critical domains, including autonomous driving and medical diagnosis. These domains impose very strict requirements on trustworthiness, robustness, safety, and related properties, since a single failure can cause enormous losses. However, such systems lack global decision robustness and interpretability, as well as guarantees of trustworthiness and safety, and this has become the main technical bottleneck for their adoption in safety-critical applications. How to ensure the correct operation of large-scale, complex intelligent software systems is a scientific problem and a major challenge facing mathematics, computer science, artificial intelligence, and other fields. In this seminar, students will read and present recent papers from this area. Participants are expected to have some background in the relevant topics (machine learning / deep learning / neural networks, etc.) and some familiarity with mathematical logic and automata theory. Interested students may choose a specific problem to pursue as a research project after the seminar ends.

 

No. | Date | Place | Title | Speaker(s)
1 | 2021.10.11 | 1304 | Introduction | 孙猛
2 | 2021.10.18 | 1304 | Formal Modeling and Verification | 孙猛
3 | 2021.10.25 | 1304 | Introduction to Testing | 孙猛
4 | 2021.11.1 | 1304 | Automata Extraction | 1. 栾晓坤, 2. 邵凯诚
5 | 2021.11.8 | 1304 | Scalable Verification | 1. 朱宴余, 2. 马允轩
6 | 2021.11.15 | 1304 | Abstraction-refinement and symbolic approaches | 1. 周航, 2. 栾晓坤
7 | 2021.11.22 | 1304 | SAT/SMT-based Verification | 1. 朱宴余, 2. 邵凯诚
8 | 2021.11.29 | 1304 | Abstract Interpretation | TBD
9 | 2021.12.6 | 1304 | Approximation and optimization | TBD
10 | 2021.12.13 | 1304 | Coverage | TBD
11 | 2021.12.20 | 1304 | Mutation testing and concolic testing | TBD
12 | 2021.12.27 | 1304 | Fuzzing | TBD

Papers to choose from for the presentations:

1. Automata extraction:

[1] Zhiwu Xu, Cheng Wen, Shengchao Qin, Mengda He. Extracting automata from neural networks using active learning. PeerJ Computer Science, vol. 7: e436, 2021.

[2] Mohammadhosein Hasanbeig, Natasha Yogananda Jeppu, Alessandro Abate, Tom Melham, Daniel Kroening. DeepSynth: Automata Synthesis for Automatic Task Segmentation in Deep Reinforcement Learning. Proceedings of AAAI 2021, pages 7647-7656, 2021.

[3] Xiyue Zhang, Xiaoning Du, Xiaofei Xie, Lei Ma, Yang Liu, Meng Sun. Decision-Guided Weighted Automata Extraction from Recurrent Neural Networks. Proceedings of AAAI 2021, pages 11699-11707, 2021.

[4] Gail Weiss, Yoav Goldberg, Eran Yahav. Extracting Automata from Recurrent Neural Networks Using Queries and Counterexamples. Proceedings of ICML 2018, pages 5244-5253.

2. Scalable verification of neural networks:

[1] Thomas A. Henzinger, Mathias Lechner, Dorde Zikelic. Scalable Verification of Quantized Neural Networks. Proceedings of AAAI 2021, pages 3787-3795, AAAI Press, 2021.

[2] Wonryong Ryou, Jiayu Chen, Mislav Balunovic, Gagandeep Singh, Andrei Marian Dan, Martin T. Vechev. Scalable Polyhedral Verification of Recurrent Neural Networks. Proceedings of CAV 2021, LNCS 12759, pages 225-248, Springer, 2021.

[3] Teodora Baluta, Zheng Leong Chua, Kuldeep S. Meel, Prateek Saxena. Scalable Quantitative Verification For Deep Neural Networks. Proceedings of ICSE 2021, pages 312-323, IEEE, 2021.

3. Abstraction-refinement and symbolic approaches:

[1] Yizhak Yisrael Elboher, Justin Gottschlich, Guy Katz. An Abstraction-Based Framework for Neural Network Verification. Proceedings of CAV 2020, LNCS 12224, pages 43-65, Springer, 2020.

[2] Guoliang Dong, Jingyi Wang, Jun Sun, Yang Zhang, Xinyu Wang, Ting Dai, Jin Song Dong, Xingen Wang. Towards Interpreting Recurrent Neural Networks through Probabilistic Abstraction. Proceedings of ASE 2020, pages 499-510, IEEE, 2020.

[3] Pengfei Yang, Renjue Li, Jianlin Li, Cheng-Chao Huang, Jingyi Wang, Jun Sun, Bai Xue, Lijun Zhang. Improving Neural Network Verification through Spurious Region Guided Refinement. Proceedings of TACAS 2021 (1), LNCS 12651, pages 389-408, Springer, 2021.

[4] Luca Pulina, Armando Tacchella. An Abstraction-Refinement Approach to Verification of Artificial Neural Networks. Proceedings of CAV 2010, LNCS 6174, pages 243-257, Springer, 2010.

[5] Nicolas Berthier, Amany Alshareef, James Sharp, Sven Schewe, Xiaowei Huang. Abstraction and Symbolic Execution of Deep Neural Networks with Bayesian Approximation of Hidden Features. CoRR abs/2103.03704, 2021.

[6] Jianlin Li, Jiangchao Liu, Pengfei Yang, Liqian Chen, Xiaowei Huang, Lijun Zhang. Analyzing Deep Neural Networks with Symbolic Propagation: Towards Higher Precision and Faster Verification. Proceedings of SAS 2019, LNCS 11822, pages 296-319, Springer, 2019.

4. SAT/SMT-based verification:

[1] Guy Amir, Haoze Wu, Clark W. Barrett, Guy Katz. An SMT-Based Approach for Verifying Binarized Neural Networks. Proceedings of TACAS 2021 (2), LNCS 12652, pages 203-222, Springer, 2021.

[2] Nina Narodytska, Shiva Kasiviswanathan, Leonid Ryzhyk, Mooly Sagiv, Toby Walsh. Verifying Properties of Binarized Deep Neural Networks. Proceedings of AAAI 2018, pages 6615-6624, AAAI Press, 2018.

[3] Guy Katz, Clark W. Barrett, David L. Dill, Kyle Julian, Mykel J. Kochenderfer. Reluplex: An Efficient SMT Solver for Verifying Deep Neural Networks. Proceedings of CAV 2017, LNCS 10426, pages 97-117, Springer, 2017.

5. Abstract Interpretation:

[1] Timon Gehr, Matthew Mirman, Dana Drachsler-Cohen, Petar Tsankov, Swarat Chaudhuri, Martin T. Vechev. AI2: Safety and Robustness Certification of Neural Networks with Abstract Interpretation. Proceedings of S&P 2018, pages 3-18, IEEE Computer Society, 2018.

[2] Matthew Mirman, Timon Gehr, Martin T. Vechev. Differentiable Abstract Interpretation for Provably Robust Neural Networks. Proceedings of ICML 2018, pages 3575-3583, PMLR 80, 2018.

6. Approximation and optimization:

[1] Min Wu, Matthew Wicker, Wenjie Ruan, Xiaowei Huang, Marta Kwiatkowska. A game-based approximate verification of deep neural networks with provable guarantees. Theoretical Computer Science, vol. 807, pages 298-329, 2020.

[2] Yiting Wu, Min Zhang. Tightening Robustness Verification of Convolutional Neural Networks with Fine-Grained Linear Approximation. Proceedings of AAAI 2021, pages 11674-11681, AAAI Press, 2021.

[3] Cecilia Summers, Michael J. Dinneen. Nondeterminism and Instability in Neural Network Optimization. Proceedings of ICML 2021, PMLR 139:9913-9922, 2021.

[4] János D. Pintér. Calibrating artificial neural networks by global optimization. Expert Systems with Applications, vol. 39(1), pages 25-32, 2012.

7. Coverage:

[1] Youcheng Sun, Xiaowei Huang, Daniel Kroening, James Sharp, Matthew Hill, Rob Ashmore. Structural Test Coverage Criteria for Deep Neural Networks. ACM Transactions on Embedded Computing Systems, volume 18, 94:1-94:23, 2019.

[2] Fabrice Harel-Canada, Lingxiao Wang, Muhammad Ali Gulzar, Quanquan Gu, Miryung Kim. Is neuron coverage a meaningful measure for testing deep neural networks? Proceedings of ESEC/SIGSOFT FSE 2020, pages 851-862, ACM, 2020.

[3] Shenao Yan, Guanhong Tao, Xuwei Liu, Juan Zhai, Shiqing Ma, Lei Xu, Xiangyu Zhang. Correlations between deep neural network model coverage criteria and model quality. Proceedings of ESEC/SIGSOFT FSE 2020, pages 775-787, ACM, 2020.

8. Mutation testing and concolic testing:

[1] Lorenz Klampfl, Nour Chetouane, Franz Wotawa. Mutation Testing for Artificial Neural Networks: An Empirical Evaluation. Proceedings of QRS 2020, pages 356-365, IEEE, 2020.

[2] Jingyi Wang, Guoliang Dong, Jun Sun, Xinyu Wang, Peixin Zhang. Adversarial sample detection for deep neural network through model mutation testing. Proceedings of ICSE 2019, pages 1245-1256, IEEE/ACM, 2019.

[3] Youcheng Sun, Min Wu, Wenjie Ruan, Xiaowei Huang, Marta Kwiatkowska, Daniel Kroening. Concolic testing for deep neural networks. Proceedings of ASE 2018, pages 109-119, ACM, 2018.

9. Fuzzing:

[1] Augustus Odena, Catherine Olsson, David G. Andersen, Ian J. Goodfellow. TensorFuzz: Debugging Neural Networks with Coverage-Guided Fuzzing. Proceedings of ICML 2019, pages 4901-4911, 2019.

[2] Xiaofei Xie, Hongxu Chen, Yi Li, Lei Ma, Yang Liu, Jianjun Zhao. Coverage-Guided Fuzzing for Feedforward Neural Networks. Proceedings of ASE 2019, pages 1162-1165, 2019. (complete version on arxiv.org)

[3] Jianmin Guo, Yu Jiang, Yue Zhao, Quan Chen, Jiaguang Sun. DLFuzz: differential fuzzing testing of deep learning systems. Proceedings of ESEC/SIGSOFT FSE 2018, pages 739-743, ACM, 2018. (complete version on arxiv.org)