Speaker: Dr. Bai Xue (Institute of Software, Chinese Academy of Sciences)
Location: Room 1303, Science Building No. 1
Abstract: Given a family of independent and identically distributed samples extracted from the input region, together with their corresponding outputs, in this paper we propose a method to under-approximate the set of safe inputs, i.e., those inputs that lead the black-box system to respect a given safety specification. Our method falls within the framework of probably approximately correct (PAC) learning, and the computed under-approximation comes with the statistical soundness provided by the underlying PAC learning process. Such a set, which we call a PAC under-approximation, is obtained by computing a PAC model of the black-box system with respect to the given safety specification. In our method, the PAC model is computed via the scenario approach, which encodes the computation as a linear program constructed from the given family of input samples and their corresponding outputs. The size of the linear program does not depend on the dimension of the state space of the black-box system, which provides scalability. Moreover, the linear program does not depend on the internal mechanism of the black-box system, so the method is applicable to systems that existing approaches cannot handle. Case studies demonstrate these properties, as well as the general performance and usefulness of our approach.
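To give a flavor of the scenario-approach construction mentioned in the abstract, the following is a minimal, hypothetical sketch (not the speaker's actual implementation): a scalar black-box function, an affine model template, the input region [-1, 1]^2, and the safety threshold are all illustrative assumptions. The linear program minimizes the worst-case model error over the i.i.d. samples; note that its size depends only on the number of samples and the input dimension, not on the system's internal state space.

```python
import numpy as np
from scipy.optimize import linprog

rng = np.random.default_rng(0)

# Hypothetical black-box system: we may only sample it, not inspect it.
def black_box(x):
    return x @ np.array([1.5, -0.5]) + 0.1 * np.sin(5 * x[0])

# i.i.d. input samples from the (assumed) input region [-1, 1]^2.
N, n = 200, 2
X = rng.uniform(-1.0, 1.0, size=(N, n))
y = np.array([black_box(x) for x in X])

# Scenario LP over decision variables z = [c (n), d, gamma]:
#   minimize gamma  s.t.  |c . x_i + d - y_i| <= gamma  for every sample i.
c_obj = np.zeros(n + 2)
c_obj[-1] = 1.0
ones = np.ones((N, 1))
A_ub = np.vstack([
    np.hstack([X, ones, -ones]),    #  (c . x_i + d) - gamma <=  y_i
    np.hstack([-X, -ones, -ones]),  # -(c . x_i + d) - gamma <= -y_i
])
b_ub = np.concatenate([y, -y])
res = linprog(c_obj, A_ub=A_ub, b_ub=b_ub,
              bounds=[(None, None)] * (n + 1) + [(0, None)])
c_fit, d_fit, gamma = res.x[:n], res.x[n], res.x[n + 1]

# With a (hypothetical) safety threshold T on the output, the inputs where
# the model plus its error bound stays below T form a candidate safe set;
# the PAC guarantee on this set is statistical, holding with a confidence
# derived from the sample size N, not a hard certificate.
T = 1.0
safe_mask = X @ c_fit + d_fit + gamma <= T
```

The LP has only n + 2 decision variables and 2N constraints here, which is why the approach scales independently of the black box's internal state dimension.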
Bio: Dr. Bai Xue has been an associate research professor at the State Key Laboratory of Computer Science, Institute of Software, Chinese Academy of Sciences, since November 2017. He received the B.Sc. degree in information and computing science from Tianjin University of Technology and Education in 2008, and the Ph.D. degree in applied mathematics from Beihang University in 2014. Prior to joining the Institute of Software, he worked as a research fellow in the Centre for High Performance Embedded Systems at Nanyang Technological University from May 2014 to September 2015, and as a postdoc in the Department für Informatik at Carl von Ossietzky Universität Oldenburg from November 2015 to October 2017. His research interests include, but are not limited to, safety verification of (time-delay/stochastic) hybrid systems and safety verification of artificial intelligence.