Minimax Risk Bounds for Piecewise Constant Models
Speaker: Fang Han (University of Washington)
Time: 2016-12-15, 9:00-10:00
Venue: Room 1303, Science Building No. 1
Consider a sequence of data points $X_1,\dots,X_n$ whose underlying mean $\theta^*$ is piecewise constant with at most $k^*$ pieces, where $k^*$ is unknown. We establish sharp nonasymptotic risk bounds for the least squares estimator (LSE) for estimating $\theta^*$. The main results are twofold. First, when no additional shape constraint is assumed, we reveal a new phase transition for the risk of the LSE: as $k^*$ increases from 2, the rate changes from $\log\log n$ to $k^*\log(en/k^*)$. Second, when $\theta^*$ is further assumed to be nondecreasing, we show the rate improves to $k^*\log\log(16n/k^*)$ over the full range $1\le k^*\le n$. These bounds are sharp in the sense that they match the minimax lower bounds of the studied problems, without sacrificing any logarithmic factor. The obtained results complement their counterparts in the change-point detection literature, reveal an interesting phenomenon that stands in contrast to Kiefer's Theorem (Kiefer, 1982), and fill some notable gaps in recent discoveries relating isotonic regression to piecewise constant models. The techniques developed in the proofs build on Lévy's partial sums and Doob's martingale theory; they are of independent interest and may have potential applications to the study of other shape-constrained regression problems. This is joint work with Dr. Chao Gao and Dr. Cun-Hui Zhang.
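For intuition about the estimator studied in the talk: the LSE over piecewise constant vectors with at most $k$ pieces can be computed exactly by dynamic programming over change points. The sketch below is illustrative only (the function name and the standard $O(kn^2)$ segmentation DP are my choices, not taken from the talk); it fits each candidate segment by its mean using prefix sums and backtracks to recover the fitted vector.

```python
import numpy as np

def piecewise_constant_lse(x, k):
    """Least squares fit of x by a piecewise constant vector
    with at most k pieces, computed by dynamic programming.
    Returns the fitted vector theta_hat (same length as x)."""
    n = len(x)
    s = np.concatenate(([0.0], np.cumsum(x)))              # prefix sums
    s2 = np.concatenate(([0.0], np.cumsum(np.square(x))))  # prefix sums of squares

    def seg_cost(i, j):
        # Squared-error cost of fitting x[i:j] by its mean:
        # sum(x[i:j]**2) - (sum(x[i:j]))**2 / (j - i)
        m = s[j] - s[i]
        return (s2[j] - s2[i]) - m * m / (j - i)

    INF = float("inf")
    # cost[p][j]: best cost of fitting x[:j] using at most p pieces
    cost = [[INF] * (n + 1) for _ in range(k + 1)]
    back = [[0] * (n + 1) for _ in range(k + 1)]
    cost[0][0] = 0.0
    for p in range(1, k + 1):
        cost[p][0] = 0.0
        for j in range(1, n + 1):
            for i in range(j):
                if cost[p - 1][i] < INF:
                    c = cost[p - 1][i] + seg_cost(i, j)
                    if c < cost[p][j]:
                        cost[p][j] = c
                        back[p][j] = i

    # Backtrack: fill each recovered segment with its sample mean
    theta = np.empty(n)
    j, p = n, k
    while j > 0:
        i = back[p][j]
        theta[i:j] = (s[j] - s[i]) / (j - i)
        j, p = i, p - 1
    return theta
```

For example, `piecewise_constant_lse(np.array([1.0, 1.0, 5.0, 5.0]), 2)` recovers the two-piece mean vector `[1, 1, 5, 5]` exactly. The risk bounds in the talk concern how well this estimator tracks $\theta^*$ under noise.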