Time: 14:30-15:30, Nov. 22nd, 2018, Monday
Venue: Room A1716, Science Building, North Zhongshan Road Campus
Speaker: Prof. Weidong Liu, Institute of Natural Sciences, Shanghai Jiao Tong University
Abstract: This paper studies the inference problem in quantile regression (QR) with a large sample size $n$ under a limited memory constraint, where the memory can only store a small batch of data of size $m$. A natural method is the naive divide-and-conquer approach, which splits the data into batches of size $m$, computes the local QR estimator for each batch, and then aggregates the estimators via averaging. However, this method only works when $n=o(m^2)$ and is computationally expensive. This paper proposes a computationally efficient method, which only requires an initial QR estimator on a small batch of data and then successively refines the estimator via multiple rounds of aggregation. Theoretically, as long as $n$ grows polynomially in $m$, we establish the asymptotic normality of the resulting estimator and show that, with only a few rounds of aggregation, it achieves the same efficiency as the QR estimator computed on all the data. Moreover, our results allow the dimensionality $p$ to grow to infinity. The proposed method can also be applied to the QR problem in a distributed computing environment (e.g., a large-scale sensor network) or to real-time streaming data.
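For a concrete picture of the baseline discussed in the abstract, below is a minimal Python sketch of the naive divide-and-conquer averaging approach only (not the refined multi-round aggregation the talk proposes). The function name dc_quantile_regression, the batch size, the quantile level tau, and the use of statsmodels' QuantReg are illustrative assumptions, not part of the paper.

```python
# Naive divide-and-conquer (DC) baseline for quantile regression:
# split data into batches of size m, fit a local QR estimator on each
# batch, and aggregate the local estimates by averaging.
import numpy as np
from statsmodels.regression.quantile_regression import QuantReg

def dc_quantile_regression(X, y, m, tau=0.5):
    """Average local QR estimates over batches of size m (naive DC baseline)."""
    n = len(y)
    estimates = []
    for start in range(0, n, m):
        Xb, yb = X[start:start + m], y[start:start + m]
        res = QuantReg(yb, Xb).fit(q=tau)   # local QR fit on one batch
        estimates.append(np.asarray(res.params))
    return np.mean(estimates, axis=0)       # aggregate by averaging

# Toy usage: n = 10,000 observations, p = 3 covariates, batches of m = 1,000.
rng = np.random.default_rng(0)
X = np.column_stack([np.ones(10_000), rng.normal(size=(10_000, 2))])
beta = np.array([1.0, 2.0, -1.0])
y = X @ beta + rng.standard_t(df=3, size=10_000)   # heavy-tailed noise, median 0
print(dc_quantile_regression(X, y, m=1_000, tau=0.5))
```

As the abstract notes, this averaging baseline is only valid when $n=o(m^2)$; the method presented in the talk replaces it with an initial estimator on one small batch followed by a few rounds of refinement.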
Speaker’s Bio: Weidong Liu is a Professor at the Institute of Natural Sciences, Shanghai Jiao Tong University. He received his Ph.D. from Zhejiang University in 2008 and conducted postdoctoral research at the Hong Kong University of Science and Technology and at the Wharton School of the University of Pennsylvania from 2008 to 2011. In 2018 he was awarded the National Science Fund for Distinguished Young Scholars. His research focuses on statistical inference for high-dimensional data, and he has published more than forty papers in the four top statistics journals (AOS, JASA, JRSSB, Biometrika) and in top probability journals such as AOP and PTRF.