CSoI Seminar: Prediction and Learning by Changlong Wu
Join us March 31, 2021, at 4 PM (EDT) for a virtual seminar by Ph.D. candidate Changlong Wu, University of Hawaii at Manoa.
Zoom Link: https://purdue-edu.zoom.us/j/99662717547
Abstract: Using only samples from a probabilistic model, we predict properties of the model and of future observations. The prediction game continues in an online fashion as the sample size grows with new observations. After each prediction, the predictor incurs a binary (0-1) loss. The probability model underlying a sample is otherwise unknown except that it belongs to a known class of models. The goal is to make only finitely many errors (i.e., a loss of 1) with probability 1 under the generating model, no matter which model in the known class generated the data.
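To make the game concrete, here is a minimal illustrative sketch (an assumption of ours, not a construction from the talk): a toy model class consisting of two Bernoulli models, where the predicted "property" is which model generates the data. The function name `play_game` and the specific class {Bernoulli(0.1), Bernoulli(0.9)} are hypothetical choices for illustration only.

```python
import random

# Toy online prediction game over the hypothetical model class
# {Bernoulli(0.1), Bernoulli(0.9)}.  Each round the predictor guesses
# which model generates the data (a property of the model), incurs 0-1
# loss on that guess, then observes one more sample.  By the law of
# large numbers, this empirical-mean predictor errs only finitely often
# almost surely, so this class is eas predictable.

MODELS = (0.1, 0.9)

def play_game(true_p, rounds, seed=0):
    rng = random.Random(seed)
    ones = 0
    errors = 0
    guess = MODELS[0]              # arbitrary initial guess before any data
    for n in range(1, rounds + 1):
        if guess != true_p:        # 0-1 loss on this round's prediction
            errors += 1
        ones += rng.random() < true_p   # observe one more sample
        mean = ones / n
        # Predict the model whose parameter is closest to the empirical mean.
        guess = MODELS[1] if mean >= 0.5 else MODELS[0]
    return errors, guess

errors, final_guess = play_game(true_p=0.9, rounds=1000)
```

In this sketch the losses are observable (the supervised case): the predictor is told after each round whether its guess was right.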
Model classes admitting predictors that make only finitely many errors are eventually almost surely (eas) predictable. When the losses incurred are observable (the supervised case), we completely characterize eas predictable classes. We provide analogous results in the unsupervised case. Our results have a natural interpretation in terms of regularization. For eas predictable classes, we study whether there is a universal stopping rule that identifies (to any given confidence) when no more errors will be made. Classes admitting such a stopping rule are eas learnable. When samples are generated i.i.d., we provide a complete characterization of eas learnability. We also study cases in which samples are not generated i.i.d., though a full characterization there remains open at this point.
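Continuing the toy Bernoulli example above, a stopping rule can be sketched as follows (again an illustrative assumption, not the talk's construction): spend a confidence budget of delta/(n*(n+1)) at round n (these budgets sum to delta over all rounds), and stop once an anytime-valid Hoeffding bound separates the empirical mean from 1/2. With confidence at least 1 - delta the declared model is correct, after which committing to it yields no further errors. The name `stopping_rule` and all parameters are hypothetical.

```python
import math
import random

# Hedged sketch of a stopping rule for the hypothetical model class
# {Bernoulli(0.1), Bernoulli(0.9)}.  A union bound over Hoeffding's
# inequality with per-round budget delta/(n*(n+1)) makes the guarantee
# valid at the (data-dependent) stopping time.

def stopping_rule(true_p, delta=0.05, max_rounds=1000, seed=0):
    rng = random.Random(seed)
    ones = 0
    for n in range(1, max_rounds + 1):
        ones += rng.random() < true_p
        mean = ones / n
        # Two-sided Hoeffding radius at confidence delta/(n*(n+1)).
        radius = math.sqrt(math.log(2 * n * (n + 1) / delta) / (2 * n))
        if abs(mean - 0.5) > radius:
            declared = 0.9 if mean >= 0.5 else 0.1
            return n, declared
    return None, None   # never confident enough within max_rounds

stop_time, declared = stopping_rule(true_p=0.9)
```

Note the distinction the abstract draws: a predictor alone guarantees finitely many errors without ever knowing when they have stopped; the stopping rule additionally certifies, to the given confidence, that no more errors will occur.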
Bio: Changlong Wu is a Ph.D. candidate at the University of Hawaii at Manoa. He obtained a B.S. in Computer Science from Wuhan University in 2015. His primary interest is in algorithmic and theoretical problems in statistics, estimation theory, and machine learning.