Date: 27 October 2021
Venue: Room 5016 (Lab LM). Badge required for in-person attendance. Admittance not allowed if maximum capacity is exceeded. A link for remote attendance will be provided in a reminder message a few days before the talk.
Speaker: Nikita Zhivotovskiy (ETH)
Abstract: We discuss how adding an option to abstain can significantly speed up convergence rates in online and active learning, with minimal or no assumptions on the data-generating mechanism. For example, we show that in pool-based active classification without assumptions on the underlying distribution, if the learner is given the power to abstain from some predictions by paying a price marginally smaller than 1/2, the expected loss of a random guess, then exponential savings in the number of label requests are possible whenever they are possible in the corresponding realizable problem. Similarly, in sequential prediction with a finite number of experts, the abstention option allows for constant regret even though the loss is not mixable and the setup is not realizable. The talk is based on a series of works: arXiv:1910.12756, arXiv:2001.10623, arXiv:2102.00451.
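To build intuition for why an abstention cost just below 1/2 helps, here is a minimal toy sketch (not the algorithm from the papers, and assuming a realizable stream for simplicity): a halving-style learner over a finite set of experts that abstains whenever the surviving experts disagree. Every non-abstain round is then correct, and each abstain round eliminates at least one expert, so the total loss is bounded by (K - 1) times the abstention cost, independent of the horizon.

```python
# Toy illustration (hypothetical, not the paper's algorithm):
# halving with abstention over K experts, realizable case.

def run_with_abstention(expert_preds, labels, abstain_cost=0.49):
    """expert_preds: list of T rounds, each a list of K binary predictions.
    labels: list of T binary labels. Assumes some expert is always correct."""
    k = len(expert_preds[0])
    alive = set(range(k))            # experts still consistent with the data
    total_loss = 0.0
    for preds, y in zip(expert_preds, labels):
        votes = {preds[i] for i in alive}
        if len(votes) == 1:          # all surviving experts agree: predict
            total_loss += float(votes.pop() != y)
        else:                        # disagreement: abstain, pay < 1/2
            total_loss += abstain_cost
        # drop every expert that was wrong this round
        alive = {i for i in alive if preds[i] == y}
    return total_loss
```

Since each abstention removes at least one expert, the learner abstains at most K - 1 times and never mispredicts, giving constant total loss regardless of the number of rounds.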
Bio sketch: Nikita Zhivotovskiy is a postdoctoral researcher at the Department of Mathematics, ETH Zürich. Before that he was a postdoctoral researcher at Google Research, Brain Team, Zürich, and a visitor at the Department of Mathematics, Technion. His main interests lie at the intersection of mathematical statistics, probability, and learning theory.
Contact person: Nicolò Cesa-Bianchi