
Efficient Lifelong Learning Algorithms: Regret Bounds and Statistical Guarantees

Date: 2 October 2019, Time: 14:30

Venue: Meeting Room, 7th floor, via Celoria 18

Speaker: Giulia Denevi, Italian Institute of Technology, Genova

Contact person: Nicolò Cesa-Bianchi

Abstract

We study the problem of learning a series of tasks in a fully online Meta-Learning or Lifelong Learning setting. The goal is to exploit similarities among the tasks to incrementally adapt an inner online algorithm so as to incur a low average cumulative error over the tasks. We focus on a family of inner algorithms based on a parametrized variant of online Mirror Descent that aims to minimize the within-task regularized empirical risk. The inner algorithm is incrementally adapted by an online Mirror Descent meta-algorithm that uses the within-task minimum regularized empirical risk as the meta-loss. To keep the process fully online, we approximate the meta-subgradients by means of the online inner algorithm. An upper bound on the approximation error allows us to derive a cumulative error bound for the proposed method. Our analysis can also be converted to the statistical setting via online-to-batch arguments. We instantiate two examples of the framework in which the parameter of the inner algorithm is either a bias vector or a common feature map. Finally, preliminary numerical experiments confirm our theoretical findings.
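To make the bias-vector instantiation more concrete, below is a minimal Python sketch of the fully online scheme described in the abstract, under simplifying assumptions that are not taken from the talk: the within-task loss is a squared loss, the inner online Mirror Descent reduces to online gradient descent (Euclidean potential), the meta-subgradient is approximated as lam * (bias - w_bar) with w_bar the inner average iterate, and all function names, step sizes, and synthetic data are illustrative placeholders rather than the speaker's actual method or code.

```python
import numpy as np

def inner_online_gd(X, y, bias, lam, eta):
    """Within-task online gradient descent on the biased-regularized
    squared loss  (1/2)(<w, x_i> - y_i)^2 + (lam/2)||w - bias||^2.
    Returns the average iterate as a cheap proxy for the task solution."""
    w = bias.copy()
    iterates = []
    for x_i, y_i in zip(X, y):
        grad = (w @ x_i - y_i) * x_i + lam * (w - bias)
        w = w - eta * grad
        iterates.append(w.copy())
    return np.mean(iterates, axis=0)

def meta_online_gd(tasks, dim, lam=1.0, eta_in=0.1, eta_meta=0.05):
    """Meta-level online gradient descent on the bias vector.
    The meta-subgradient of the within-task minimum regularized risk is
    approximated by lam * (bias - w_bar), where w_bar comes from the
    inner online algorithm (step sizes are illustrative assumptions)."""
    bias = np.zeros(dim)
    for X, y in tasks:
        w_bar = inner_online_gd(X, y, bias, lam, eta_in)
        bias = bias - eta_meta * lam * (bias - w_bar)
    return bias

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    dim, n_tasks, n_points = 5, 50, 30
    common = rng.normal(size=dim)                      # shared component across tasks
    tasks = []
    for _ in range(n_tasks):
        w_true = common + 0.1 * rng.normal(size=dim)   # tasks clustered around a common vector
        X = rng.normal(size=(n_points, dim))
        y = X @ w_true + 0.01 * rng.normal(size=n_points)
        tasks.append((X, y))
    learned_bias = meta_online_gd(tasks, dim)
    print("distance of learned bias from common vector:",
          np.linalg.norm(learned_bias - common))
```

In this toy setup the tasks are linear-regression problems whose target vectors cluster around a common vector, so a well-adapted bias should move toward that common vector and make each within-task online run start closer to its solution, which is the kind of benefit the cumulative error bounds in the talk quantify.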

Bio sketch

Giulia Denevi is a PhD student at the Computational Statistics and Machine Learning Department of the Italian Institute of Technology and the Department of Mathematics of the University of Genoa. She received an MSc in Applied Mathematics from the University of Genoa in 2016. Her research interests lie in Meta-Learning, Lifelong Learning, Online Learning, Statistical Learning Theory, and Optimization.

 

2 September 2019