I’m interested in algorithms that can accumulate transferable knowledge or skills, enabling generalization to future tasks. Accordingly, my research topics lie in continual learning and meta-learning. My recent work proposes a new and more realistic approach to continual learning at the intersection of both fields.
Recently, I have developed a particular interest in the idea of composing existing skills to learn new ones quickly. I believe this is the real appeal of continual learning and that it can propel reinforcement learning. Consequently, I’m currently focused on continual RL.
09/2020 My work proposing a new approach to continual learning (OSAKA) has been accepted at NeurIPS 2020!
09/2020 Our work proposing a synthetic dataset generator (Synbols) to probe different learning algorithms has been accepted at NeurIPS 2020!
06/2020 I hosted a panel with Chelsea Finn, Chris Kanan and Subutai Ahmad at our Continual Learning workshop at CVPR 2020! You can find it here
12/2019 Our workshop on Continual Learning at CVPR 2020 has been accepted! Watch out for our really cool competition.
09/2019 My work on Online Continual Learning with Maximal Interfered Retrieval was accepted at NeurIPS 2019! You can find an 8-min video summary here