About me
I recently completed my Ph.D. at the Quebec Artificial Intelligence Institute (Mila) under the guidance of Professor Laurent Charlin. My academic journey has been marked by collaborations with globally renowned institutions, including DeepMind, where I worked within the Continual Learning team led by Marc’Aurelio Ranzato, Amazon, on Alex Smola’s team, and now ServiceNow, where I am a visiting researcher. I also had the privilege of contributing to Element AI before its integration with ServiceNow.
At the heart of my research is the development of algorithms that accumulate and transfer knowledge and skills to improve generalization across varied tasks. My interest in data and computational efficiency has directed my studies toward continual, transfer, and meta-learning, with a particular emphasis on applications spanning language, vision, and reinforcement learning.
Currently, at ServiceNow, I’m applying these principles by working with pre-trained Large Language Models (LLMs) to create computer task-solving agents. These agents, leveraging the vast knowledge and adaptability of LLMs, aim to navigate and master various computer tasks more efficiently. This work represents an exciting step towards enhancing the capabilities of automated solutions in different sectors.
Also, check out our software to unify continual-learning research, Sequoia, as well as my continual-learning wiki.
News
07/2023 Our new large-scale continual learning benchmark Nevis’22 was accepted for publication in JMLR! Work done while interning at DeepMind.
05/2023 Our work on task-agnostic continual RL is accepted at CoLLAs 2023! Check out the 5-min video summary.
12/2021 Our comparative study of large language models in continual learning is accepted at ICLR 2022!
09/2021 Our work digging into gradient sparsity for meta and continual learning, Sparse-MAML, is accepted at NeurIPS 2021!
09/2021 Our work solving the task-inference problem in compositional continual learning, Local Module Composition, is accepted at NeurIPS 2021!
07/2021 Our work introducing DiVE, a counterfactual explanation method that goes beyond generating trivial counterfactuals, is accepted at ICCV 2021!
09/2020 Our work proposing a new approach to continual learning (OSAKA) is accepted at NeurIPS 2020!
09/2020 Our work proposing a synthetic dataset generator (Synbols) to probe different learning algorithms is accepted at NeurIPS 2020!
08/2020 I gave a talk on our new approach to continual-learning evaluation towards real-life deployment of continual-learning systems (OSAKA) at ContinualAI.
06/2020 I hosted a panel with Chelsea Finn, Chris Kanan and Subutai Ahmad at our Continual Learning workshop at CVPR 2020! You can find it here
06/2020 Our work on online continual compression led by my brother is accepted at ICML 2020! You can find an 18-min video here
12/2019 Our workshop on Continual Learning at CVPR 2020 has been accepted! Watch out for our really cool competition.
12/2019 Our work on Language GANs Falling Short was accepted at ICLR 2020! You can find a 5-min video summary here
09/2019 Our work on Online Continual Learning with Maximal Interfered Retrieval is accepted at NeurIPS 2019! You can find an 8-min video summary here