About me
I am a Senior Research Scientist at ServiceNow Research, specializing in post-training methods for computer-use agents. I see computer use as the ultimate playground for testing agents, thanks to its ubiquity and diversity. My research involves conducting large-scale empirical studies to systematically evaluate trade-offs among different approaches and to develop practical know-how, with reinforcement learning being a particular focus.
As a core contributor to the web-agent research library ecosystem, I actively shape evaluation frameworks (BrowserGym, WorkArena) and development platforms (AgentLab). My goal is to bridge foundational research and scalable tools to advance the field.
Previously, I completed my Ph.D. at the Quebec Artificial Intelligence Institute (Mila) under Professor Laurent Charlin. During my doctoral studies, I collaborated with DeepMind’s Continual Learning team led by Marc’Aurelio Ranzato, Amazon’s team under Alex Smola, and ElementAI prior to its integration with ServiceNow.
My Ph.D. research focused on building agents capable of accumulating and transferring knowledge across tasks, drawing from continual learning, transfer learning, and meta-learning. My work explored applications in language, vision, and reinforcement learning, emphasizing improvements in data and compute efficiency.
News
07/2023 Our new large-scale continual learning benchmark Nevis’22 was accepted at JMLR! Work done while interning at DeepMind.
05/2023 Our work on task-agnostic continual RL is accepted at CoLLAs 2023! Check out the 5-min video summary.
12/2021 Our comparative study of large language models in continual learning is accepted at ICLR 2022!
09/2021 Our work digging into gradient sparsity for meta and continual learning, Sparse-MAML, is accepted at NeurIPS 2021!
09/2021 Our work solving the task-inference problem in compositional continual learning, Local Model Composition, is accepted at NeurIPS 2021!
07/2021 Our work introducing DiVE, a counterfactual explanation method that goes beyond generating trivial counterfactuals, is accepted at ICCV 2021!
09/2020 Our work proposing a new approach to continual learning (OSAKA) is accepted at NeurIPS 2020!
09/2020 Our work proposing a synthetic dataset generator (Synbols) to probe different learning algorithms is accepted at NeurIPS 2020!
08/2020 I gave a talk at ContinualAI on our new approach to continual-learning evaluation, aimed at real-life deployment of continual-learning systems (OSAKA).
06/2020 I hosted a panel with Chelsea Finn, Chris Kanan, and Subutai Ahmad at our Continual Learning workshop at CVPR 2020! You can find it here.
06/2020 Our work on online continual compression, led by my brother, is accepted at ICML 2020! You can find an 18-min video here.
12/2019 Our workshop on Continual Learning at CVPR 2020 has been accepted! Watch out for our really cool competition.
12/2019 Our work on Language GANs Falling Short is accepted at ICLR 2020! You can find a 5-min video summary here.
09/2019 Our work on Online Continual Learning with Maximal Interfered Retrieval is accepted at NeurIPS 2019! You can find an 8-min video summary here.