As humans, we understand events in the visual world contextually, performing multimodal reasoning across time to make inferences about the past, present, and future. We introduce MERLOT, a model that learns multimodal script knowledge by watching millions of YouTube videos with transcribed speech – in an entirely label-free, self-supervised manner. By pretraining with a mix of both frame-level (spatial) and video-level (temporal) objectives, our model not only learns to match images to temporally corresponding words, but also to contextualize what is happening globally over time. As a result, MERLOT exhibits strong out-of-the-box representations of temporal commonsense, and achieves state-of-the-art performance on 12 different video QA datasets when finetuned. It also transfers well to the world of static images, allowing models to reason about the dynamic context behind visual scenes. On Visual Commonsense Reasoning, MERLOT answers questions correctly with 80.6% accuracy, outperforming state-of-the-art models of similar size by over 3%, even those that make heavy use of auxiliary supervised data (like object bounding boxes). Ablation analyses demonstrate the complementary importance of: 1) training on videos versus static images; 2) scaling the magnitude and diversity of the pretraining video corpus; and 3) using diverse objectives that encourage full-stack multimodal reasoning, from the recognition to cognition level.
Title of host publication: Advances in Neural Information Processing Systems 34 - 35th Conference on Neural Information Processing Systems, NeurIPS 2021
Editors: Marc'Aurelio Ranzato, Alina Beygelzimer, Yann Dauphin, Percy S. Liang, Jenn Wortman Vaughan
Publisher: Neural Information Processing Systems Foundation
Number of pages: 18
Publication status: Published - 2021
Event: 35th Conference on Neural Information Processing Systems, NeurIPS 2021 - Virtual, Online
Duration: 6 December 2021 → 14 December 2021
Series name: Advances in Neural Information Processing Systems
Conference: 35th Conference on Neural Information Processing Systems, NeurIPS 2021
Period: 6 December 2021 → 14 December 2021
Bibliographical note (Funding Information):
We thank the anonymous reviewers for their helpful feedback that improved this work, along with Oren Etzioni and Gabriel Ilharco. Thanks also to Zak Stone and the Google Cloud TPU team for providing access to the TPU machines used for conducting experiments, and for help with the computing infrastructure. Last, but not least, thanks to all the YouTubers who share interesting videos with the world. This work was funded by DARPA MCS program through NIWC Pacific (N66001-19-2-4031), and the Allen Institute for AI.
© 2021 Neural information processing systems foundation. All rights reserved.
All Science Journal Classification (ASJC) codes
- Computer Networks and Communications
- Information Systems
- Signal Processing