- Reconstruction of general dynamic scenes is motivated by potential applications in film and broadcast production, together with the ultimate goal of automatic understanding of real-world scenes from distributed camera networks. With recent advances in hardware and the advent of virtual and augmented reality, dynamic scene reconstruction is being applied to more complex scenes, with applications in entertainment, games, film, the creative industries and AR/VR/MR. We welcome contributions to this workshop in the form of oral presentations and posters. Suggested topics include, but are not limited to:
- Dynamic 3D reconstruction from single, stereo or multiple views
- Learning-based methods in dynamic scene reconstruction and understanding
- Multi-modal dynamic scene modelling (RGBD, LIDAR, 360 video, light fields)
- 4D reconstruction and modelling
- 3D/4D data acquisition, representation, compression and transmission
- Scene analysis and understanding in 2D and 3D
- Structure from motion, camera calibration and pose estimation
- Digital humans: motion and performance capture, bodies, faces, hands
- Geometry processing
- Computational photography
- Appearance and reflectance modelling
- Scene modelling in the wild, moving cameras, handheld cameras
- Applications of dynamic scene reconstruction (VR/AR, character animation, free-viewpoint video, relighting, medical imaging, creative content production, animal tracking, HCI, sports)
2. 3rd Workshop and Challenge on Learned Image Compression (9:00 AM-5:00 PM)
3. 🍃 CLVISION 1st Workshop on Continual Learning in Computer Vision (9:00 AM-5:00 PM)
- We solicit paper submissions on novel methods and application scenarios of continual learning. We accept papers on a variety of topics, including lifelong learning, few-shot learning, meta-learning, incremental learning, online learning, multi-task learning, etc.
- Razvan Pascanu (DeepMind)
- Chelsea Finn (Berkeley) 🦄
- Cordelia Schmid (INRIA)
- Davide Maltoni (U of Bologna)
- Christopher Kanan (PAIGE, RIT & Cornell Tech)
- Gemma Roig (SUTD & MIT) 🐇
- Subutai Ahmad (Numenta)
4. Deep Declarative Networks (9:00 AM-5:00 PM)
- Conventional deep learning architectures involve composition of simple feedforward processing functions that are explicitly defined. Recently, researchers have been exploring deep learning models with implicitly defined components. To distinguish these from conventional deep learning models we call them deep declarative networks, borrowing nomenclature from the programming languages community (Gould et al., 2019). This workshop explores the advantages (and potential shortcomings) of declarative networks and their variants, bringing ideas developed in different contexts under the umbrella of deep declarative networks. We will discuss technical issues that need to be overcome in developing such models, and applications of these models to computer vision problems that show benefit over conventional approaches; a toy declarative node is sketched after the topic list below. Topics include:
- Declarative end-to-end learnable processing nodes
- Differentiable constrained and unconstrained (non-convex) optimization problems
- Differentiable convex optimization problems
- Imposing hard constraints in deep learning models
- Back-propagation through physical models
- Applications of the above to problems in computer vision such as differentiable rendering, differentiable 3D models, reinforcement learning, action recognition, meta-learning, etc.
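- To make the idea concrete, here is a minimal sketch of a declarative node, assuming PyTorch and a toy inner objective f(x, y) = 0.5·||y − x||² + 0.25·||y||⁴ of my own choosing (not from the workshop materials). The forward pass solves y*(x) = argmin_y f(x, y) with Newton's method; the backward pass differentiates through the argmin via the implicit function theorem instead of unrolling the solver:

```python
# Hypothetical toy example of a deep declarative node for
#   y*(x) = argmin_y 0.5*||y - x||^2 + 0.25*||y||^4
import torch

def grad_y(x, y):
    # Gradient of the inner objective w.r.t. y: (y - x) + (y.y) y
    return (y - x) + (y @ y) * y

def hess_yy(y):
    # Hessian of the inner objective w.r.t. y: (1 + y.y) I + 2 y y^T
    n = y.numel()
    return (1.0 + y @ y) * torch.eye(n) + 2.0 * torch.outer(y, y)

class DeclarativeNode(torch.autograd.Function):
    @staticmethod
    def forward(ctx, x):
        y = x.detach().clone()
        for _ in range(50):  # Newton iterations on the inner problem
            y = y - torch.linalg.solve(hess_yy(y), grad_y(x.detach(), y))
        ctx.save_for_backward(y)
        return y

    @staticmethod
    def backward(ctx, grad_out):
        # Implicit function theorem at the optimum (grad_y f = 0):
        # dy*/dx = -H_yy^{-1} H_yx, and H_yx = -I here, so
        # dL/dx = H_yy^{-1} dL/dy (H_yy is symmetric).
        (y,) = ctx.saved_tensors
        return torch.linalg.solve(hess_yy(y), grad_out)

x = torch.randn(5, requires_grad=True)
y = DeclarativeNode.apply(x)
y.sum().backward()
print(x.grad)  # gradient through the argmin, no solver unrolling
```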
- Brandon Amos (FAIR)
- Ricky Tian Qi Chen (UoT)
- Chelsea Finn (Stanford)
- Pascal Fua (EPFL)
- Subhransu Maji (UMass Amherst)
- Zico Kolter (CMU)
5. 🍃 The 2nd Learning from Imperfect Data (LID) Workshop - Weakly Supervised Learning for Real-World (9:00 AM-5:00 PM)
Not decided yet.
6. 16th IEEE CVPR Workshop on Perception Beyond the Visible Spectrum (9:00 AM-5:00 PM)
7. 🍃 3D HUMANS 2020 3rd International Workshop on HUman pose, Motion, Activities aNd Shape in 3D (9:00 AM-5:00 PM)
Not decided yet.
8. 🍃 Deep Learning for Geometric Computing Workshop (9:00 AM-5:00 PM)
- This workshop aims to bring together researchers from computer vision, computer graphics, and mathematics to advance the state of the art in topological and geometric shape analysis using deep learning.
- Boundary extraction from 2D/3D shapes
- Geometric deep learning on 3D and higher dimensions
- Generative methods for parametric representations
- Novel shape descriptors and embeddings for geometric deep learning
- Deep learning on non-Euclidean geometries
- Transformation invariant shape abstractions
- Shape abstraction in different domains
- Synthetic data generation for data augmentation in geometric deep learning
- Comparison of shape representations for efficient deep learning
- Novel kernels and architectures specifically for 3D generative models
- Eigen-spectra analysis and graph-based approaches for 3D data
- Applications of geometric deep learning in different domains
- Learning-based estimation of shape differential quantities
- Detection of geometric feature lines from 3D data, including 3D point clouds and depth images
- Geometric shape segmentation, including patch decomposition and sharp lines detection
9. 🍃 Diagram Image Retrieval and Analysis (DIRA) Workshop and Challenge: Representation, Learning, and Similarity Metrics (9:00 AM-5:00 PM)
- William T. Freeman (MIT)
- Devi Parikh (Georgia Tech/FAIR)
- Timothy Hospedales (U of Edinburgh)
- Adriana Kovashka (Pittsburgh)
- Rogerio Feris (IBM Watson)
- Qiuhong Ke (U of Melbourne)
- Lingfei Wu (IBM Watson)
- Ranjay Krishna (Stanford)
10. EarthVision: Large Scale Computer Vision for Remote Sensing Imagery (9:00 AM-5:00 PM)
11. 🍃 Embodied AI (9:00 AM-5:00 PM)
- There is an emerging paradigm shift from ‘Internet AI’ towards ‘Embodied AI’ in the computer vision, NLP, and broader AI communities. In contrast to Internet AI’s focus on learning from datasets of images, videos, and text curated from the internet, embodied AI enables learning through interaction with the surrounding environment.
Not provided yet.
12. International Challenge on Activity Recognition (ActivityNet) (9:00 AM-5:00 PM)
Not decided yet.
13. International Workshop and Challenge on Computer Vision for Physiological Measurement (9:00 AM-5:00 PM)
14. Joint workshop on Long Term Visual Localization, Visual Odometry and Geometric and Learning-based SLAM (9:00 AM-5:00 PM)
15. 🍃 Learning 3D Generative Models (9:00 AM-5:00 PM)
- Thus far, the vision community's attention has mostly focused on generative models of 2D images. However, in computer graphics, there has been a recent surge of activity in generative models of three-dimensional content: learnable models which can synthesize novel 3D objects, or even larger scenes composed of multiple objects. As the vision community turns from passive, internet-image-based vision toward more embodied vision tasks, these kinds of 3D generative models become increasingly important: as unsupervised feature learners, as training data synthesizers, as a platform to study 3D representations for 3D vision tasks, and as a way of equipping an embodied agent with a 3D `imagination' about the kinds of objects and scenes it might encounter. (A minimal loss-function sketch follows the topic list below.)
- Generative models for 3D shape and 3D scene synthesis
- Generating 3D shapes and scenes from real world data (images, videos, or scans)
- Representations for 3D shapes and scenes
- Unsupervised feature learning for embodied vision tasks via 3D generative models
- Training data synthesis/augmentation for embodied vision tasks via 3D generative models
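- As a concrete illustration (my sketch, not the workshop's material), many point-cloud generative models are trained with the Chamfer distance, a permutation-invariant, differentiable set-to-set loss. A minimal PyTorch version with a toy generator and a random stand-in for a scanned shape:

```python
# Hypothetical toy setup: train a generator to match a target point cloud
# using the (symmetric, mean-reduced) Chamfer distance. Some papers use
# squared distances instead; this is one common variant.
import torch

def chamfer_distance(pred, target):
    """pred: (N, 3), target: (M, 3) point clouds."""
    d = torch.cdist(pred, target)  # (N, M) pairwise Euclidean distances
    return d.min(dim=1).values.mean() + d.min(dim=0).values.mean()

# Toy generator: map a latent code to a fixed-size point cloud
gen = torch.nn.Sequential(
    torch.nn.Linear(64, 256), torch.nn.ReLU(),
    torch.nn.Linear(256, 1024 * 3),
)
z = torch.randn(64)
target = torch.rand(2048, 3)  # stand-in for a scanned shape
opt = torch.optim.Adam(gen.parameters(), lr=1e-3)
for step in range(100):
    opt.zero_grad()
    pred = gen(z).view(1024, 3)
    loss = chamfer_distance(pred, target)
    loss.backward()
    opt.step()
print(loss.item())
```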
- Jitendra Malik (UCB)
- Leonidas J. Guibas (Stanford)
- Sanja Fidler (UoT/NVIDIA)
- Daniel G. Aliaga (Purdue)
- Evangelos Kalogerakis (U of Massachusetts Amherst)
- Jiajun Wu (Google Research/Stanford) 🐈
- Vladimir Kim (Adobe Research)
- Matt Fisher (Adobe Research)
- Georgia Gkioxari (FAIR)
- Paul Guerrero (Adobe Research)
16. 2nd Workshop on Safe Artificial Intelligence for Automated Driving (9:00 AM-5:00 PM)
17. The 1st International Workshop and Prize Challenge on Agriculture-Vision: Challenges & Opportunities for Computer Vision in Agriculture (9:00 AM-5:00 PM)
18. Vision for all Seasons: Adverse Weather and Lighting Conditions (9:00 AM-5:00 PM)
19. Visual Question Answering and Dialog (9:00 AM-5:00 PM)
- Danna Gurari (UT Austin)
- Felix Hill (DeepMind)
- Douwe Kiela (FAIR)
- Anna Rohrbach (UCB)
- Mateusz Malinowski (DeepMind)
- Amanpreet Singh (FAIR)
- Nassim Parvin (Georgia Tech)
- Ani Kembhavi (UW)
- Jiasen Lu (Georgia Tech)
- Dimosthenis Karatzas (U Autónoma de Barcelona)
Not decided yet.
1. A Comprehensive Tutorial on Video Modeling (9:00 AM-1:00 PM)
2. Neuro-Symbolic Visual Reasoning and Program Synthesis (9:00 AM-1:00 PM)
- Jiayuan Mao (MIT)
- Kevin M Ellis (MIT)
- Chuang Gan (MIT-Watson AI Lab)
- Jiajun Wu (Stanford University)
- Danny Gutfreund (IBM)
- Joshua Tenenbaum (MIT) ✨ Department of Brain and Cognitive Sciences @MIT
3. Deep learning and drone vision (9:00 AM-1:15 PM)
- Ioannis Pitas (Aristotle University of Thessaloniki)
4. All you need to know about self driving (9:00 AM-5:00 PM)
5. RANSAC in 2020 (9:00 AM-5:00 PM)
- Dániel Baráth (MTA SZTAKI, CMP Prague)
- Jiri Matas (CMP CTU FEE)
- Dmytro Mishkin (Czech Technical University in Prague)
- Ondrej Chum (Visual Recognition Group, Czech Technical University in Prague)
- Tat-Jun Chin (University of Adelaide)
- Rene Ranftl (Intel Labs)
- Sean Fanello (Google)
- Christoph Rhemann (Google)
- Graham Fyffe (Google Inc.)
- Jonathan Taylor (Google Inc.)
- Sofien Bouaziz (Google)
- Adarsh Kowdle (Google)
- Rohit Pandey (Google)
- Sergio Orts-Escolano (Google)
- Paul E Debevec (Google VR)
- Shahram Izadi (Google)
7. Novel View Synthesis: from Depth-Based Warping to Multi-Plane Images (9:00 AM-5:00 PM)
- Orazio Gallo (NVIDIA Research)
- Alejandro Troccoli (NVIDIA)
- Varun Jampani (Google)
8. Learning Representations via Graph-structured Networks (1:15 PM-5:00 PM)
- Xiaolong Wang (UC Berkeley)
- Sifei Liu (NVIDIA)
- Saining Xie (FAIR)
- Shubham Tulsiani (FAIR)
- Chen Sun (Google)
- Han Hu (MSRA)
- Jan Kautz (NVIDIA)
- Ming-Hsuan Yang (U of California at Merced)
- Abhinav Gupta (CMU/FAIR)
- Trevor Darrell (UCB)
9. How to write a good review? (1:15 PM-5:00 PM)
- Laura Leal-Taixé (TUM)
- Torsten Sattler (Chalmers University of Technology)
10. Cycle Consistency and Synchronization in Computer Vision (1:15 PM-5:00 PM)
- Tolga Birdal (TU Munich)
- Qixing Huang (The University of Texas at Austin)
- Federica Arrigoni (Czech Technical University in Prague)
- Leonidas Guibas (Stanford University)
- Marc Pollefeys (ETH)
- Sanja Fidler (UoT)
- Thomas Funkhouser (Princeton)
- Shubham Tulsiani (FAIR)
- Hao Su (UCSD)
- Vladlen Koltun (Intel)
- Jitendra Malik (UCB)
- Matthias Nießner (TUM)
- Jiajun Wu (MIT)
7. 🍃 AI for Content Creation - Lovely ❤
- Maneesh Agrawala (Stanford)
- Irfan Essa (Georgia Tech)
- Sanja Fidler (UoT/NVIDIA)
- Phillip Isola (MIT)
- Angjoo Kanazawa (UCB)
- Ming-Hsuan Yang
- Ying Cao
- Tabitha Yong
- Aaron Hertzmann (Adobe)
- Justin Johnson (U Mich)
- Generative models for image/video synthesis
- Image/video editing
- Image/video inpainting
- Image/video extrapolation
- Image/video translation
- Style transfer
- Text-to-image creation
- Image and video creation for enthusiasts, VFX, architecture, advertisements, art, ...
- 2D/3D graphic design
- Text and typefaces
- Design for documents, Web
- Fashion, garments, and outfits
- Novel applications and datasets
- Jitendra Malik (UCB)
- Aude Oliva (MIT) 🦄
- Abhinav Gupta (CMU)
- Chelsea Finn (Stanford)
- Animesh Garg (UoT)
- Angjoo Kanazawa (UCB)
Not decided yet.
12. Joint workshop on Long Term Visual Localization, Visual Odometry and Geometric and Learning-based SLAM
- Alison Gopnik (Berkeley)
- Jitendra Malik (Berkeley)
- Aude Oliva (MIT)
- Elizabeth Spelke (Harvard)
- Joshua Tenenbaum (MIT)
- Daniel Yurovsky (CMU)
- Larry Zitnick (FAIR)
- Taco Cohen (U of Amsterdam/Qualcomm) 🐈
- Kristen Grauman (UT Austin) 🦄
- Jean-Francois Lalonde (Universite Laval)
- Matthias Niessner (Technical University of Munich)
- Tomas Pajdla (Czech Technical University-Prague)
- Davide Scaramuzza (ETH)
- DreamVu (Rajat Aggarwal, CEO)
- Facebook (Albert Parra Pozo, Research Scientist)
- Ricoh Company (Hirochika Fujiki)
- Valeo (Patrick Denny, Senior Expert)
- Wormpex AI (Gang Hua, VP and Chief Scientist)
- Zillow Group (Sing Bing Kang, Distinguished Scientist)
- Raquel Urtasun (Uber ATG/UoT)
- Paul Newman (Oxbotica & University of Oxford)
- Andrej Karpathy (Tesla)
- Alex Kendall (Wayve/Cambridge)
- Kris Kitani (CMU)
- Philip Torr (Oxford)
- Nic Lane (Oxford)
- Diana Marculescu (UT Austin)
- Song Han (MIT)
- Bill Dally (Stanford)
- Chelsea Finn (Stanford)
1. 3rd Tutorial on Interpretable Machine Learning for Computer Vision (9:00 AM-1:00 PM)
- Bolei Zhou (CUHK)
2. Learning and understanding single image depth estimation in the wild (9:00 AM-1:00 PM)
- Matteo Poggi (University of Bologna)
- Fabio Tosi (University of Bologna)
- Filippo Aleotti (University of Bologna)
- Stefano Mattoccia (University of Bologna)
- Clément Godard (University College London)
- Michael Firman (Niantic)
- Jamie Watson (Niantic)
- Gabriel Brostow (University College London)
3. Local Features: From SIFT to Differentiable (9:00 AM-1:00 PM)
- Vassileios Balntas (Scape Technologies)
- Dmytro Mishkin (Czech Technical University in Prague)
- Edgar Riba (CVC)
4. Zeroth Order Optimization: Theory and Applications to Deep Learning (9:00 AM-1:00 PM)
- Saining Xie (FAIR)
- Kirillov Alexander (FAIR)
- Ross Girshick (FAIR)
- Kaiming He (FAIR)
- Justin Johnson (University of Michigan)
- Georgia Gkioxari (Facebook)
- Christoph Feichtenhofer (FAIR)
- Haoqi Fan (FAIR)
- Yuxin Wu (FAIR)
- Wan-Yen Lo (FAIR)
- Piotr Dollar (FAIR)
- Ayush Tewari (Max Planck Institute for Informatics)
- Ohad Fried (Stanford)
- Justus Thies (Technical University of Munich)
- Vincent Sitzmann (Stanford University)
- Stephen Lombardi (Facebook Reality Labs)
- Kalyan Sunkavalli (Adobe Research)
- Ricardo Martin-Brualla (Google)
- Tomas Simon (Facebook)
- Jason Saragih (Oculus)
- Matthias Niessner (Technical University of Munich)
- Rohit Pandey (Google)
- Sean Fanello (Google)
- Gordon Wetzstein (Stanford University)
- Jun-Yan Zhu (Adobe Inc.)
- Christian Theobalt (MPI Informatik)
- Maneesh Agrawala (Stanford)
- Eli Shechtman (Adobe Research, US)
- Dan B Goldman (Google, Inc.)
- Michael Zollhöfer (Facebook Reality Labs)
- Zhe Gan (Microsoft)
- Licheng Yu (Microsoft)
- Yu Cheng (Microsoft)
- Jingjing Liu (Microsoft)
- Xiaodong He (JD AI Research)
- Hang Zhang (Amazon Inc)
- Matthias Seeger (Amazon)
- Mu Li (Amazon)
- Claudio Ferrari (U of Florence)
- Stefano Berretti (U of Florence)
- Alberto Del Bimbo (U of Florence)
- Kristen Grauman (UT Austin)
- Larry Davis (U of Maryland)
- Devi Parikh (Georgia Tech)
- Adriana I. Kovashka (Pittsburgh)
- Hui Wu (IBM Research)
- Extreme classification is a rapidly growing research area in computer vision focusing on multi-class and multi-label problems involving an extremely large number of labels (ranging from thousands to billions). Many applications of extreme classification have been found in diverse areas including recognizing faces, retail products and landmarks; image and video tagging; etc. Extreme classification reformulations have led to significant gains over traditional ranking and recommendation techniques for both machine learning and computer vision applications, leading to their deployment in several popular products used by millions of people worldwide. This has come about due to recent key advances in modeling structural relations among labels, the development of sub-linear time algorithms for training and inference, and the development of appropriate loss functions which are unbiased with respect to missing labels and provide greater rewards for the accurate prediction of rare labels. Extreme classification raises a number of interesting research questions including but not limited to (a toy evaluation-metric sketch follows this list):
- Large-scale fine-grained classification and embeddings
- Cross-modality modeling of visual and label spaces
- Distributed and parallel learning in extremely large output spaces
- Learning from highly imbalanced data
- Dealing with tail labels and learning from very few data points per label
- Zero-shot learning and extensible output spaces
- Transfer learning and domain adaptation
- Modeling structural relations among labels
- Structured output prediction and multi-task learning
- Log-time and log-space training and prediction, and prediction on a test-time budget
- Statistical analysis and generalization bounds
- Applications to new domains
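- As a toy illustration of the "greater rewards for rare labels" point above (my sketch, assuming NumPy; the propensity values are synthetic stand-ins): an unnormalized propensity-scored precision@k, the evaluation metric of Jain et al. (KDD 2016), which upweights correct predictions on tail labels:

```python
# Hypothetical example: propensity-scored precision@k for extreme
# multi-label classification. In practice, propensities are estimated
# from label frequencies; here they are random stand-ins.
import numpy as np

def psp_at_k(scores, true_labels, propensities, k=5):
    """scores: (n_labels,) model scores; true_labels: (n_labels,) 0/1;
    propensities: (n_labels,) estimated P(label observed | label relevant).
    Rare labels have small propensities, so hits on them count more."""
    top_k = np.argsort(-scores)[:k]
    return np.sum(true_labels[top_k] / propensities[top_k]) / k

n_labels = 1_000_000  # "extreme": on the order of millions of labels
rng = np.random.default_rng(0)
scores = rng.standard_normal(n_labels)
true = np.zeros(n_labels)
true[rng.choice(n_labels, 10, replace=False)] = 1  # a few relevant labels
prop = rng.uniform(0.05, 1.0, n_labels)  # tail labels -> small propensity
print(psp_at_k(scores, true, prop, k=5))
```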
- Trevor Darrel (Berkeley)
- Jia Deng (Princeton)
- Dhruv Mahajan (Facebook)
- Deva Ramanan (CMU & Argo)
- Chuck Rosenberg (Pinterest)
- Olga Russakovsky (Princeton)
- Traditional, keypoint-based formulations for image matching are still very competitive on tasks such as Structure from Motion (SfM), despite recent efforts on tackling pose estimation with dense networks. In practice, many state-of-the-art pipelines still rely on methods that stood the test of time, such as SIFT or RANSAC (a minimal sketch of this classical pipeline follows the topic list below).
- In this workshop, we aim to encourage novel strategies for image matching that deviate from and advance traditional formulations, with a focus on large-scale, wide-baseline matching for 3D reconstruction or pose estimation. This can be achieved by applying new technologies to sparse feature matching, or doing away with keypoints and descriptors entirely. The workshop topics include (but are not limited to):
- Reformulating keypoint extraction and matching pipelines with deep networks.
- Applying geometric constraints into the training of (sparse or dense) deep networks.
- Leveraging additional cues such as semantics.
- Developing adversarial methods to deal with conditions where current methods fail (weather changes, day versus night, etc.).
- Exploring attention mechanisms to match salient image regions.
- Integrating differentiable components into 3D reconstruction frameworks.
- Matching across different data modalities such as aerial versus ground.
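- For reference, a minimal sketch of the classical pipeline the description mentions, assuming OpenCV ≥ 4.4 (where SIFT lives in the main module); the image paths are placeholders:

```python
# Classical wide-baseline matching: SIFT keypoints, ratio-test matching,
# and RANSAC model fitting. File names below are placeholders.
import cv2
import numpy as np

img1 = cv2.imread("view1.jpg", cv2.IMREAD_GRAYSCALE)
img2 = cv2.imread("view2.jpg", cv2.IMREAD_GRAYSCALE)

sift = cv2.SIFT_create()
kp1, des1 = sift.detectAndCompute(img1, None)
kp2, des2 = sift.detectAndCompute(img2, None)

# Lowe's ratio test keeps only distinctive correspondences
matcher = cv2.BFMatcher(cv2.NORM_L2)
good = [m for m, n in matcher.knnMatch(des1, des2, k=2)
        if m.distance < 0.75 * n.distance]

pts1 = np.float32([kp1[m.queryIdx].pt for m in good])
pts2 = np.float32([kp2[m.trainIdx].pt for m in good])

# RANSAC rejects outlier matches while fitting the fundamental matrix
F, inlier_mask = cv2.findFundamentalMat(pts1, pts2, cv2.FM_RANSAC, 1.0, 0.999)
print(f"{int(inlier_mask.sum())} inliers out of {len(good)} matches")
```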
- Alyosha Efros (UCB) 🐈
- Ivan Laptev (INRIA)
- Jitendra Malik (UCB/FAIR)
- Ming-Yu Liu (NVIDIA)
- Pierre Sermanet (Google Research)
8. 5th International Workshop on Differential Geometry in Computer Vision and Machine Learning: Diff-CVML
- Traditional machine learning, pattern recognition and data analysis methods often assume that input data can be represented well by elements of Euclidean space. While this assumption has worked well for many past applications, researchers have increasingly realized that most data in vision and pattern recognition is intrinsically non-Euclidean, i.e. standard Euclidean calculus does not apply. The exploitation of this geometrical information can lead to a more accurate representation of the inherent structure of the data, better algorithms and better performance in practical applications. In particular, Riemannian geometric principles can be applied to a variety of difficult computer vision problems including face recognition, activity recognition, object detection, biomedical image analysis, and structure-from-motion, to name a few. Consequently, Riemannian geometric computing has become increasingly popular in the computer vision community. Besides nice mathematical formulations, Riemannian computations based on the geometry of underlying manifolds are often faster and more stable than their classical, Euclidean counterparts (a minimal sketch of one such computation follows below).
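- A minimal sketch of one such Riemannian computation, assuming NumPy/SciPy (my illustration, not the workshop's material): the affine-invariant geodesic distance between two symmetric positive-definite (SPD) matrices, d(A, B) = ||log(A^{-1/2} B A^{-1/2})||_F, as used with covariance descriptors in vision:

```python
# Geodesic distance on the SPD manifold under the affine-invariant metric.
import numpy as np
from scipy.linalg import sqrtm, logm, inv

def spd_geodesic_distance(A, B):
    A_inv_sqrt = inv(sqrtm(A))
    M = A_inv_sqrt @ B @ A_inv_sqrt
    # .real discards tiny imaginary residue from the matrix functions
    return np.linalg.norm(logm(M).real, ord="fro")

rng = np.random.default_rng(0)
X = rng.standard_normal((100, 5))
Y = rng.standard_normal((100, 5))
A = np.cov(X, rowvar=False)  # two covariance descriptors (SPD matrices)
B = np.cov(Y, rowvar=False)
print(spd_geodesic_distance(A, B))
```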
- Mehrsan Javan (Sportlogiq, CTO)
- Amos Bercovich (WSC Sports)
- Rainer Lienhart (U of Augsburg)
- John R. Smith (IBM Research)
- This workshop will emphasize future directions beyond supervised learning such as reinforcement learning and weakly supervised learning. Such approaches require far less supervision and allow computers to learn beyond mimicking what is explicitly encoded in a large-scale set of annotations. We are soliciting original contributions that address a wide range of theoretical and practical issues including, but not limited to:
- Learning with limited data, trends and training strategies
- Unsupervised learning
- Transfer learning and domain transfer
- Large scale image and video understanding
- Reinforcement learning
- Unsupervised feature learning and feature selection
- Mid-level representations with deep learning
- Advancements in deep learning
- Domain transfer using deep architectures
- Real time learning applications
- Lifelong learning
- Yann LeCun (FAIR/NYU) 🐺
- Joaquin Quiñonero Candela (FAIR)
- Alan Yuille (JHU)
- Thomas G. Dietterich (Oregon State University)
- Aleksander Madry (MIT)
- Matthias Bethge (Tübingen)
- Laurens van der Maaten (FAIR)
- Been Kim (Google Brain) 🦄
- Cho-Jui Hsieh (UCLA)
- Pin-Yu Chen (IBM)
- Earlence Fernandes (UW-Madison)
- Boqing Gong (Google)
- Raquel Urtasun (UoT/Uber)
- Andreas Geiger (MPI)
- Bo Li (UIUC)
- John Leonard (MIT)
- Deva Ramanan (CMU)
- Emilio Frazzoli (ETH)
- Trevor Darrell (UCB)
- Byron Boots (UW)
- Andrej Karpathy (Tesla)
- Daniel Cremers (TUM)
- Andreas Wendel (Kodiak Robotics)
- Dengxin Dai (ETH)
- Bernt Schiele (MPI)
- Devi Parikh (George Tech)
- Andrei Barbu (MIT)
- Theoretical frameworks and novel objective functions for representation learning. E.g., what are the principles and mathematically sound foundations that enable learning interpretable, universal and parsimonious representations for pattern analysis and pattern synthesis? How to integrate top-down knowledge-oriented losses and bottom-up data-driven losses? How to define interpretability-sensitive loss functions to learn interpretable models from scratch? How to define self-supervised loss functions to enable universal pre-training, domain transfer and life-long learning?
- Novel network architectures and training protocols. E.g., how to tackle the combinatorial and exponentially large space of network architectures? What are the inductive biases that facilitate definitions of compact yet sufficiently expressive sub-spaces? How to search the space in an effective and efficient way? How to learn to grow in the space for life-long learning? How to integrate supervised, unsupervised and active learning in training and adaptation to leverage unlabeled data in an elegant way?
- Adaptive multi-task and transfer learning
- Multi-objective optimization and parameter estimation methods
- Reproducibility in neural architecture search
- Resource constrained architecture search
- Automatic data augmentation and hyperparameter optimization
- Unsupervised learning, domain transfer and life-long learning
- Computer vision datasets and benchmarks for neural architecture search
- Jitendra Malik (UCB)
- Song-Chun Zhu (UCLA)
- Piotr Dollar (FAIR)
- Michael S. Ryoo (Google Brain/Stony Brook)
- Raquel Urtasun (Uber ATG/UoT)
- Song Han (MIT)
- David Xianfeng Gu (Stony Brook)
- Alan Yuille (JHU)
19. The 3rd Workshop and Prize Challenge: Bridging the Gap between Computational Photography and Visual Recognition (UG2+)
- Leonidas J. Guibas (Stanford)
- Jitendra Malik (UCB)
- Denver Dash
23. Catch UAVs that Want to Watch You: Detection and Tracking of Unmanned Aerial Vehicle (UAV) in the Wild and the 1st Anti-UAV Challenge
- Xiang Ma (Amazon)
- Timnit Gebru (Google)
- Yusuke Matsui (The University of Tokyo)
- Takuma Yamaguchi (Mercari Inc.)
- Zheng Wang (National Institute of Informatics)
- Ashwin Swaminathan (Magic Leap, Inc)
- Prateek Singhal (Magic Leap, Inc)
- David Molyneaux (Magic Leap, Inc)
- Frank Steinbruecker (Magic Leap, Inc)
- Ali Shaw-Rockney (Magic Leap, Inc)
- Siddharth Choudhary (Magic Leap, Inc)
- Lomesh Agarwal (Magic Leap, Inc)
- Vitaliy Kurlin (University of Liverpool)
- Vivek Sharma (MIT, KIT)
- Ali Diba (KU Leuven)
- Luc Van Gool (ETH Zurich)
- Manohar Paluri (Facebook)
- Jürgen Gall (University of Bonn)
- Mohsen Fayyaz (University of Bonn)
- Jason Dai (Intel)
8. Towards Annotation-Efficient Learning: Few-Shot, Self-Supervised, and Incremental Learning Approaches
- Spyros Gidaris (valeo ai)
- Karteek Alahari (Inria)
- Andrei Bursuc (valeo ai)
- Relja Arandjelović (DeepMind)
- Achuta Kadambi (UCLA)
- William T. Freeman (MIT)
- Laura Waller (UC Berkeley)
- Katerina Fragkiadaki (Carnegie Mellon University)
- Ayan Chakrabarti (Washington University in St. Louis)
- Wenjin Wang (Philips)
- Gerard de Haan (TU/e)
- Shiwen Mao (Auburn University)
- Xuyu Wang (California State University, Sacramento)
- Mingmin Zhao (MIT)
- Marcelo Bertalmío (Universitat Pompeu Fabra)