Eleventh International Workshop on Egocentric Perception, Interaction and Computing
held in conjunction with the 3rd Ego4D Workshop
June 19th, 2023


Full-Day Workshop - 19th June 2023

All times are local to Vancouver (Pacific Time).

Workshop Location: Room West 111-112

Time Event Authors
Session 1 - Chairs: Giovanni Maria Farinella and Michael Wray
08:30-08:45 Welcome and Introductions
08:45-09:15 Keynote 1 - Andrea Vedaldi (University of Oxford, UK)
09:15-10:15 Ego4D Challenges - First Session
9:15 - 9:25 -- Opening presentation
9:25 - 9:45 -- Episodic Memory
9:45 - 10:00 -- AV & Social
10:00 - 10:15 -- Forecasting
10:15-10:45 Coffee Break
Session 2 - Chairs: David Crandall and Mike Shou
10:45-11:15 Keynote 2 - Hyun Soo Park (University of Minnesota, US)
11:15-12:00 Ego4D Challenges - Second Session
In-person presentations:
GroundNLQ @ Ego4D Natural Language Queries Challenge 2023. Authors: Zhijian Hou (City University of Hong Kong), Lei Ji (Microsoft), Difei Gao (NUS), Wanjun Zhong (Sun Yat-Sen University), Kun Yan (Beihang University), Chao Li (Microsoft), W. K. Chan (City University of Hong Kong), Chong-Wah Ngo (Singapore Management University), Mike Zheng Shou (National University of Singapore), Nan Duan (Microsoft Research)

STHG: Spatial-Temporal Heterogeneous Graph Learning for Advanced Audio-Visual Diarization. Authors: Kyle Min (Intel Labs)

QuAVF: Quality-aware Audio-Visual Fusion for Ego4D Talking to Me Challenge. Authors: Hsi-Che Lin (National Taiwan University), Chien-Yi Wang (NVIDIA), Min-Hung Chen (NVIDIA), Szu-Wei Fu (NVIDIA), Yu-Chiang Frank Wang (National Taiwan University)

Recorded presentations:
Single-Stage Visual Query Localization. Authors: Hanwen Jiang (University of Texas at Austin), Kristen Grauman (University of Texas at Austin & Meta AI)

EgoLoc: Revisiting 3D Object Localization from Egocentric Videos with Visual Queries. Authors: Jinjie Mai (KAUST), Abdullah Hamdi (KAUST), Silvio Giancola (KAUST), Chen Zhao (KAUST), Bernard Ghanem (KAUST)

Action Sensitivity Learning for the Ego4D Episodic Memory Challenge 2023. Authors: Jiayi Shao (Zhejiang University), Xiaohan Wang (Zhejiang University), Ruijie Quan (Zhejiang University), Yi Yang (Zhejiang University)

Palm: Predicting Actions through Language Models @ Ego4D Long-Term Action Anticipation Challenge 2023. Authors: Daoji Huang (ETH Zurich), Otmar Hilliges (ETH Zurich), Luc Van Gool (ETH Zurich), Xi Wang (ETH Zurich)

11:50-12:00 Prize Ceremony
12:00-12:30 Invited CVPR Papers - First Session
12:00-12:06 Paper 1: Egocentric Auditory Attention Localization in Conversations Authors: Fiona Ryan (Georgia Institute of Technology), Hao Jiang (Meta Reality Labs Research), Abhinav Shukla (Meta Reality Labs Research), James M. Rehg (Georgia Institute of Technology), Vamsi Krishna Ithapu (Meta Reality Labs Research)
12:06-12:12 Paper 2: Where is my Wallet? Modeling Object Proposal Sets for Egocentric Visual Query Localization Authors: Mengmeng Xu (KAUST), Yanghao Li (Meta AI), Cheng-Yang Fu (Meta AI), Bernard Ghanem (KAUST), Tao Xiang (Meta AI), Juan-Manuel Pérez-Rúa (Meta AI)
12:12-12:15 Q&A
12:15-12:21 Paper 3: Ego-Body Pose Estimation via Ego-Head Pose Estimation Authors: Jiaman Li (Stanford University), Karen Liu (Stanford University), Jiajun Wu (Stanford University)
12:21-12:27 Paper 4: Learning Video Representations from Large Language Models Authors: Yue Zhao (The University of Texas at Austin), Ishan Misra (FAIR, Meta AI), Philipp Krähenbühl (The University of Texas at Austin), Rohit Girdhar (FAIR, Meta AI)
12:27-12:30 Q&A
12:30-13:30 Lunch Break
Session 3 - Chairs: Dima Damen and David Fouhey
13:30-14:00 Keynote 3 - Suraj Nair (Stanford University, US)
14:00-14:45 EPIC Challenges - First Session
14:45-15:15 Accepted Abstracts - First Session
In-person presentations:
FineBio: A Fine-Grained Video Dataset of Biological Experiments with Hierarchical Annotations. Authors: Takuma Yagi (National Institute of Advanced Industrial Science and Technology)*; Misaki Ohashi (The University of Tokyo); Yifei Huang (The University of Tokyo); Ryosuke Furuta (The University of Tokyo); Shungo Adachi (National Cancer Center Research Institute); Toutai Mitsuyama (National Institute of Advanced Industrial Science and Technology); Yoichi Sato (The University of Tokyo)

StillFast: An End-to-End Approach for Short-Term Object Interaction Anticipation. Authors: Francesco Ragusa (University of Catania)*; Giovanni Maria Farinella (University of Catania); Antonino Furnari (University of Catania).

MECCANO: A Multimodal Egocentric Dataset for Humans Behavior Understanding in the Industrial-like Domain. Authors: Francesco Ragusa (University of Catania)*; Antonino Furnari (University of Catania); Giovanni Maria Farinella (University of Catania).

Recorded Presentations:
Prompting Large Language Models to Reformulate Queries for Moment Localization. Authors: Wenfeng Yan (Fudan University)*; Shaoxiang Chen (Fudan University); Zuxuan Wu (Fudan University); Yu-Gang Jiang (Fudan University). Recorded Presentation Link (YouTube).

An Overview of Challenges in Egocentric Text-Video Retrieval. Authors: Burak Satar (Nanyang Technological University)*; Hongyuan Zhu (Institute for Infocomm Research, Agency for Science, Technology and Research (A*STAR), Singapore); Hanwang Zhang (Nanyang Technological University); Joo-Hwee Lim (Institute for Infocomm Research). Recorded Presentation Link (YouTube).

15:15-15:45 Coffee Break
Session 4 - Chairs: Antonino Furnari and Michael Wray
15:45-16:15 Keynote 4 - David Fouhey (NYU/University of Michigan, US)
16:15-16:45 EPIC Challenges - Second Session
16:45-17:15 Aria Datasets and Challenges - Updates and Announcements
17:15-17:45 Accepted Abstracts - Second Session
In-person presentations:
Enhancing Transformer Backbone for Egocentric Video Action Segmentation. Authors: Sakib Reza (Northeastern University)*; Balaji Sundareshan (Northeastern University); Mohsen Moghaddam (Northeastern University); Octavia Camps (Northeastern University). Abstract's arXiv Link.

Situated Cameras, Situated Knowledges: Towards an Egocentric Epistemology for Computer Vision. Authors: Samuel P Goree (Indiana University)*; David Crandall (Indiana University).

Recorded Presentations:
Monitoring Parkinson's Disease Progression Through Egocentric Vision: A Precision Health Approach. Authors: Nevasini NA Sasikumar (PESU)*; Krishna Sri Ipsit Mantri (Indian Institute of Technology Bombay).

Human Action Recognition in Egocentric Perspective Using 2D Object and Hands Pose. Authors: Wiktor Mucha (Vienna University of Technology, Computer Vision Lab). Recorded Presentation Link (YouTube). Abstract's arXiv Link.

17:45-18:15 Invited CVPR Papers - Second Session
17:45-17:51 Paper 5: ARCTIC: A Dataset for Dexterous Bimanual Hand-Object Manipulation Authors: Zicong Fan (ETH Zürich, Switzerland), Omid Taheri (Max Planck Institute for Intelligent Systems, Tübingen, Germany), Dimitrios Tzionas (University of Amsterdam, the Netherlands), Muhammed Kocabas (Max Planck Institute for Intelligent Systems, Tübingen, Germany), Manuel Kaufmann (ETH Zürich, Switzerland), Michael J. Black (Max Planck Institute for Intelligent Systems, Tübingen, Germany), Otmar Hilliges (ETH Zürich, Switzerland)
17:51-17:57 Paper 6: Egocentric Audio-Visual Object Localization Authors: Chao Huang (University of Rochester), Yapeng Tian (University of Rochester), Anurag Kumar (Meta Reality Labs Research), Chenliang Xu (University of Rochester)
17:57-18:00 Q&A
18:00-18:06 Paper 7: Scene-aware Egocentric 3D Human Pose Estimation Authors: Jian Wang (Max Planck Institute for Informatics), Diogo Luvizon (Max Planck Institute for Informatics), Weipeng Xu (Meta Reality Labs), Lingjie Liu (Max Planck Institute for Informatics), Kripasindhu Sarkar (Google), Christian Theobalt (Max Planck Institute for Informatics)
18:06-18:12 Paper 8: AssemblyHands: Towards Egocentric Activity Understanding via 3D Hand Pose Estimation Authors: Takehiko Ohkawa (Meta Reality Labs; The University of Tokyo), Kun He (Meta Reality Labs), Fadime Sener (Meta Reality Labs), Tomas Hodan (Meta Reality Labs), Luan Tran (Meta Reality Labs), Cem Keskin (Meta Reality Labs).
18:12-18:15 Q&A
18:15-18:45 Future Plans and Closing Remarks