Opening Up Open-World Tracking

Yang Liu1,*
Idil Esen Zulfikar2,*
Jonathon Luiten2,3,*
Achal Dave3,*
Deva Ramanan3
Bastian Leibe2
Aljoša Ošep1,3
Laura Leal-Taixé1
Technical University of Munich1
RWTH Aachen University2
Carnegie Mellon University3
* These authors contributed equally to this work.

CVPR 2022 (Oral)


[Paper] [Data] [Benchmark] [Baseline Code] [Evaluation Code]

Abstract



Tracking and detecting any object, including ones never seen before during model training, is a crucial but elusive capability of autonomous systems. An autonomous agent that is blind to never-before-seen objects poses a safety hazard when operating in the real world – and yet this is how almost all current systems work. One of the main obstacles to advancing tracking of any object is that this task is notoriously difficult to evaluate. A benchmark that allows an apples-to-apples comparison of existing efforts is a crucial first step towards advancing this important research field. This paper addresses this evaluation deficit and lays out the landscape and evaluation methodology for detecting and tracking both known and unknown objects in the open-world setting. We propose a new benchmark, TAO-OW: Tracking Any Object in an Open World, analyze existing efforts in multi-object tracking, and construct a baseline for this task while highlighting future challenges. We hope to open a new front in multi-object tracking research that will bring us a step closer to intelligent systems that can operate safely in the real world.


TAO-OW Benchmark


TAO-OW Benchmark. Class distribution of our TAO-OW benchmark (validation set), showing both the known classes, for which training data is given, and the unknown classes, which are evaluated as a proxy for the infinite variety (unknown unknowns) of objects that could appear in an open world. Note that the y-axis is log-scaled.


TAO-OW classes. Word cloud showing known (left) and unknown (right) classes in our TAO-OW benchmark, with word-size proportional to frequency.


Known
Unknown

Examples of known object categories (left) and unknown object categories (right).



Open-World Tracking Accuracy (OWTA)

We propose the OWTA (Open-World Tracking Accuracy) metric for open-world tracking, a generalization of the recently proposed HOTA metric for closed-world tracking. OWTA takes both detection recall (DetRe) and association accuracy (AssA) into account and combines them into a single score:

$$ OWTA_{\alpha} = \sqrt{DetRe_{\alpha} \cdot AssA_{\alpha}} $$

where

$$ DetRe_{\alpha} = \frac{|TP_{\alpha}|}{|TP_{\alpha}| + |FN_{\alpha}|} $$

$$ AssA_{\alpha} = \frac{1}{|TP_{\alpha}|} \sum_{c \in TP_{\alpha}} \frac{TPA_{\alpha}(c)}{TPA_{\alpha}(c) + FPA_{\alpha}(c) + FNA_{\alpha}(c)} $$
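The formulas above can be sketched in a few lines of Python. This is a minimal illustration for a single localization threshold α, assuming the TP/FN matching and the per-TP association counts (TPA, FPA, FNA) have already been computed; the function names and input format are hypothetical, not the actual evaluation-code API.

```python
from math import sqrt


def detection_recall(num_tp: int, num_fn: int) -> float:
    """DetRe_alpha = |TP| / (|TP| + |FN|)."""
    return num_tp / (num_tp + num_fn)


def association_accuracy(assoc_counts) -> float:
    """AssA_alpha: mean over all TPs c of TPA(c) / (TPA(c) + FPA(c) + FNA(c)).

    assoc_counts is a list with one (tpa, fpa, fna) tuple per true positive.
    """
    scores = [tpa / (tpa + fpa + fna) for tpa, fpa, fna in assoc_counts]
    return sum(scores) / len(scores)


def owta(num_tp: int, num_fn: int, assoc_counts) -> float:
    """OWTA_alpha = sqrt(DetRe_alpha * AssA_alpha)."""
    return sqrt(detection_recall(num_tp, num_fn)
                * association_accuracy(assoc_counts))


# Toy example: 2 TPs, 2 FNs -> DetRe = 0.5; each TP has association
# score 0.5 -> AssA = 0.5; OWTA = sqrt(0.5 * 0.5) = 0.5.
print(owta(2, 2, [(1, 0, 1), (1, 1, 0)]))
```

Note that, as in HOTA, the final benchmark score is obtained by averaging the per-threshold score over a range of localization thresholds α; the geometric mean ensures a method must do well on both detection and association to score highly.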



Open-World Tracking Baseline (OWTB)

OWTB Results on the TAO-OW Val and Test Sets



TAO-OW val and test sets. Results of our final Open-World Tracking Baseline (OWTB) compared to previous state-of-the-art trackers on the TAO-OW val and test sets. *: non-open-world (trained on unknown classes); †: contains overlapping results.


Overview Video



Paper and Code

Yang Liu*, Idil Esen Zulfikar*, Jonathon Luiten*, Achal Dave*, Deva Ramanan, Bastian Leibe, Aljoša Ošep, Laura Leal-Taixé.
Opening up Open-World Tracking
Proc. Computer Vision and Pattern Recognition (CVPR). 2022.
[Paper] [Data] [Benchmark] [Baseline Code] [Evaluation Code]