Abstract


Panoptic tracking enables pixel-level scene interpretation of videos by integrating instance tracking with panoptic segmentation. This provides robots with a spatio-temporal understanding of the environment, an essential attribute for their operation in dynamic environments. In this paper, we propose a novel approach for panoptic tracking that simultaneously captures general semantic information and instance-specific appearance and motion features. Unlike existing methods that overlook dynamic scene attributes, our approach leverages both appearance and motion cues through dedicated network heads. These interconnected heads employ multi-scale deformable convolutions that reason about scene motion offsets with semantic context and motion-enhanced appearance features to learn tracking embeddings. Furthermore, we introduce a novel two-step fusion module that integrates the outputs from both heads by first matching instances from the current time step with propagated instances from previous time steps and subsequently refining associations using motion-enhanced appearance embeddings, improving robustness in challenging scenarios. Extensive evaluations of our proposed MAPT model on two benchmark datasets demonstrate that it achieves state-of-the-art performance in panoptic tracking accuracy, surpassing prior methods in maintaining object identities over time.
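
For intuition, the sketch below illustrates one way such a two-step association could look in code: a first matching pass between current instances and motion-propagated instances, followed by a refinement pass using appearance embeddings. The function names, thresholds, and the specific use of IoU-based Hungarian matching and cosine similarity are illustrative assumptions, not the exact procedure of the paper.

```python
import numpy as np
from scipy.optimize import linear_sum_assignment


def mask_iou(mask_a, mask_b):
    """IoU between two boolean instance masks."""
    inter = np.logical_and(mask_a, mask_b).sum()
    union = np.logical_or(mask_a, mask_b).sum()
    return inter / union if union > 0 else 0.0


def two_step_fusion(curr_masks, prop_masks, curr_embeds, prev_embeds,
                    iou_thresh=0.5, sim_thresh=0.7):
    """Hypothetical two-step association sketch.

    Step 1: match current instances to motion-propagated instances by mask overlap.
    Step 2: resolve remaining instances with appearance-embedding similarity.
    Returns a dict mapping current instance index -> previous instance index.
    """
    if len(curr_masks) == 0 or len(prop_masks) == 0:
        return {}

    # Step 1: Hungarian matching on IoU with the propagated masks.
    iou = np.array([[mask_iou(c, p) for p in prop_masks] for c in curr_masks])
    rows, cols = linear_sum_assignment(-iou)
    matches = {int(r): int(c) for r, c in zip(rows, cols) if iou[r, c] >= iou_thresh}

    # Step 2: refine unmatched instances via cosine similarity of embeddings.
    unmatched = [i for i in range(len(curr_masks)) if i not in matches]
    free_prev = [j for j in range(len(prop_masks)) if j not in matches.values()]
    for i in unmatched:
        if not free_prev:
            break
        sims = [np.dot(curr_embeds[i], prev_embeds[j]) /
                (np.linalg.norm(curr_embeds[i]) * np.linalg.norm(prev_embeds[j]) + 1e-8)
                for j in free_prev]
        best = int(np.argmax(sims))
        if sims[best] >= sim_thresh:
            matches[i] = free_prev.pop(best)
    return matches
```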

Technical Approach

Figure: Our MAPT architecture is composed of a shared backbone and four interconnected heads for semantic segmentation, instance segmentation, and motion-based and appearance-based object tracking. Our motion head employs multi-scale deformable convolutions to capture rich semantic and motion features, while the appearance head focuses on learning instance-specific visual representations. These two heads complement each other, as motion cues help in scenarios where appearance alone is ambiguous, while appearance features provide stability when motion is unreliable. By combining their outputs with our fusion block, we enhance the robustness of panoptic tracking in dynamic environments.
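
To make the data flow concrete, the following PyTorch sketch shows a shared backbone feeding four heads of this kind. Everything here is an assumption for illustration only: the ResNet-50 backbone, channel sizes, a single deformable block standing in for the multi-scale deformable convolutions, the Panoptic-DeepLab-style instance head, and the simple additive motion enhancement of the appearance embeddings.

```python
import torch
import torch.nn as nn
import torchvision


class DeformBlock(nn.Module):
    """Deformable convolution whose sampling offsets are predicted from the input."""

    def __init__(self, in_ch, out_ch):
        super().__init__()
        self.offset = nn.Conv2d(in_ch, 2 * 3 * 3, kernel_size=3, padding=1)
        self.deform = torchvision.ops.DeformConv2d(in_ch, out_ch, kernel_size=3, padding=1)

    def forward(self, x):
        return self.deform(x, self.offset(x))


class PanopticTrackerSketch(nn.Module):
    """Illustrative layout: shared backbone + four heads (not the exact MAPT config)."""

    def __init__(self, num_classes=19, embed_dim=128):
        super().__init__()
        resnet = torchvision.models.resnet50(weights=None)
        self.backbone = nn.Sequential(*list(resnet.children())[:-2])  # shared features
        c = 2048
        self.semantic_head = nn.Conv2d(c, num_classes, 1)   # per-pixel semantics
        self.instance_head = nn.Conv2d(c, 1 + 2, 1)         # center heatmap + 2D offsets (assumed)
        self.motion_head = DeformBlock(c, embed_dim)         # motion-aware features
        self.appearance_head = nn.Conv2d(c, embed_dim, 1)    # appearance embeddings

    def forward(self, x):
        feats = self.backbone(x)
        motion = self.motion_head(feats)
        return {
            "semantic": self.semantic_head(feats),
            "instance": self.instance_head(feats),
            "motion": motion,
            # appearance embeddings enhanced by motion features before tracking
            "appearance": self.appearance_head(feats) + motion,
        }
```

The outputs of the motion and appearance heads would then be consumed by a fusion step such as the two-step association sketched above.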

Code

A PyTorch implementation of this project can be found in our GitHub repository. It is released under the GPLv3 license for academic usage; for any commercial purpose, please contact the authors.

Publications

If you find our work useful, please consider citing our paper:

Elias Greve, Martin Büchner, Niclas Vödisch, Wolfram Burgard, and Abhinav Valada
Collaborative Dynamic 3D Scene Graphs for Automated Driving
Under review, 2023.

(PDF) (BibTeX)

Authors

Juana Valeria Hurtado

University of Freiburg

Sajad Marvi

University of Freiburg

Rohit Mohan

University of Freiburg

Abhinav Valada

University of Freiburg

Acknowledgment

This work was funded by the German Research Foundation (DFG) Emmy Noether Program grant number 468878300.