Sparse activation maps for interpreting 3D object detection

Qiuxiao Chen, Pengfei Li, Meng Xu, Xiaojun Qi

Research output: Chapter in Book/Report/Conference proceeding › Conference contribution › peer-review

4 Scopus citations

Abstract

We propose a technique to generate "visual explanations" for interpreting volumetric-based 3D object detection networks. Specifically, we use the average pooling of weights to produce a Sparse Activation Map (SAM), which highlights the important regions of the 3D point cloud data. SAMs are applicable to any volumetric-based model (i.e., they are model agnostic) and provide intuitive intermediate results at different layers to help understand complex network structures. The SAMs at the 3D feature map layer and the 2D feature map layer help to understand how effectively the neurons capture object information. The SAMs at the classification layer for each object class help to understand the true positives and false positives of each network. Experimental results on the KITTI dataset demonstrate that the visual observations from the SAMs match the detection results of three volumetric-based models.
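The abstract's core idea, weighting feature channels via average-pooled weights to highlight important regions, resembles a CAM-style computation. The sketch below is a hypothetical, minimal illustration of that style of activation map on a 2D feature layer; the function name, shapes, and the ReLU/normalization steps are assumptions, not the paper's exact method.

```python
import numpy as np

def sparse_activation_map(feature_map, channel_weights):
    """Hypothetical CAM-style sketch: weight each feature channel by an
    (average-pooled) per-channel weight, sum over channels, and keep only
    positive responses so the resulting map is sparse.

    feature_map:     (C, H, W) activations from a 2D feature layer.
    channel_weights: (C,) per-channel weights, e.g. from a detection head.
    Returns an (H, W) map normalized to [0, 1].
    """
    # Weighted sum over the channel axis -> one (H, W) activation map.
    cam = np.tensordot(channel_weights, feature_map, axes=([0], [0]))
    # Keep only positive evidence; this is what makes the map sparse.
    cam = np.maximum(cam, 0.0)
    # Normalize for visualization (guard against an all-zero map).
    peak = cam.max()
    if peak > 0:
        cam = cam / peak
    return cam

# Toy example with random activations and mixed-sign weights.
feats = np.random.rand(4, 8, 8)
w = np.array([0.5, -0.2, 0.1, 0.3])
sam = sparse_activation_map(feats, w)
```

In practice such a map would be overlaid on the bird's-eye-view projection of the point cloud to inspect which regions drive a detection.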

Original language: English
Title of host publication: Proceedings - 2021 IEEE/CVF Conference on Computer Vision and Pattern Recognition Workshops, CVPRW 2021
Publisher: IEEE Computer Society
Pages: 76-84
Number of pages: 9
ISBN (Electronic): 9781665448994
DOIs
State: Published - Jun 2021
Event: 2021 IEEE/CVF Conference on Computer Vision and Pattern Recognition Workshops, CVPRW 2021 - Virtual, Online, United States
Duration: 19 Jun 2021 to 25 Jun 2021

Publication series

Name: IEEE Computer Society Conference on Computer Vision and Pattern Recognition Workshops
ISSN (Print): 2160-7508
ISSN (Electronic): 2160-7516

Conference

Conference: 2021 IEEE/CVF Conference on Computer Vision and Pattern Recognition Workshops, CVPRW 2021
Country/Territory: United States
City: Virtual, Online
Period: 19/06/21 to 25/06/21
