4D Radar: A Novel Sensing Paradigm for 3D Object Detection


3D object detection obtains the position, size, and orientation of objects in 3D space, and is widely used in autonomous driving perception, robot manipulation, and other applications. Sensors such as LiDAR, RGB cameras, and depth cameras are commonly used for this task. In recent years, several works have used 4D radar, also known as 4D millimeter wave (mmWave) radar or 4D imaging radar, as a primary or secondary sensor for 3D object detection. Compared to 3D radar, 4D radar measures not only the distance, direction, and relative (Doppler) velocity of a target, but also its height. Thanks to its robustness under adverse weather conditions and its lower cost, 4D radar is expected to replace low-beam LiDAR in the future. This post summarizes 4D radar based 3D object detection methods and datasets, and will be continuously updated at https://github.com/liuzengyun/Awesome-3D-Detection-with-4D-Radar.
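As a concrete illustration of the measured dimensions, the sketch below converts a single radar detection given in spherical form (range, azimuth, elevation) into Cartesian coordinates. It assumes a common automotive frame (x forward, y left, z up) and radians for angles; check your sensor's convention before using it. The Doppler velocity is carried alongside the position as a per-point attribute.

```python
import math

def radar_to_cartesian(r, azimuth, elevation):
    """Convert a radar detection (range, azimuth, elevation) to x/y/z.

    Angles in radians; assumes x forward, y left, z up (a common
    automotive convention -- verify against your sensor's frame).
    """
    x = r * math.cos(elevation) * math.cos(azimuth)
    y = r * math.cos(elevation) * math.sin(azimuth)
    z = r * math.sin(elevation)
    return x, y, z

# A detection 50 m ahead, 10 degrees to the left, 2 degrees up,
# with a Doppler velocity of -3.2 m/s (approaching):
x, y, z = radar_to_cartesian(50.0, math.radians(10), math.radians(2))
doppler_velocity = -3.2  # m/s, measured along the line of sight
```

The elevation angle is exactly what distinguishes 4D radar from 3D radar: without it, z cannot be recovered and detections collapse onto the ground plane.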


Basic Knowledge

Different 4D Radar Data Representations

  • PC: Point Cloud
  • ADC: Analog-to-Digital Converter signal
  • RT: Radar Tensor
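The three representations correspond to different stages of the radar signal chain. The sketch below illustrates their typical layouts; all shapes are illustrative assumptions and vary by sensor and dataset.

```python
# Illustrative layouts of the three 4D radar data representations.
# All shapes below are made-up examples -- real dimensions are
# sensor- and dataset-specific.

# PC: a sparse list of detections, each carrying position,
# Doppler velocity and radar cross section (RCS).
point = {"x": 12.3, "y": -1.8, "z": 0.4, "v_doppler": -2.1, "rcs": 8.5}

# ADC: raw complex samples straight from the analog-to-digital
# converter, indexed as [chirp][sample][rx_antenna].
adc_shape = (128, 256, 16)   # chirps x samples-per-chirp x RX channels

# RT: a dense tensor obtained after FFT processing of the ADC data,
# e.g. range x azimuth x elevation x Doppler bins.
rt_shape = (256, 107, 37, 64)
```

Point clouds are compact but lossy; the ADC signal and radar tensor preserve more of the raw measurement (which is why K-Radar ships radar tensors) at a much higher storage cost.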

Datasets

| Dataset | Sensors | Radar Data | Source | Annotations | URL | Other |
| --- | --- | --- | --- | --- | --- | --- |
| Astyx | 4D Radar, LiDAR, Camera | PC | 19'EuRAD | 3D bbox | github, paper | ~500 frames |
| RADIal | 4D Radar, LiDAR, Camera | PC, ADC, RT | 22'CVPR | 2D bbox, seg | github, paper | 8,252 labeled frames |
| View-of-Delft (VoD) | 4D Radar, LiDAR, Stereo Camera | PC | 22'RA-L | 3D bbox | website | 8,693 frames |
| TJ4DRadSet | 4D Radar, LiDAR, Camera, GNSS | PC | 22'ITSC | 3D bbox, TrackID | github, paper | 7,757 frames |
| K-Radar | 4D Radar, LiDAR, Stereo Camera, RTK-GPS | RT | 22'NeurIPS | 3D bbox, TrackID | github, paper | 35K frames; 360° camera |
| Dual Radar | dual 4D Radars, LiDAR, Camera | PC | 23'arXiv | 3D bbox, TrackID | paper | 10K frames |
| L-RadSet | 4D Radar, LiDAR, 3 Cameras | PC | 24'TIV | 3D bbox, TrackID | github, paper | 11.2K frames; annotations out to 220 m |
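Most of these datasets ship radar frames as packed float32 point records in `.bin` files. The loader below is a hypothetical sketch for the View-of-Delft layout, assuming the 7-feature scheme (x, y, z, RCS, v_r, v_r_compensated, time) described by the VoD devkit; verify the feature count and order against the release you are actually using.

```python
import struct

def load_radar_bin(path, num_features=7):
    """Load a radar frame stored as little-endian float32 records.

    Assumes the View-of-Delft 7-feature layout:
    x, y, z, RCS, v_r, v_r_compensated, time.
    `num_features` must match the dataset's actual schema.
    """
    with open(path, "rb") as f:
        raw = f.read()
    floats = struct.unpack("<%df" % (len(raw) // 4), raw)
    # Group the flat float stream into per-point records.
    return [floats[i:i + num_features]
            for i in range(0, len(floats), num_features)]
```

Other datasets use different schemas (e.g. K-Radar distributes dense radar tensors rather than point clouds), so a per-dataset adapter layer is usually unavoidable.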

SOTA Papers

From 4D Radar Point Cloud

  1. RPFA-Net: a 4D RaDAR Pillar Feature Attention Network for 3D Object Detection (21'ITSC)
    • 🔗Link: paper code
    • 🏫Affiliation: Tsinghua University (Xinyu Zhang)
    • 📁Dataset: Astyx
    • 📖Note:
  2. Multi-class road user detection with 3+1D radar in the View-of-Delft dataset (22'RA-L)
    • 🔗Link: paper
    • 🏫Affiliation:
    • 📁Dataset: VoD
    • 📖Note: baseline of VoD
  3. SMURF: Spatial multi-representation fusion for 3D object detection with 4D imaging radar (23'TIV)
    • 🔗Link: paper
    • 🏫Affiliation: Beihang University (Bing Zhu)
    • 📁Dataset: VoD, TJ4DRadSet
    • 📖Note:
  4. PillarDAN: Pillar-based Dual Attention Network for 3D Object Detection with 4D RaDAR (23'ITSC)
    • 🔗Link: paper
    • 🏫Affiliation: Shanghai Jiao Tong University (Lin Yang)
    • 📁Dataset: Astyx
    • 📖Note:
  5. MVFAN: Multi-view Feature Assisted Network for 4D Radar Object Detection (23'ICONIP)
    • 🔗Link: paper
    • 🏫Affiliation: Nanyang Technological University
    • 📁Dataset: Astyx, VoD
    • 📖Note:
  6. SMIFormer: Learning Spatial Feature Representation for 3D Object Detection from 4D Imaging Radar via Multi-View Interactive Transformers (23'Sensors)
    • 🔗Link: paper
    • 🏫Affiliation: Tongji University
    • 📁Dataset: VoD
    • 📖Note:
  7. RadarPillars: Efficient Object Detection from 4D Radar Point Clouds (24'arXiv)
    • 🔗Link: paper
    • 🏫Affiliation: Mannheim University of Applied Sciences, Germany
    • 📁Dataset: VoD
    • 📖Note:

Fusion of 4D Radar & LiDAR

  1. InterFusion: Interaction-based 4D Radar and LiDAR Fusion for 3D Object Detection (22'IROS)
    • 🔗Link: paper
    • 🏫Affiliation: Tsinghua University (Li Wang)
    • 📁Dataset: Astyx
    • 📖Note:
  2. Multi-Modal and Multi-Scale Fusion 3D Object Detection of 4D Radar and LiDAR for Autonomous Driving (23'TVT)
    • 🔗Link: paper
    • 🏫Affiliation: Tsinghua University (Li Wang)
    • 📁Dataset: Astyx
    • 📖Note:
  3. L4DR: LiDAR-4DRadar Fusion for Weather-Robust 3D Object Detection (24'arXiv)
    • 🔗Link: paper
    • 🏫Affiliation: Xiamen University
    • 📁Dataset: VoD, K-Radar
    • 📖Note: For the K-Radar dataset, the authors preprocess the 4D radar sparse tensor by keeping only the top 10,240 points by power measurement. Submitted to 25'AAAI.
  4. Robust 3D Object Detection from LiDAR-Radar Point Clouds via Cross-Modal Feature Augmentation (24'ICRA)
    • 🔗Link: paper code
    • 🏫Affiliation: University of Edinburgh (Chris Xiaoxuan Lu)
    • 📁Dataset: VoD
    • 📖Note:

Fusion of 4D Radar & RGB Camera

  1. LXL: LiDAR Excluded Lean 3D Object Detection With 4D Imaging Radar and Camera Fusion (24'TIV)
    • 🔗Link: paper
    • 🏫Affiliation: Beihang University (Bing Zhu)
    • 📁Dataset: VoD, TJ4DRadSet
    • 📖Note:
  2. RCFusion: Fusing 4-D Radar and Camera With Bird’s-Eye View Features for 3-D Object Detection (23'TIM)
    • 🔗Link: paper
    • 🏫Affiliation: Tongji University
    • 📁Dataset: VoD, TJ4DRadSet
    • 📖Note:

Others

  1. Towards Robust 3D Object Detection with LiDAR and 4D Radar Fusion in Various Weather Conditions (24'CVPR)
    • 🔗Link: paper code
    • 🏫Affiliation: KAIST
    • 📁Dataset: K-Radar
    • 📖Note: This method takes LiDAR point cloud, 4D radar tensor (not point cloud) and image as input.

Representative researchers

  • Li Wang (Postdoctoral Fellow) and his co-leader Xinyu Zhang @Tsinghua University
  • Bing Zhu @Beihang University
  • Lin Yang @Shanghai Jiao Tong University
  • Chris Xiaoxuan Lu @University College London (UCL)
