Reflection-Aware Reasoning for
Non-Line-of-Sight Pedestrian Localization
Byeonggyu Park1, Mingu Jeon2, and Seong-Woo Kim1
1Seoul National University      2Pusan National University

In complex urban driving environments, critical safety hazards often arise from Non-Line-of-Sight (NLOS) regions occluded by buildings or walls. These occlusions significantly increase the risk of pedestrian collisions, as conventional sensors cannot see around corners. To address this, we propose a practical framework for NLOS pedestrian localization that operates robustly in real-world ego-dynamic environments, i.e., with a moving ego-vehicle.

Abstract

Reliable localization of non-line-of-sight (NLOS) pedestrians is critical for safe urban autonomous driving, yet it remains highly challenging in real-world ego-dynamic environments, where ego-vehicle motion makes radar multipath propagation complex and noisy. In this paper, we present a practical reflection-aware framework for NLOS pedestrian localization with a moving ego-vehicle in real-world outdoor environments. Our framework fuses front-view camera images and 2D radar point clouds to infer reflection orders and reflective surface distributions in bird’s-eye-view space. It then uses physics-guided ray tracing to reconstruct distorted reflection paths and localize the hidden pedestrian. We validate the framework in real-world outdoor scenarios under ego-dynamic conditions. The results demonstrate robust NLOS pedestrian localization performance and provide the first real-world experimental validation of NLOS pedestrian localization with a moving ego-vehicle.

Introduction

NLOS Scenario Illustration

Illustration of NLOS pedestrian detection leveraging reflective wave propagation in urban alleys.

Perceiving pedestrians in NLOS regions is a critical yet formidable challenge for autonomous driving. In narrow urban environments, such as alleys or dense intersections, significant regions often fall outside the vehicle's direct Line-of-Sight (LOS). While these occluded areas make direct detection impossible, they can be indirectly observed by leveraging reflective wave propagation. By analyzing how signals bounce off surrounding structures, it is possible to obtain spatial evidence of hidden targets and infer their precise positions despite the absence of direct visibility.

Radar PCD Challenges

Technical challenges of radar PCD in ego-dynamic environments with multi-bounce reflections.

However, achieving reliable NLOS perception from mmWave radar Point Cloud Data (PCD) presents several technical hurdles. First, radar observations in ego-dynamic conditions are inherently sparse and frequently corrupted by environmental clutter and measurement noise. Second, because radar signals propagate along multi-bounce reflection paths (first, second, and third order), perceived objects appear spatially displaced from their actual locations. Overcoming this distortion requires the system to accurately estimate reflective surfaces and reconstruct physically valid propagation trajectories despite structurally inconsistent noise and contextual ambiguity.
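To see why a bounced return appears displaced, consider the image method for specular reflection: a one-bounce target is observed at the mirror image of its true position across the reflecting wall, and inverting the same mirror recovers the true position. A minimal sketch in BEV coordinates (the wall geometry and positions below are hypothetical, chosen only for illustration):

```python
import numpy as np

def reflect_point(p, wall_p0, wall_p1):
    """Mirror point p across the infinite line through wall_p0-wall_p1."""
    d = wall_p1 - wall_p0
    d = d / np.linalg.norm(d)
    v = p - wall_p0
    # The component of v along the wall stays; the normal component flips.
    proj = wall_p0 + d * np.dot(v, d)
    return 2.0 * proj - p

# A reflective wall along the y-axis at x = 5 (hypothetical geometry).
wall_p0, wall_p1 = np.array([5.0, 0.0]), np.array([5.0, 10.0])

# A hidden pedestrian at (3, 8) is observed, after one bounce,
# as a "ghost" at its mirror image across the wall.
true_pos = np.array([3.0, 8.0])
ghost = reflect_point(true_pos, wall_p0, wall_p1)   # -> (7, 8)

# Applying the same mirror to the ghost recovers the true position.
recovered = reflect_point(ghost, wall_p0, wall_p1)  # -> (3, 8)
```

This is why estimating the reflective surface accurately matters: an error in the wall estimate shifts the recovered position by twice that error along the wall normal.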

Key Contributions

- A reflection-aware framework that fuses front-view camera images and 2D radar point clouds to infer reflection orders and reflective surface distributions in bird's-eye-view space.
- Physics-guided ray tracing that reconstructs distorted reflection paths to localize hidden pedestrians.
- The first real-world experimental validation of NLOS pedestrian localization with a moving ego-vehicle.

Methodology

Overall Framework

Overall framework of the proposed reflection-aware NLOS localization method.

We develop a learning-based framework that performs feature-level fusion between front-view camera images (structural/semantic cues) and radar point clouds (range/reflection characteristics). Our model interprets complex reflection paths through four key stages: radar point segmentation, reflective surface estimation, physics-guided ray tracing, and final localization.
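As a toy illustration of this stage flow, the sketch below chains stub stages on a two-point example. Every rule here is hand-set for illustration (the stage names, the corner threshold, and the fixed wall are our assumptions); in the actual framework these quantities are predicted from fused camera-radar features:

```python
import numpy as np

def infer_reflection_orders(radar_pcd, x_corner=5.0):
    """Stub: BEV points beyond the building corner must be bounced returns."""
    return np.where(radar_pcd[:, 0] > x_corner, 1, 0)

def estimate_surfaces(image_feat):
    """Stub: reflective wall assumed known at x = 5 (regressed in practice)."""
    return 5.0

def trace_and_localize(radar_pcd, orders, x_wall):
    """Fold one-bounce ghost points back across the estimated wall."""
    ghosts = radar_pcd[orders == 1]
    return np.column_stack([2.0 * x_wall - ghosts[:, 0], ghosts[:, 1]])

def localize_nlos(image_feat, radar_pcd):
    orders = infer_reflection_orders(radar_pcd)   # point segmentation
    x_wall = estimate_surfaces(image_feat)        # surface estimation
    return trace_and_localize(radar_pcd, orders, x_wall)  # ray tracing

pcd = np.array([[2.0, 3.0],   # direct (LOS) return
                [7.0, 8.0]])  # one-bounce ghost of a hidden pedestrian
pred = localize_nlos(None, pcd)  # -> [[3.0, 8.0]]
```

The decomposition mirrors the pipeline's logic: segmentation decides which points need unfolding, surface estimation supplies the mirror, and ray tracing applies it.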

Dataset

We evaluate our method on a real-world multimodal dataset collected in a dedicated outdoor testbed designed to emulate narrow urban roads with T-junctions and strong occlusion regions. The environment spans approximately 53.5 m × 33.5 m with road widths of about 5.5 m, enabling realistic NLOS pedestrian scenarios.

Dataset environment and sensor platform

Data acquisition vehicle and outdoor testbed environment used for multimodal dataset collection.

Dataset Summary

The ego-vehicle was equipped with a 77 GHz automotive mmWave radar, a front-view fisheye camera, a 128-channel LiDAR, and wheel encoders, all synchronized at 10 Hz. The dataset covers both ego-static cases, where the vehicle is stopped, and ego-dynamic cases, where the vehicle moves through narrow urban roads under realistic low-speed driving conditions.
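As a minimal illustration of aligning such multi-sensor streams at 10 Hz, the sketch below matches each reference timestamp to the nearest timestamp from another sensor, rejecting matches farther apart than half a frame period. The timestamps are synthetic, and the dataset's actual synchronization mechanism is not specified here:

```python
import numpy as np

def nearest_sync(t_ref, t_other, tol=0.05):
    """For each reference stamp, index of the closest stamp in t_other.

    Returns -1 where nothing lies within tol seconds (half a 10 Hz period).
    Assumes t_other is sorted ascending.
    """
    idx = np.searchsorted(t_other, t_ref)
    idx = np.clip(idx, 1, len(t_other) - 1)
    left, right = t_other[idx - 1], t_other[idx]
    pick = np.where(np.abs(t_ref - left) <= np.abs(t_ref - right), idx - 1, idx)
    ok = np.abs(t_other[pick] - t_ref) <= tol
    return np.where(ok, pick, -1)

# Synthetic example: 10 Hz camera stamps and slightly jittered radar stamps.
cam = np.arange(0.0, 1.0, 0.1)
jitter = np.array([0.004, -0.003, 0.006, -0.002, 0.005,
                   0.001, -0.004, 0.003, -0.001, 0.002])
radar = cam + jitter
matches = nearest_sync(cam, radar)  # pairs camera frame i with radar frame i
```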

Dataset distribution

Distribution of ego-vehicle speed in ego-dynamic scenarios and the number of pedestrians in the dataset.

In ego-dynamic scenarios, the vehicle speed was measured using wheel encoders. Considering only dynamic frames with speeds of at least 5 km/h, the average speed is 10.78 km/h with a standard deviation of 3.06 km/h, and the maximum speed reaches 22.0 km/h. This setup allows us to evaluate NLOS pedestrian localization under practical outdoor driving conditions with realistic motion and occlusion patterns.
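The filtering and statistics described above can be reproduced as follows; the speed readings below are synthetic placeholders, while the figures quoted in the text are the dataset's actual values:

```python
import numpy as np

# Hypothetical wheel-encoder speed readings in km/h, one per frame.
speeds = np.array([0.0, 2.1, 4.9, 5.0, 7.3, 9.8, 12.4, 15.0, 22.0])

# Keep only dynamic frames, defined as speed >= 5 km/h.
dynamic = speeds[speeds >= 5.0]

mean, std, vmax = dynamic.mean(), dynamic.std(), dynamic.max()
```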

Experimental Results

Our framework demonstrates reliable performance in diverse urban driving conditions. By effectively reasoning through complex multipath reflections, the system accurately localizes hidden pedestrians in both stationary and moving ego-vehicle scenarios.

Ego-static

Ego-static Scenario

Ego-dynamic

Ego-dynamic Scenario

Qualitative results: the green bounding boxes show our model's predictions, which align closely with the red ground-truth boxes even under dynamic ego-motion. These results illustrate the effectiveness of the proposed reflection-aware NLOS localization framework in both ego-static and ego-dynamic settings.

Inside the Reflection-Aware Pipeline

Pipeline process

The visualization above shows the intermediate stages of our pipeline: point segmentation, reflective surface estimation, ray tracing, and final localization. The final column shows that, once the full physics-based reflection paths are reconstructed, the predicted pedestrian locations closely match the ground truth.

Our Research Evolution in NLOS Perception

This project builds upon our group's extensive research history in NLOS perception, in which we have explored modalities including radar and acoustics for hidden-object localization.

Acknowledgement

This work was supported by Seoul National University and by the Korea Institute for Advancement of Technology (KIAT) grant funded by the Korea Government (MOTIE). We express our sincere gratitude to Seoul National University for providing experimental equipment and constructing the testbed environment for data collection. We also sincerely thank Hanbi Baek, Heeyeun Kim, Min-Taek Oh, and Keonwoo Kim for their valuable assistance during dataset construction.