InCrowd-VI: A Realistic Visual–Inertial Dataset for Evaluating Simultaneous Localization and Mapping in Indoor Pedestrian-Rich Spaces for Human Navigation
2024

InCrowd-VI: A Dataset for Indoor Navigation in Crowded Spaces

Sample size: 58 sequences · Reading time: 10 minutes · Evidence: high

Author Information

Author(s): Marziyeh Bamdad, Hans-Peter Hutter, Alireza Darvishy, José Luis Lázaro-Galilea

Primary Institution: Institute of Computer Science, Zurich University of Applied Sciences

Hypothesis

The lack of realistic datasets limits the development of robust SLAM solutions for navigating crowded indoor spaces.

Conclusion

The InCrowd-VI dataset reveals significant performance limitations in state-of-the-art SLAM algorithms when tested in realistic crowded scenarios.

Supporting Evidence

  • The dataset includes 58 sequences with a total trajectory length of 4998.17 m and a recording time of 1 h, 26 min, and 37 s.
  • Ground-truth trajectories are accurate to approximately 2 cm.
  • State-of-the-art SLAM algorithms showed severe performance limitations in crowded scenarios (the sketch after this list illustrates the standard trajectory-error metric).
  • Deep learning-based approaches maintained high pose estimation coverage but failed to achieve real-time processing speeds.
  • The dataset captures challenges such as pedestrian occlusions, varying crowd densities, and complex layouts.
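
Findings like these rest on comparing each algorithm's estimated trajectory against the ~2 cm ground truth, most commonly via absolute trajectory error (ATE). The sketch below is a minimal illustration of that metric, not the authors' evaluation code: it assumes the two trajectories are already associated by timestamp, aligns them with a Umeyama similarity fit, and reports the RMSE; the toy trajectories are stand-ins.

```python
import numpy as np

def umeyama_alignment(est, gt):
    """Least-squares similarity transform (Umeyama 1991) mapping est onto gt.

    est, gt: (N, 3) arrays of timestamp-associated 3D positions.
    Returns scale s, rotation R (3x3), and translation t (3,).
    """
    mu_est, mu_gt = est.mean(axis=0), gt.mean(axis=0)
    e, g = est - mu_est, gt - mu_gt
    cov = g.T @ e / est.shape[0]          # cross-covariance of the two point sets
    U, D, Vt = np.linalg.svd(cov)
    S = np.eye(3)
    if np.linalg.det(U) * np.linalg.det(Vt) < 0:
        S[2, 2] = -1.0                    # guard against reflections
    R = U @ S @ Vt
    s = np.trace(np.diag(D) @ S) / (e ** 2).sum(axis=1).mean()
    t = mu_gt - s * (R @ mu_est)
    return s, R, t

def ate_rmse(est, gt):
    """Absolute trajectory error (RMSE in meters) after similarity alignment."""
    s, R, t = umeyama_alignment(est, gt)
    aligned = s * (R @ est.T).T + t
    return float(np.sqrt(np.mean(np.sum((aligned - gt) ** 2, axis=1))))

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    gt = np.cumsum(rng.normal(0, 0.01, (1000, 3)), axis=0)  # stand-in ground truth
    est = gt + rng.normal(0, 0.02, (1000, 3))               # stand-in SLAM estimate
    print(f"ATE RMSE: {ate_rmse(est, gt):.3f} m")
```

Off-the-shelf tools such as evo implement the same computation with timestamp association and plotting built in.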

Takeaway

This study created a new dataset to help navigation systems, especially those designed for people who are blind or have low vision, work in crowded indoor spaces. It shows that current technology struggles in these busy environments.

Methodology

The dataset was collected using Meta Project Aria glasses in various indoor environments, capturing RGB images, stereo images, and IMU measurements.
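
Project Aria glasses record their streams to VRS files, so consuming a sequence usually means extracting the streams first. Below is a minimal sketch of iterating over one extracted sequence; the directory layout (rgb/, slam_left/, slam_right/, imu.csv) and all file names are assumptions for illustration, not the dataset's documented format.

```python
from dataclasses import dataclass
from pathlib import Path
import csv

@dataclass
class ImuSample:
    t_ns: int                              # device timestamp in nanoseconds
    accel: tuple[float, float, float]      # accelerometer reading, m/s^2
    gyro: tuple[float, float, float]       # gyroscope reading, rad/s

def load_imu(seq_dir: Path) -> list[ImuSample]:
    """Parse a hypothetical imu.csv with columns t_ns, ax, ay, az, gx, gy, gz."""
    samples = []
    with open(seq_dir / "imu.csv", newline="") as f:
        for row in csv.DictReader(f):
            samples.append(ImuSample(
                t_ns=int(row["t_ns"]),
                accel=(float(row["ax"]), float(row["ay"]), float(row["az"])),
                gyro=(float(row["gx"]), float(row["gy"]), float(row["gz"])),
            ))
    return samples

def list_frames(seq_dir: Path, camera: str) -> list[Path]:
    """List one camera stream's frames, sorted by timestamped filename."""
    return sorted((seq_dir / camera).glob("*.png"))

if __name__ == "__main__":
    seq = Path("InCrowd-VI/sequence_01")   # hypothetical sequence directory
    rgb = list_frames(seq, "rgb")
    left = list_frames(seq, "slam_left")
    right = list_frames(seq, "slam_right")
    imu = load_imu(seq)
    print(f"{len(rgb)} RGB frames, {len(left)}/{len(right)} stereo frames, "
          f"{len(imu)} IMU samples")
```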

Limitations

The dataset lacks depth information and focuses solely on indoor environments, which limits its applicability to outdoor navigation.

Participant Demographics

The dataset captures realistic human motion patterns from visually impaired individuals navigating crowded spaces.

Digital Object Identifier (DOI)

10.3390/s24248164

Want to read the original?

Access the complete publication on the publisher's website: https://doi.org/10.3390/s24248164
