Comprehensive VR dataset for machine learning: Head- and eye-centred video and positional data
2024


Sample size: 14 participants
Evidence strength: high

Author Information

Author(s): Alexander Kreß, Markus Lappe, Frank Bremmer

Primary Institution: Philipps University Marburg

Hypothesis

The dataset aims to enhance understanding of visual search behavior and navigation strategies in Virtual Reality environments.

Conclusion

The dataset provides valuable resources for training machine learning models focused on visual search and navigation in VR.

Supporting Evidence

  • The dataset includes over 10 hours of video data from six different VR environments.
  • Participants collected virtual coins in various landscapes, providing a rich context for studying navigation.
  • The data can be used to improve algorithms for gaze estimation and head movement tracking.
  • Annotated gaze data allows for detailed analysis of visual search behavior.
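The paper's exact data schema is not reproduced here, but a common first step with annotated gaze data like this is fixation detection. As a minimal sketch, assuming gaze samples arrive as `(timestamp, x, y)` tuples in degrees of visual angle (a hypothetical layout, not the dataset's documented format), the standard dispersion-threshold (I-DT) algorithm groups consecutive low-dispersion samples into fixations:

```python
def detect_fixations(samples, dispersion_thresh=1.0, min_duration=0.1):
    """Dispersion-threshold (I-DT) fixation detection.

    samples: list of (t, x, y) gaze points, t in seconds, x/y in degrees.
    Returns a list of (start_t, end_t, center_x, center_y) fixations.
    Thresholds are illustrative defaults, not values from the paper.
    """
    def dispersion(window):
        xs = [p[1] for p in window]
        ys = [p[2] for p in window]
        return (max(xs) - min(xs)) + (max(ys) - min(ys))

    fixations = []
    i, n = 0, len(samples)
    while i < n:
        # Grow an initial window spanning at least min_duration seconds.
        j = i
        while j < n and samples[j][0] - samples[i][0] < min_duration:
            j += 1
        if j >= n:
            break
        if dispersion(samples[i:j + 1]) <= dispersion_thresh:
            # Extend the window while dispersion stays under threshold.
            while j + 1 < n and dispersion(samples[i:j + 2]) <= dispersion_thresh:
                j += 1
            window = samples[i:j + 1]
            cx = sum(p[1] for p in window) / len(window)
            cy = sum(p[2] for p in window) / len(window)
            fixations.append((window[0][0], window[-1][0], cx, cy))
            i = j + 1
        else:
            i += 1
    return fixations
```

For example, a recording that dwells at one point and then jumps to another yields two fixations, with the intervening saccade excluded.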

Takeaway

This study recorded extensive video, eye-movement, and positional data from people exploring virtual worlds, which can help machine-learning systems learn how humans search for and find objects.

Methodology

Participants navigated various VR environments while their eye movements and positions were recorded using a VR motion platform and eye-tracking headset.
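Positional recordings of this kind are typically analyzed as timestamped 3-D samples. As a minimal sketch, assuming headset position arrives as `(t, x, y, z)` tuples in seconds and meters (a hypothetical layout, not the dataset's documented format), instantaneous head speed can be derived by finite differences:

```python
import math

def translational_speeds(positions):
    """Compute instantaneous speed (m/s) between consecutive samples.

    positions: list of (t, x, y, z) headset samples, t in seconds,
    coordinates in meters. Returns one speed per consecutive pair.
    """
    speeds = []
    for (t0, x0, y0, z0), (t1, x1, y1, z1) in zip(positions, positions[1:]):
        dt = t1 - t0
        dist = math.dist((x0, y0, z0), (x1, y1, z1))
        speeds.append(dist / dt)
    return speeds
```

Thresholding such speeds is one simple way to segment a session into standing-still and locomotion phases before feeding it to a model.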

Participant Demographics

14 participants, 7 males and 6 females, aged 20 to 30, all with normal or corrected-to-normal vision.

Digital Object Identifier (DOI)

10.1016/j.dib.2024.111187
