This is a visualization tool for annotating gaze data from Augmented Reality (AR) scenarios to perform AOI-based eye-tracking analysis. The tool consists of a gaze replay and a timeline visualization, linked together to provide spatial and image-based annotation.
Please install Unity 2020.3.24 to open the project.
The Unity project contains two scenes (located in Assets/Scenes):
- TimelineVisualization
- GazeReplay
First, drag both scenes into the Hierarchy window, then unload GazeReplay. In File > Build Settings, the Scenes in Build should be ordered as follows: TimelineVisualization (index 0), GazeReplay (index 1). When you start the active scene (TimelineVisualization), the GazeReplay scene is loaded as well.
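If you ever need to load the replay scene from a script instead (for example, when automating this setup), a minimal sketch using Unity's SceneManager might look as follows. It assumes the startup behavior described above is additive loading; the SceneLoader class itself is illustrative and not part of the project:

```csharp
using UnityEngine;
using UnityEngine.SceneManagement;

// Hypothetical helper: loads the GazeReplay scene additively so that
// both scenes are active at once, mirroring the setup described above.
public class SceneLoader : MonoBehaviour
{
    void Start()
    {
        // "GazeReplay" has index 1 in the Scenes in Build list.
        if (!SceneManager.GetSceneByName("GazeReplay").isLoaded)
        {
            SceneManager.LoadScene("GazeReplay", LoadSceneMode.Additive);
        }
    }
}
```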
The project consists of an Assets, a Packages, and a ProjectSettings folder. The Assets folder contains the necessary scripts, prefabs, and data. The most important scripts and data are listed below, followed by an illustrative sketch of how they fit together.
```
.
├── Scenes
│   ├── TimelineVisualization   # Unity scene for the timeline visualization
│   └── GazeReplay              # Unity scene for the gaze replay
├── Scripts
│   ├── AOI_Manager             # creates the AOI cubes in the gaze replay
│   ├── FileHandler             # saves the annotated gaze data to a JSON file
│   ├── Frame                   # holds the fixation information
│   ├── FrameAnnotator          # handles fixation annotation in the timeline visualization
│   └── dataHandler             # extracts the gaze data from the CSV files
└── StreamingAssets
    ├── frames                  # thumbnail images of the fixations for each participant
    ├── study_data              # fixation data of each participant
    └── RScript                 # R code to extract fixations from gaze data
```
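As a rough illustration of how these scripts interact, the sketch below shows a hypothetical Frame-like fixation record and a FileHandler-style JSON export. All type and field names here are assumptions for illustration, not the actual implementation:

```csharp
using System;
using System.Collections.Generic;
using System.IO;
using UnityEngine;

// Hypothetical fixation record; the actual Frame script may store
// different fields (timestamps, thumbnail index, gaze target, ...).
[Serializable]
public class FixationRecord
{
    public int participantId;
    public float startTime;   // seconds since the start of the recording
    public float duration;    // fixation duration in seconds
    public Vector3 position;  // 3D fixation position in the room
    public string aoiLabel;   // assigned AOI label, empty if unlabeled
}

// Wrapper type, since JsonUtility cannot serialize a bare list.
[Serializable]
public class AnnotationExport
{
    public List<FixationRecord> fixations = new List<FixationRecord>();
}

public static class AnnotationSaver
{
    // Serializes the annotated fixations with Unity's built-in JsonUtility,
    // analogous to what FileHandler does when exporting annotations.
    public static void Save(AnnotationExport export, string fileName)
    {
        string json = JsonUtility.ToJson(export, prettyPrint: true);
        File.WriteAllText(Path.Combine(Application.persistentDataPath, fileName), json);
    }
}
```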
This visualization simulates the movement and gaze data of the participants. Fixations can be annotated spatially: an AOI cube is placed in the room so that all fixations within the selected region are labeled with a specific AOI.
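Conceptually, this reduces to a containment test: every fixation whose 3D position lies inside the cube's bounds receives that cube's label. The following minimal sketch (reusing the hypothetical FixationRecord from above) illustrates the idea; it is not the actual AOI_Manager code:

```csharp
using System.Collections.Generic;
using UnityEngine;

// Hypothetical AOI cube: labels all fixations whose positions fall
// inside the cube's collider bounds. Note that Bounds is a world-space
// axis-aligned box, so a rotated cube is approximated by its bounding box.
public class AoiCube : MonoBehaviour
{
    public string aoiLabel = "Whiteboard";  // illustrative label

    public void LabelFixations(IEnumerable<FixationRecord> fixations)
    {
        Bounds bounds = GetComponent<BoxCollider>().bounds;
        foreach (FixationRecord fixation in fixations)
        {
            if (bounds.Contains(fixation.position))
            {
                fixation.aoiLabel = aoiLabel;
            }
        }
    }
}
```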
Fixations are extracted from the gaze data. This visualization shows the fixations of the individual participants, each represented by a thumbnail image. Individual fixations can be annotated by clicking on one or more thumbnails. When a thumbnail is selected, the fixated region is mapped in the gaze replay to show where the fixation occurred.
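Because the thumbnails are stored under StreamingAssets, they can be read from disk at runtime. A minimal sketch of loading one thumbnail into a Texture2D is shown below; the per-participant folder layout and file names are assumptions, not the repository's actual structure:

```csharp
using System.IO;
using UnityEngine;

public static class ThumbnailLoader
{
    // Loads a fixation thumbnail from StreamingAssets/frames.
    // Direct file access like this works in the editor and in desktop
    // builds; other platforms require UnityWebRequest instead.
    public static Texture2D Load(int participantId, int fixationIndex)
    {
        string path = Path.Combine(Application.streamingAssetsPath, "frames",
            $"P{participantId}", $"fixation_{fixationIndex}.png");
        byte[] data = File.ReadAllBytes(path);
        var texture = new Texture2D(2, 2);  // size is overwritten by LoadImage
        texture.LoadImage(data);            // decodes the PNG into the texture
        return texture;
    }
}
```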
To conduct the pilot study, we used a HoloLens 2: we created an AR scene in Unity and used the ARETT package to gather eye-tracking data. In parallel, we recorded video with the HoloLens. After the study, we extracted fixations from the gaze data using the ARETT-R package and created a thumbnail image for each fixation from the video recordings. During the study, we also generated a spatial mapping of the environment and created a photogrammetry mesh to simulate the room in the gaze replay. To avoid copyright infringement, we replaced this mesh with a basic cube. For more information about the study, please read our paper.
Paper: Visual Gaze Labeling for Augmented Reality Studies
```bibtex
@inproceedings{oney2023visual,
  title={Visual Gaze Labeling for Augmented Reality Studies},
  author={{\"O}ney, Seyda and Pathmanathan, Nelusa and Becher, Michael and Sedlmair, Michael and Weiskopf, Daniel and Kurzhals, Kuno},
  booktitle={Computer Graphics Forum},
  volume={42},
  number={3},
  pages={373--384},
  url={https://doi.org/10.1111/cgf.14837},
  doi={10.1111/cgf.14837},
  year={2023},
  organization={Wiley Online Library}
}
```