Detection of Strong and Weak Moments in Cinematic Virtual Reality Narration with the Use of 3D Eye Tracking

Paweł Kobyliński, Grzegorz Pochwatko

2020. In: ACHI 2020: The Thirteenth International Conference on Advances in Computer-Human Interactions / Jaime Lloret Mauri, Diana Saplacan, Klaudia Çarçani, Prima Oky Dicky Ardiansyah, Simona Vasilache (eds.). Valencia: IARIA, pp. 280-284

International Conference on Advances in Computer-Human Interactions ACHI 2020, Valencia, 2020-11-21 - 2020-11-25

Cinematic Virtual Reality (CVR) is a medium growing in popularity among both filmmakers and researchers. The medium challenges movie and video makers, who must narrate differently than in traditional movies and videos to keep viewers' attention on the right part of the 360-degree scene. To ensure an adequate pace of the medium's development, tools are needed for systematic, reliable, and objective research on narration in CVR. In this short paper, the authors report, for the first time, the full results of an initial empirical test of their recently developed Scaled Aggregated Visual Attention Convergence Index (sVRCa). The index utilizes 3D Eye Tracking (3D ET) data recorded during a CVR experience and makes it possible to measure and describe the effectiveness of any system of attentional cues employed by a CVR creator. The results of the initial test are promising: the method appears to substantially augment the detection of strong and weak moments in CVR narration.
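The abstract does not give the formula behind sVRCa, so the following is only a rough illustration of the general idea, not the authors' method: assuming attentional convergence at a given moment is quantified by how tightly viewers' 3D gaze points cluster around their centroid, a minimal sketch might look like this (the function name, the `scale` parameter, and the inverse-dispersion mapping are all hypothetical choices):

```python
import numpy as np

def convergence_index(gaze_points, scale=1.0):
    """Hypothetical per-frame convergence measure (NOT the published sVRCa).

    gaze_points: (n_viewers, 3) array of 3D gaze intersection points
                 recorded by a 3D eye tracker for one moment of the film.
    Returns a value in (0, 1]; 1.0 means all viewers look at the same point,
    and the value decreases as gaze points disperse across the 360-degree scene.
    """
    gaze_points = np.asarray(gaze_points, dtype=float)
    # Mean Euclidean distance of each viewer's gaze point from the group centroid.
    centroid = gaze_points.mean(axis=0)
    dispersion = np.linalg.norm(gaze_points - centroid, axis=1).mean()
    # Map dispersion to a bounded index: zero dispersion -> 1.0.
    return 1.0 / (1.0 + dispersion / scale)

# Example: a moment where all five viewers fixate the same point scores 1.0,
# while divided attention scores lower.
strong_moment = np.zeros((5, 3))
weak_moment = np.array([[0.0, 0.0, 0.0], [2.0, 0.0, 0.0]])
print(convergence_index(strong_moment))  # 1.0
print(convergence_index(weak_moment))    # 0.5
```

A time series of such per-frame values could then be scanned for peaks (strong, convergent moments) and troughs (weak moments where attentional cues fail), which matches the detection goal described in the abstract.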