Journal paper accepted in IEEE TVCG / VIS2023

Motivating scenario for Wizualization

Our paper “Wizualization: A ‘Hard Magic’ Visualization System for Immersive and Ubiquitous Analytics” has been accepted for publication in IEEE Transactions on Visualization and Computer Graphics and will be presented at IEEE VIS 2023.

In this paper, we introduce Wizualization, a visual analytics system for eXtended Reality (XR) that enables an analyst to author and interact with visualizations through gestures, speech commands, and touch interaction, using a system based on a ‘hard magic’ metaphor. We demonstrate Wizualization and its components using a motivating scenario of collaborative analysis of pandemic data across time and space. We describe the system through a vignette-based scenario, whose actors are shown in the teaser image above.

Reference

A. Batch, P. W. S. Butcher, P. D. Ritsos, and N. Elmqvist, “Wizualization: A ‘Hard Magic’ Visualization System for Immersive and Ubiquitous Analytics,” IEEE Transactions on Visualization and Computer Graphics, vol. 30, no. 1, pp. 507–517, 2024.

Abstract: What if magic could be used as an effective metaphor to perform data visualization and analysis using speech and gestures while mobile and on-the-go? In this paper, we introduce Wizualization, a visual analytics system for eXtended Reality (XR) that enables an analyst to author and interact with visualizations using such a magic system through gestures, speech commands, and touch interaction. Wizualization is a rendering system for current XR headsets that comprises several components: a cross-device (or Arcane Focuses) infrastructure for signalling and view control (Weave), a code notebook (SpellBook), and a grammar of graphics for XR (Optomancy). The system offers users three modes of input: gestures, spoken commands, and materials. We demonstrate Wizualization and its components using a motivating scenario on collaborative data analysis of pandemic data across time and space.
[Abstract]   [Details]   [PDF]   [doi:10.1109/TVCG.2023.3326580]   [Presented at IEEE VIS 2023]