Panagiotis D. Ritsos

MEng PhD Essex, FHEA

Senior Lecturer in Visualization

XReality, Visualization and
Analytics (XRVA) Lab

Visualization, Data, Modelling and
Graphics (VDMG) research group,

School of Computer Science
and Engineering,

Bangor University,
Dean Street, Bangor,
Gwynedd, UK, LL57 1UT

Updated November 2024

2025

A. Srinivasan, J. Ellemose, P. W. S. Butcher, P. D. Ritsos, and N. Elmqvist, “Attention-Aware Visualization: Tracking and Responding to User Perception Over Time,” IEEE Transactions on Visualization and Computer Graphics, 2025. We propose the notion of Attention-Aware Visualizations (AAVs) that track the user’s perception of a visual representation over time and feed this information back to the visualization. Such context awareness is particularly useful for ubiquitous and immersive analytics where knowing which embedded visualizations the user is looking at can be used to make visualizations react appropriately to the user’s attention: for example, by highlighting data the user has not yet seen. We can separate the approach into three components: (1) measuring the user’s gaze on a visualization and its parts; (2) tracking the user’s attention over time; and (3) reactively modifying the visual representation based on the current attention metric. In this paper, we present two separate implementations of AAV: a 2D data-agnostic method for web-based visualizations that can use an embodied eyetracker to capture the user’s gaze, and a 3D data-aware one that uses the stencil buffer to track the visibility of each individual mark in a visualization. Both methods provide similar mechanisms for accumulating attention over time and changing the appearance of marks in response. We also present results from a qualitative evaluation studying visual feedback and triggering mechanisms for capturing and revisualizing attention.
[Abstract]   [Details]   [PDF]   [Preprint]   [doi:10.1109/TVCG.2024.3456300]   [Presented at IEEE VIS 2024]
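
The three components above (gaze measurement, attention accumulation over time, reactive restyling) can be illustrated with a small dwell-time model. The TypeScript sketch below is hypothetical: the class, field names and decay constants are assumptions made for illustration, not the paper's implementation.

```typescript
// Hypothetical sketch of per-mark attention accumulation; not the paper's implementation.
// A gaze sample names the mark currently looked at; attention decays for unattended marks.

interface GazeSample {
  markId: string | null; // mark under the user's gaze, or null if none
  timestamp: number;     // milliseconds
}

class AttentionTracker {
  private attention = new Map<string, number>(); // markId -> attention in [0, 1]
  private lastUpdate: number | null = null;

  constructor(
    private readonly gainPerSecond = 0.5,   // how fast a fixated mark accumulates attention
    private readonly decayPerSecond = 0.05, // how fast unattended marks fade back
  ) {}

  /** Register every mark so unseen marks can still be highlighted or de-emphasised. */
  registerMark(markId: string): void {
    if (!this.attention.has(markId)) this.attention.set(markId, 0);
  }

  /** Feed one gaze sample; updates all marks based on elapsed time. */
  update(sample: GazeSample): void {
    const dt = this.lastUpdate === null ? 0 : (sample.timestamp - this.lastUpdate) / 1000;
    this.lastUpdate = sample.timestamp;

    for (const [id, value] of this.attention) {
      const delta = id === sample.markId ? this.gainPerSecond * dt : -this.decayPerSecond * dt;
      this.attention.set(id, Math.min(1, Math.max(0, value + delta)));
    }
  }

  /** Example revisualization policy: marks the user has not yet seen stay fully opaque. */
  opacityFor(markId: string): number {
    const a = this.attention.get(markId) ?? 0;
    return 1 - 0.6 * a; // already-attended marks fade towards 0.4 opacity
  }
}
```

A caller would register each mark's id, push an eye-tracker sample every frame via `update`, and re-style marks with `opacityFor`.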

J. Jackson, P. D. Ritsos, P. W. S. Butcher, and J. C. Roberts, “Path-based Design Model for Constructing and Exploring Alternative Visualisations,” IEEE Transactions on Visualization and Computer Graphics, 2025. We present a path-based design model and system for designing and creating visualisations. Our model represents a systematic approach to constructing visual representations of data or concepts following a predefined sequence of steps. The initial step involves outlining the overall appearance of the visualisation by creating a skeleton structure, referred to as a flowpath. Subsequently, we specify objects, visual marks, properties, and appearance, storing them in a gene. Lastly, we map data onto the flowpath, ensuring suitable morphisms. Alternative designs are created by exchanging values in the gene. For example, designs that share similar traits are created by making small incremental changes to the gene. Our design methodology fosters the generation of diverse creative concepts, space-filling visualisations, and traditional formats like bar charts, circular plots and pie charts. Through our implementation we showcase the model in action. As an example application, we integrate the output visualisations onto a smartwatch and visualisation dashboards. In this article we (1) introduce, define and explain the path model and discuss possibilities for its use, (2) present our implementation, results, and evaluation, and (3) demonstrate and evaluate an application of its use on a mobile watch.
[Abstract]   [Details]   [PDF]   [Preprint]   [doi:10.1109/TVCG.2024.3456323]   [Presented at IEEE VIS 2024]
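
As an illustration of the flowpath idea, the following TypeScript sketch places marks along a polyline path and sizes them by data value, with a small "gene" of swappable visual properties. The names and structure are assumptions made for this example, not the authors' system.

```typescript
// Hypothetical sketch of mapping data onto a flowpath; not the authors' API.

interface Point { x: number; y: number; }

/** A "gene" holding visual properties that alternative designs swap in and out. */
interface Gene {
  mark: "bar" | "dot";
  colour: string;
  thickness: number;
}

/** Linearly interpolate along a polyline flowpath, t in [0, 1]. */
function pointAt(path: Point[], t: number): Point {
  const segLengths = path.slice(1).map((p, i) => Math.hypot(p.x - path[i].x, p.y - path[i].y));
  const total = segLengths.reduce((a, b) => a + b, 0);
  let remaining = t * total;
  for (let i = 0; i < segLengths.length; i++) {
    if (remaining <= segLengths[i]) {
      const f = segLengths[i] === 0 ? 0 : remaining / segLengths[i];
      return {
        x: path[i].x + f * (path[i + 1].x - path[i].x),
        y: path[i].y + f * (path[i + 1].y - path[i].y),
      };
    }
    remaining -= segLengths[i];
  }
  return path[path.length - 1];
}

/** Place one mark per datum along the flowpath; size encodes the value. */
function layout(path: Point[], data: number[], gene: Gene) {
  const max = Math.max(...data);
  return data.map((value, i) => ({
    ...pointAt(path, data.length === 1 ? 0 : i / (data.length - 1)),
    size: gene.thickness * (value / max),
    mark: gene.mark,
    colour: gene.colour,
  }));
}

// A straight flowpath yields a bar-chart-like layout; a circular flowpath a radial one.
const flowpath: Point[] = [{ x: 0, y: 0 }, { x: 100, y: 0 }];
console.log(layout(flowpath, [3, 7, 5], { mark: "bar", colour: "steelblue", thickness: 10 }));
```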

2024

S. Shin, A. Batch, P. W. S. Butcher, P. D. Ritsos, and N. Elmqvist, “The Reality of the Situation: A Survey of Situated Analytics,” IEEE Transactions on Visualization and Computer Graphics, vol. 30, no. 8, pp. 5147–5164, Aug. 2024. The advent of low-cost, accessible, and high-performance augmented reality (AR) has shed light on a situated form of analytics where in-situ visualizations embedded in the real world can facilitate sensemaking based on the user’s physical location. In this work, we identify prior literature in this emerging field with a focus on situated analytics. After collecting 47 relevant situated analytics systems, we classify them using a taxonomy of three dimensions: situating triggers, view situatedness, and data depiction. We then identify four archetypical patterns in our classification using an ensemble cluster analysis. We also assess the level to which these systems support the sensemaking process. Finally, we discuss insights and design guidelines that we learned from our analysis.
[Abstract]   [Details]   [PDF]   [doi:10.1109/TVCG.2023.3285546]   [Presented at IEEE VIS 2023]

P. W. S. Butcher, A. Batch, D. Saffo, B. MacIntyre, N. Elmqvist, and P. D. Ritsos, “Is Native Naïve? Comparing Native Game Engines and WebXR as Immersive Analytics Development Platforms,” IEEE Computer Graphics and Applications, vol. 44, no. 3, pp. 91–98, May 2024. Native game engines have long been the 3D development platform of choice for research in mixed and augmented reality. For this reason they have also been adopted in many immersive visualization and immersive analytics systems and toolkits. However, with the rapid improvements of WebXR and related open technologies, this choice may not always be optimal for future visualization research. In this paper, we investigate common assumptions about native game engines vs. WebXR and find that while native engines still have an advantage in many areas, WebXR is rapidly catching up and is superior for many immersive analytics applications.
[Abstract]   [Details]   [PDF]   [doi:10.1109/MCG.2024.3367422]  

A. Batch, P. W. S. Butcher, P. D. Ritsos, and N. Elmqvist, “Wizualization: A ’Hard Magic’ Visualization System for Immersive and Ubiquitous Analytics,” IEEE Transactions on Visualization and Computer Graphics, vol. 30, no. 1, pp. 507–517, Jan. 2024. What if magic could be used as an effective metaphor to perform data visualization and analysis using speech and gestures while mobile and on-the-go? In this paper, we introduce Wizualization, a visual analytics system for eXtended Reality (XR) that enables an analyst to author and interact with visualizations using such a magic system through gestures, speech commands, and touch interaction. Wizualization is a rendering system for current XR headsets that comprises several components: a cross-device (or Arcane Focuses) infrastructure for signalling and view control (Weave), a code notebook (SpellBook), and a grammar of graphics for XR (Optomancy). The system offers users three modes of input: gestures, spoken commands, and materials. We demonstrate Wizualization and its components using a motivating scenario on collaborative data analysis of pandemic data across time and space.
[Abstract]   [Details]   [PDF]   [doi:10.1109/TVCG.2023.3326580]   [Presented at IEEE VIS 2023]

S. Khan, S. Jones, B. Bach, J. Cha, M. Chen, J. Meikle, J. C. Roberts, J. Thiyagalingam, J. Wood, and P. D. Ritsos, “Feature-Action Design Patterns for Storytelling Visualizations with Time Series Data.” 2024. We present a method to create storytelling visualization with time series data. Many personal decisions nowadays rely on access to dynamic data regularly, as we have seen during the COVID-19 pandemic. It is thus desirable to construct storytelling visualization for dynamic data that is selected by an individual for a specific context. Because of the need to tell data-dependent stories, predefined storyboards based on known data cannot accommodate dynamic data easily nor scale up to many different individuals and contexts. Motivated initially by the need to communicate time series data during the COVID-19 pandemic, we developed a novel computer-assisted method for meta-authoring of stories, which enables the design of storyboards that include feature-action patterns in anticipation of potential features that may appear in dynamically arrived or selected data. In addition to meta-storyboards involving COVID-19 data, we also present storyboards for telling stories about progress in a machine learning workflow. Our approach is complementary to traditional methods for authoring storytelling visualization, and provides an efficient means to construct data-dependent storyboards for different data-streams of similar contexts.
[Abstract]   [Details]   [Available at: arXiv:2402.03116]
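
A feature-action pattern pairs an anticipated data feature with a story action to trigger when that feature appears in the incoming data. The TypeScript sketch below is a hypothetical rendering of that idea; the rule names and detectors are illustrative, not the authors' specification.

```typescript
// Hypothetical sketch of feature-action rules for time series storytelling.

type TimeSeries = { date: string; value: number }[];

interface FeatureActionRule {
  name: string;
  detect: (series: TimeSeries) => boolean; // does the anticipated feature appear?
  act: (series: TimeSeries) => string;     // story fragment to emit when it does
}

const rules: FeatureActionRule[] = [
  {
    name: "new-peak",
    detect: s => s.length > 1 && s[s.length - 1].value > Math.max(...s.slice(0, -1).map(d => d.value)),
    act: s => `A new peak of ${s[s.length - 1].value} was recorded on ${s[s.length - 1].date}.`,
  },
  {
    name: "sustained-decline",
    detect: s => s.length >= 4 && s.slice(-4).every((d, i, w) => i === 0 || d.value < w[i - 1].value),
    act: () => "Values have now fallen for three consecutive periods.",
  },
];

/** Apply the storyboard to whichever data stream the reader has selected. */
function tellStory(series: TimeSeries): string[] {
  return rules.filter(r => r.detect(series)).map(r => r.act(series));
}

console.log(tellStory([
  { date: "2024-01-01", value: 10 },
  { date: "2024-01-08", value: 14 },
  { date: "2024-01-15", value: 21 },
]));
```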

D. Archambault, F. McGee, N. Reinoso-Schiller, T. von Landesberger, and S. Scheithauer, “Reflections on Pandemic Visualization (Dagstuhl Seminar 24091),” Dagstuhl Reports, vol. 14, no. 2, pp. 191–205, 2024.
[Details]   [PDF]   [doi:10.4230/DagRep.14.2.191]  

2023

P. Coughlan, R. Bellini, A. Bello-Dambatta, R. Dallison, K. Dreyer-Gibney, J. Gallagher, I. Harris, A. McNabola, D. Mitrovic, M. Murali, D. Novara, S. Patil, A. Rigby, P. Ritsos, I. Schestak, A. Singh, N. Walker, and P. Williams, “Researching Green Process Innovation Across Borders and Boundaries Through Collaborative Inquiry,” The Journal of Applied Behavioral Science, vol. 59, no. 4, pp. 556–584, Aug. 2023. Research involving multistakeholder collaborative partnerships is growing, as both academia and funding agencies align their objectives with societal challenges and undertake research in the context of application. In particular, the UN sustainable development goals mandate green process innovation research that transcends disciplinary boundaries. Responding to this opportunity, this article explores the question: how can researchers, as societal stakeholders, collaborate in the design and implementation of a green process innovation research initiative and produce actionable research-based contributions to knowledge? Drawing upon our shared experience of realizing green process innovation, we describe and conceptualize the collaborative inquiry process, reflecting on the interplay of modes of knowledge production and the complementarity of researchers’ roles. We conclude by noting how researchers collaborating in a green process innovation initiative can shape the environment in which Transdisciplinary research (TDR) develops and play different roles enabling breadth and diversity of interaction, depth of disciplinary integration, and production of different types of knowledge.
[Abstract]   [Details]   [PDF]   [doi:10.1177/00218863231194655]  

A. Batch, S. Shin, J. Liu, P. W. S. Butcher, P. D. Ritsos, and N. Elmqvist, “Evaluating View Management for Situated Visualization in Web-based Handheld AR,” Computer Graphics Forum, vol. 42, no. 3, pp. 349–360, Jun. 2023. As visualization makes the leap to mobile and situated settings, where data is increasingly integrated with the physical world using mixed reality, there is a corresponding need for effectively managing the immersed user’s view of situated visualizations. In this paper we present an analysis of view management techniques for situated 3D visualizations in handheld augmented reality: a shadowbox, a world-in-miniature metaphor, and an interactive tour. We validate these view management solutions through a concrete implementation of all techniques within a situated visualization framework built using a web-based augmented reality visualization toolkit, and present results from a user study in augmented reality accessed using handheld mobile devices.
[Abstract]   [Details]   [PDF]   [doi:10.1111/cgf.14835]   [Presented at EG EuroVis 2023]

P. D. Ritsos, S. Khan, S. Jones, B. Bach, J. Meikle, J. C. Roberts, J. Wood, and M. Chen, “Creating storytelling visualizations for the Covid-19 pandemic using Feature-Action Design Patterns,” in Bulletins presented at the IEEE VIS Workshop on Visualization for Pandemic and Emergency Responses 2023 (Vis4PandEmRes), IEEE Conference on Visualization (IEEE VIS 2023), Melbourne, Australia, 2023. In this bulletin video, we summarize a novel technique for authoring storytelling visualization. The technique was developed by one of the teams in the RAMPVIS project, which provided visualization support to epidemiological modeling during the COVID-19 pandemic. The team explored the prevailing approaches, in the UK and internationally, for creating public-facing visualizations related to the pandemic. These ranged from those produced by a number of governments (e.g., the four home nations in the UK), organizations (e.g., WHO, UK ONS), universities (e.g., Johns Hopkins dashboards), media outlets (e.g., FT Coronavirus tracker), and non-commercial web services (e.g., Worldometers). The team concluded that we should complement, but not duplicate, the existing effort, and defined our goal as to inform the public through advanced storytelling visualization.
[Abstract]   [Details]   [PDF]  

M. Chen, A. Abdul-Rahman, D. Archambault, J. Dykes, P. D. Ritsos, A. Slingsby, T. Torsney-Weir, C. Turkay, B. Bach, R. Borgo, A. Brett, H. Fang, R. Jianu, S. Khan, R. S. Laramee, L. Matthews, P. Nguyen, R. Reeve, J. C. Roberts, F. P. Vidal, Q. Wang, J. Wood, and K. Xu, “RAMPVIS: Answering the Challenges of Building Visualization Capabilities for Large-scale Emergency Responses,” in Bulletins presented at the IEEE VIS Workshop on Visualization for Pandemic and Emergency Responses 2023 (Vis4PandEmRes), IEEE Conference on Visualization (IEEE VIS 2023), Melbourne, Australia, 2023. In this bulletin video, we summarize the volunteering activities of a group of visualization researchers who provided support to epidemiological modeling during the COVID-19 pandemic. Epidemiological modeling during a pandemic is a complex and continuous process. The intraoperative workflow entails different visualization tasks at four different levels, i.e., disseminative, observational, analytical, and model-developmental visualization. The visualization volunteers were organized into seven teams, including a generic support team, an analytical support team, a disseminative visualization team, and four modeling support teams. During the volunteering activities, we encountered a few major challenges. We made an effort to address these challenges and gained useful experience.
[Abstract]   [Details]   [PDF]  

P. W. S. Butcher, A. Batch, P. D. Ritsos, and N. Elmqvist, “Don’t Pull the Balrog — Lessons Learned from Designing Wizualization: a Magic-inspired Data Analytics System in XR,” in HybridUI: 1st Workshop on Hybrid User Interfaces: Complementary Interfaces for Mixed Reality Interaction, 2023. This paper presents lessons learned in the design and development of Wizualization, a ubiquitous analytics system for authoring visualizations in WebXR using a magic metaphor. The system is based on a fundamentally hybrid and multimodal approach utilizing AR/XR, gestures, sound, and speech to support the mobile setting. Our lessons include how to overcome mostly technical challenges, such as view management and combining multiple sessions in the same analytical 3D space, but also user-based, design-oriented, and even social ones. Our intention in sharing these teachings is to help fellow travellers navigate the same troubled waters we have traversed.
[Abstract]   [Details]   [PDF]  

A. Rigby, P. W. S. Butcher, R. Bellini, P. Coughlan, A. Mc Nabola, and P. D. Ritsos, “DUVis: A visual analytics tool for supporting a trans-disciplinary project,” in Posters presented at the IEEE Conference on Visualization (IEEE VIS 2023), Melbourne, Australia, 2023. We present DUVis, a visual analytics application developed to support the analysis and appraisal of the transdisciplinary project Dŵr Uisce by internal project managers and external stakeholders. DUVis provides a number of visualizations and additional features to facilitate data exploration of a project’s progress. It presents a map of stakeholders’ activities, and their engagement with each other, as well as outputs, workpackages, their completion status and potential impact. We present our preliminary design and provide a blueprint for further development.
[Abstract]   [Details]   [PDF]  

J. C. Roberts, H. Alnjar, A. E. Owen, and P. D. Ritsos, “A method for Critical and Creative Visualisation Design-Thinking,” in Posters presented at the IEEE Conference on Visualization (IEEE VIS 2023), Melbourne, Australia, 2023. Visualisation design requires critical thought: to understand important facets, investigate design suitability and explore alternatives. But, especially for learners, it can be difficult to structure a critical reflection of creative solutions. We introduce the Critical Design Survey (CDS): a structured method that facilitates visualisation design analysis through reflective and critical thought. Applying the CDS helps structure critical thought and provides a unified method that can be readily taught; learners can actively engage with the process and directly use it to write a critical-thinking report of their design ideas. The CDS contains three steps: Step 1, summarise and write down the essence of the idea. Step 2, perform an in-depth critique (we define 30 questions structured in six perspectives). Step 3, synthesise the ideas, implications, and decide on the next steps. We present the CDS, describe our design process (critical thinking workshops, talk aloud, and student use), and describe our use in teaching visualisation to undergraduate and postgraduate students.
[Abstract]   [Details]   [PDF]  

R. Bellini, P. Coughlan, A. Bello-Dambatta, A. Rigby, P. D. Ritsos, and A. Mc Nabola, “An interactive visualisation tool to manage metadata in engaged research projects, track progress, map stakeholders, and evaluate output, outcomes and impacts,” in EGU General Assembly, Vienna, Austria, 2023. This paper presents the research management experience of a multi-disciplinary team and their reflections on how they responded to these challenges and implemented working solutions. As a team from five disciplines, we reflect on this shared experience gained over a 6.5 year-long EU-funded project. Stimulated by the project complexity, we came to recognise that how we managed the data provided us with an opportunity to collaborate meaningfully and to link in novel ways the contributions of research activities to the outcomes and impacts of the project. In brief, we devised a new research data management approach through which we collated and visualised the data so as to facilitate deeper exploration of the interactions among the researchers, tasks and deliverables.
[Abstract]   [Details]   [PDF]   [doi:10.5194/egusphere-egu23-2630]  

2022

A. M. F. Rigby, P. W. S. Butcher, P. D. Ritsos, and S. D. Patil, “LUCST: A novel toolkit for Land Use Land Cover change assessment in SWAT+ to support flood management decisions,” Environmental Modelling & Software, vol. 156, no. 105469, Aug. 2022. Land Use Land Cover (LULC) change is widely recognised as one of the most important factors impacting the hydrological response of river basins. SWAT+, the latest version of the Soil and Water Assessment Tool, has been used extensively to assess the hydrological impacts of LULC change. However, the process of making and assessing such changes in SWAT+ is often cumbersome and non-intuitive, thereby reducing its usability amongst a wider pool of applied users. We address this issue by developing a user-friendly toolkit, Land Use Change SWAT+ Toolkit (LUCST), that will: (1) allow the end-user to define various LULC change scenarios in their study catchment, (2) run the SWAT+ model with the specified LULC changes, and (3) enable interactive visualisation of the different SWAT+ output variables. A good System Usability Score (79.8) and positive feedback from end-users promise the potential for adopting LUCST in future LULC change studies.
[Abstract]   [Details]   [PDF]   [doi:10.1016/j.envsoft.2022.105469]  

J. Dykes, A. Abdul-Rahman, D. Archambault, B. Bach, R. Borgo, M. Chen, J. Enright, H. Fang, E. E. Firat, E. Freeman, T. Gönen, C. Harris, R. Jianu, N. W. John, S. Khan, A. Lahiff, R. S. Laramee, L. Matthews, S. Mohr, P. H. Nguyen, A. A. M. Rahat, R. Reeve, P. D. Ritsos, J. C. Roberts, A. Slingsby, B. Swallow, T. Torsney-Weir, C. Turkay, R. Turner, F. P. Vidal, Q. Wang, J. Wood, and K. Xu, “Visualization for Epidemiological Modelling: Challenges, Solutions, Reflections & Recommendations,” Philosophical Transactions of the Royal Society A (Special issue on ‘Technical challenges of modelling real-life epidemics and examples of overcoming these’), vol. 380, no. 2233, p. 20210299, Aug. 2022. We report on an ongoing collaboration between epidemiological modellers and visualization researchers by documenting and reflecting upon knowledge constructs – a series of ideas, approaches and methods taken from existing visualization research and practice – deployed and developed to support modelling of the COVID-19 pandemic. Structured independent commentary on these efforts is synthesized through iterative reflection to develop: evidence of the effectiveness and value of visualization in this context; open problems upon which the research communities may focus; guidance for future activity of this type; and recommendations to safeguard the achievements and promote, advance, secure and prepare for future collaborations of this kind. In describing and comparing a series of related projects that were undertaken in unprecedented conditions, our hope is that this unique report, and its rich interactive supplementary materials, will guide the scientific community in embracing visualization in its observation, analysis and modelling of data as well as in disseminating findings. Equally we hope to encourage the visualization community to engage with impactful science in addressing its emerging data challenges. If we are successful, this showcase of activity may stimulate mutually beneficial engagement between communities with complementary expertise to address problems of significance in epidemiology and beyond.
[Abstract]   [Details]   [PDF]   [Preprint]   [doi:10.1098/rsta.2021.0299]  

M. Chen, A. Abdul-Rahman, D. Archambault, J. Dykes, A. Slingsby, P. D. Ritsos, T. Torsney-Weir, C. Turkay, B. Bach, R. Borgo, A. Brett, H. Fang, R. Jianu, S. Khan, R. S. Laramee, P. H. Nguyen, R. Reeve, J. C. Roberts, F. Vidal, Q. Wang, J. Wood, and K. Xu, “RAMPVIS: Answering the Challenges of Building Visualisation Capabilities for Large-scale Emergency Responses,” Epidemics, vol. 39, no. 100569, Jun. 2022. The effort for combating the COVID-19 pandemic around the world has resulted in a huge amount of data, e.g., from testing, contact tracing, modelling, treatment, vaccine trials, and more. In addition to numerous challenges in epidemiology, healthcare, biosciences, and social sciences, there has been an urgent need to develop and provide visualisation and visual analytics (VIS) capacities to support emergency responses under difficult operational conditions. In this paper, we report the experience of a group of VIS volunteers who have been working in a large research and development consortium and providing VIS support to various observational, analytical, model-developmental, and disseminative tasks. In particular, we describe our approaches to the challenges that we have encountered in requirements analysis, data acquisition, visual design, software design, system development, team organisation, and resource planning. By reflecting on our experience, we propose a set of recommendations as the first step towards a methodology for developing and providing rapid VIS capacities to support emergency responses.
[Abstract]   [Details]   [PDF]   [Preprint]   [doi:10.1016/j.epidem.2022.100569]  

J. C. Roberts, P. W. S. Butcher, and P. D. Ritsos, “One View Is Not Enough: Review of and Encouragement for Multiple and Alternative Representations in 3D and Immersive Visualisation,” Computers, vol. 11, no. 2, Feb. 2022. The opportunities for 3D visualisations are huge. People can be immersed inside their data, interface with it in natural ways, and see it in ways that are not possible on a traditional desktop screen. Indeed, 3D visualisations, especially those that are immersed inside head-mounted displays are becoming popular. Much of this growth is driven by the availability, popularity and falling cost of head-mounted displays and other immersive technologies. However, there are also challenges. For example, data visualisation objects can be obscured, important facets missed (perhaps behind the viewer), and the interfaces may be unfamiliar. Some of these challenges are not unique to 3D immersive technologies. Indeed, developers of traditional 2D exploratory visualisation tools would use alternative views, across a multiple coordinated view (MCV) system. Coordinated view interfaces help users explore the richness of the data. For instance, an alphabetical list of people in one view shows everyone in the database, while a map view depicts where they live. Each view provides a different task or purpose. While it is possible to translate some desktop interface techniques into the 3D immersive world, it is not always clear what equivalences would be. In this paper, using several case studies, we discuss the challenges and opportunities for using multiple views in immersive visualisation. Our aim is to provide a set of concepts that will enable developers to perform critical thinking, creative thinking and push the boundaries of what is possible with 3D and immersive visualisation. In summary developers should consider how to integrate many views, techniques and presentation styles, and one view is not enough when using 3D and immersive visualisations.
[Abstract]   [Details]   [PDF]   [doi:10.3390/computers11020020]  

A. M. F. Rigby, P. W. S. Butcher, S. D. Patil, and P. D. Ritsos, “Using AI and big data to optimise land management decisions for reducing river flood risk,” in Data Transformation: Wales Data Nations Accelerator, Cardiff, UK, 2022. Local authorities across Wales are increasingly seeking natural approaches to river flood management, especially the role of land management decisions in reducing peak flows. Physics-based hydrological models, which simulate river flood response to storm events, can provide multi-scenario assessment of land-use changes on floods. However, they require prior calibration of parameters using measured streamflow data, which is not available for many rivers. We investigate how AI and big data can be used to implement hydrological models in river basins with no streamflow data.
[Abstract]   [Details]  

2021

P. W. S. Butcher, N. W. John, and P. D. Ritsos, “VRIA: A Web-based Framework for Creating Immersive Analytics Experiences,” IEEE Transactions on Visualization and Computer Graphics, vol. 27, no. 07, pp. 3213–3225, Jul. 2021. We present <VRIA>, a Web-based framework for creating Immersive Analytics (IA) experiences in Virtual Reality. <VRIA> is built upon WebVR, A-Frame, React and D3.js, and offers a visualization creation workflow which enables users, of different levels of expertise, to rapidly develop Immersive Analytics experiences for the Web. The use of these open-standards Web-based technologies allows us to implement VR experiences in a browser and offers strong synergies with popular visualization libraries, through the HTML Document Object Model (DOM). This makes <VRIA> ubiquitous and platform-independent. Moreover, by using WebVR’s progressive enhancement, the experiences <VRIA> creates are accessible on a plethora of devices. We elaborate on our motivation for focusing on open-standards Web technologies, present the <VRIA> creation workflow and detail the underlying mechanics of our framework. We also report on techniques and optimizations necessary for implementing Immersive Analytics experiences on the Web, discuss scalability implications of our framework, and present a series of use case applications to demonstrate the various features of <VRIA>. Finally, we discuss current limitations of our framework, the lessons learned from its development, and outline further extensions.
[Abstract]   [Details]   [PDF]   [doi:10.1109/TVCG.2020.2965109]   [Presented at IEEE VIS 2020]
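
To give a flavour of the workflow described above, the TypeScript sketch below turns a small declarative chart specification into 3D bar transforms that a WebXR scene could render. The spec shape and field names are assumptions made for illustration; this is not VRIA's actual grammar.

```typescript
// Hypothetical sketch: a declarative spec mapped to 3D bar positions for an immersive scene.

interface BarChartSpec {
  data: { category: string; value: number }[];
  size: { width: number; height: number; depth: number }; // metres in the VR scene
}

interface Bar3D {
  position: [number, number, number]; // x, y, z of the bar's centre
  scale: [number, number, number];
  label: string;
}

function barsFromSpec(spec: BarChartSpec): Bar3D[] {
  const n = spec.data.length;
  const maxValue = Math.max(...spec.data.map(d => d.value));
  const barWidth = spec.size.width / n;

  return spec.data.map((d, i): Bar3D => {
    const height = (d.value / maxValue) * spec.size.height;
    return {
      // centre bars along x, sit them on the floor (y = height / 2)
      position: [i * barWidth - spec.size.width / 2 + barWidth / 2, height / 2, 0],
      scale: [barWidth * 0.8, height, spec.size.depth],
      label: d.category,
    };
  });
}

// Each Bar3D could then be rendered as an entity in a WebXR scene (e.g. an A-Frame box).
const bars = barsFromSpec({
  data: [{ category: "A", value: 3 }, { category: "B", value: 7 }],
  size: { width: 1, height: 0.5, depth: 0.1 },
});
console.log(bars);
```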

J. C. Roberts, J. W. Mearman, P. W. S. Butcher, H. M. Al-Maneea, and P. D. Ritsos, “3D Visualisations Should Not be Displayed Alone - Encouraging a Need for Multivocality in Visualisation,” in Proceedings of the Eurographics Conference in Computer Graphics and Visual Computing (CGVC) 2021, Lincoln, UK, 2021. We believe that 3D visualisations should not be used alone; by concurrently displaying alternative views the user can gain the best understanding of all situations. The different presentations signify manifold meanings and afford different tasks. Natural 3D worlds implicitly tell many stories. For instance, walking into a living room, seeing the TV, types of magazines, pictures on the wall, tells us much about the occupiers: their occupation, standards of living, taste in design, whether they have kids, and so on. How can we similarly create rich and diverse 3D visualisation presentations? How can we create visualisations that allow people to understand different stories from the data? In a multivariate 2D visualisation a developer may coordinate and link many views together to provide exploratory visualisation functionality. But how can this be achieved in 3D and in immersive visualisations? Different visualisation types each have specific uses, and each has the potential to tell or evoke a different story. Through several use-cases, we discuss challenges of 3D visualisation, and present our argument for concurrent and coordinated visualisations of alternative styles, and encourage developers to consider using alternative representations with any 3D view, even if that view is displayed in a virtual, augmented or mixed reality setup.
[Abstract]   [Details]   [PDF]   [Preprint]   [doi:10.2312/cgvc.20211309]  

A. Rigby, S. Patil, and P. D. Ritsos, “A novel toolkit to streamline Land Use Land Cover change assessment in the SWAT+ model to enhance flood management and infrastructure decisions,” in EGU General Assembly 2021, online event, 2021. Land Use Land Cover (LULC) change is widely recognised as one of the most important factors impacting river basin hydrology. It is therefore imperative that the hydrological impacts of various LULC changes are considered for effective flood management strategies and future infrastructure decisions within a catchment. The Soil and Water Assessment Tool (SWAT) has been used extensively to assess the hydrological impacts of LULC change. Areas with assumed homogeneous hydrologic properties, based on their LULC, soil type and slope, make up the basic computational units of SWAT known as the Hydrologic Response Units (HRUs). LULC changes in a catchment are typically modelled by SWAT through alterations to the input files that define the properties of these HRUs. However, to our knowledge at least, the process of making such changes to the SWAT input files is often cumbersome and non-intuitive. This affects the usability of SWAT as a decision support tool amongst a wider pool of applied users (e.g., engineering teams in environmental regulatory agencies and local authorities). In this study, we seek to address this issue by developing a user-friendly toolkit that will: (1) allow the end user to specify, through a Graphical User Interface (GUI), various types of LULC changes at multiple locations within their study catchment, (2) run the SWAT+ model (the latest version of SWAT) with the specified LULC changes, and (3) enable interactive visualisation of the different SWAT+ output variables to quantify the hydrological impacts of these scenarios. Importantly, our toolkit does not require the end user to have any operational knowledge of the SWAT+ model to use it as a decision support tool. Our toolkit will be trialled at 15 catchments in Gwynedd county, Wales, which has experienced multiple occurrences of high flood events, and consequent economic damage, in the recent past. We anticipate this toolkit to be a valuable addition to the decision-making processes of Gwynedd County Council for the planning and development of future flood alleviation schemes as well as other infrastructure projects.
[Abstract]   [Details]   [PDF]   [doi:10.5194/egusphere-egu21-4139]  

J. C. Roberts, P. D. Ritsos, L. Kuncheva, F. Vidal, I. S. Lim, L. ap Cenydd, W. J. Teahan, C. C. Gray, and D. Perkins, “Visualisation Data Modelling Graphics (VDMG) at Bangor,” in Eurographics 2021 - Projects and Labs, 2021. The Visualisation Data Modelling & Graphics (VDMG) research group at Bangor University brings together researchers in visualisation, modelling, data-mining and Artificial Intelligence. Our vision is to help people understand data, depict it visually and deliver enjoyable experiences. We design, develop and evaluate computing solutions that often incorporate AI, machine learning, interaction, underpinned with advanced computing, and are always user-focused. Located in Bangor University – a civic University on the North Wales shoreline that is close to the Snowdonia mountain range and National Park – much of our research is inspired by nature, motivated to be sustainable, and people focused.
[Abstract]   [Details]   [PDF]  

2020

C. C. Gray, D. Perkins, and P. D. Ritsos, “Degree Pictures: Visualizing the university student journey,” Assessment & Evaluation in Higher Education, vol. 20, no. 4, pp. 568–578, Aug. 2020. The field of learning analytics is progressing at a rapid rate. New tools, with ever-increasing number of features and a plethora of datasets that are increasingly utilized demonstrate the evolution and multifaceted nature of the field. In particular, the depth and scope of insight that can be gleaned from analysing related datasets can have a significant, and positive, effect in educational practices. We introduce the concept of degree pictures, a symbolic overview of students’ achievement. Degree pictures are small visualizations that depict graphically 16 categories of overall student achievement, over the duration of a higher education course. They offer a quick summary of students’ achievement and are intended to initiate appropriate responses, such as teaching and pastoral interventions. This can address the subjective nature of assessment, by providing a method for educators to calibrate their own marking practices by showing an overview of any cohort. We present a prototype implementation of degree pictures, which was evaluated within our School of Computer Science, with favourable results.
[Abstract]   [Details]   [PDF]   [doi:10.1080/02602938.2019.1676397]  

B. Williams, P. D. Ritsos, and C. Headleand, “Virtual Forestry Generation: Evaluating Models for Tree Placement in Games,” Computers, vol. 9, no. 1, Mar. 2020. A handful of approaches have been previously proposed to generate procedurally virtual forestry for virtual worlds and computer games, including plant growth models and point distribution methods. However, there has been no evaluation to date which assesses how effective these algorithms are at modelling real-world phenomena. In this paper we tackle this issue by evaluating three algorithms used in the generation of virtual forests – a randomly uniform point distribution method (control), a plant competition model, and an iterative random point distribution technique. Our results show that a plant competition model generated more believable content when viewed from an aerial perspective. Interestingly, however, we also found that a randomly uniform point distribution method produced forestry which was rated higher in playability and photorealism, when viewed from a first-person perspective. We conclude that the objective of the game designer is important to consider when selecting an algorithm to generate forestry, as the algorithms produce forestry which is perceived differently.
[Abstract]   [Details]   [PDF]   [doi:10.3390/computers9010020]  
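
For illustration, the TypeScript sketch below implements one of the three evaluated strategies, a uniformly random point distribution with a simple minimum-spacing rejection test; the parameters and structure are assumptions, not the paper's implementation.

```typescript
// Hypothetical sketch of uniformly random tree placement with minimum-spacing rejection.

interface Tree { x: number; y: number; }

function uniformForest(
  count: number,
  width: number,
  height: number,
  minSpacing = 1.5,    // reject candidates closer than this to an existing tree
  maxAttempts = 10_000,
): Tree[] {
  const trees: Tree[] = [];
  let attempts = 0;
  while (trees.length < count && attempts < maxAttempts) {
    attempts++;
    const candidate = { x: Math.random() * width, y: Math.random() * height };
    const tooClose = trees.some(t => Math.hypot(t.x - candidate.x, t.y - candidate.y) < minSpacing);
    if (!tooClose) trees.push(candidate);
  }
  return trees;
}

// 200 trees scattered over a 100 m x 100 m patch; a plant-competition model would instead
// grow and cull trees iteratively based on neighbourhood pressure.
console.log(uniformForest(200, 100, 100).length);
```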

J. C. Roberts and P. D. Ritsos, “Critical Thinking Sheet (CTS) for Design Thinking in Programming Courses,” in Eurographics 2020 - Education Papers, 2020. We present a quick design process, which encourages learners to sketch their design, reflect on the main algorithm and consider how to implement it. In-depth design processes have their advantages, but often are not practical within the time given to the student, and may not fit the learning outcomes of the module. Without any planning, students often jump into coding without contemplating what they will do, leading to failure or poor design. Our single-sheet method allows the learners to critically think of the challenge and decompose the problem into several subproblems (the appearance, functionality and algorithmic steps of the solution). We have successfully used this technique for three years in a second year computer graphics module, for undergraduate degree students studying Computer Science. We present our method, explain how we use it with second year computer graphics students, and discuss students’ experiences with the method.
[Abstract]   [Details]   [PDF]   [doi:10.2312/eged.20201029]  

R. L. Williams, D. Farmer, J. C. Roberts, and P. D. Ritsos, “Immersive visualisation of COVID-19 UK travel and US happiness data,” in Posters presented at the IEEE Conference on Visualization (IEEE VIS 2020), Virtual Event, 2020. The global COVID-19 pandemic has had a great effect on the lives of everyone, from changing how children are educated to how, or whether at all, we travel, go to work or do our shopping. Consequently, not only has people’s happiness changed throughout the pandemic, but there have been fewer vehicles on the roads. We present work to visualise both US happiness and UK travel data, as examples, in immersive environments. These impromptu visualisations encourage discussion and engagement with these topics, and can help people see the data in an alternative way.
[Abstract]   [Details]   [PDF]  

D. Perkins, C. C. Gray, P. D. Ritsos, and L. I. Kuncheva, “JISC/Bangor University Learning Analytics Project Summary & Case Study,” JISC, UK & Bangor University, Commissioned Report, 2020. Insights into activities we undertake as educators and students have the potential to enhance learning and reduce unintentional consequences for all. Educators have for a long time used data to monitor students and grade them. More recently additional yet still traditional metrics have been added to the available tools in everyday education. The latest generation of information is derived metrics with additional intelligence. This project has developed a Work Pressure metric that can be used by both educator and learner. The focus is on the assessments for a given programme and the Work Pressure that this generates. Additionally, behavioural characteristics are included; these have the potential to have a significant impact upon the individual student journey.
[Abstract]   [Details]   [PDF]  

2019

B. R. Williams, P. D. Ritsos, and C. Headleand, “Evaluating Models for Virtual Forestry Generation and Tree Placement in Games,” in Proceedings of the Eurographics Conference in Computer Graphics and Visual Computing (CGVC) 2019, Bangor, UK, 2019. A handful of approaches have been previously proposed to generate procedurally virtual forestry for virtual worlds and computer games, including plant growth models and point distribution methods. However, there has been no evaluation to date which assesses how effective these algorithms are at modelling real-world phenomena. In this paper we tackle this issue by evaluating three algorithms used in the generation of virtual forests – a randomly uniform point distribution method (control), a plant competition model, and an iterative random point distribution technique. Our results show that a plant competition model generated more believable content when viewed from an aerial perspective. We also found that a randomly uniform point distribution method produced forest visualisations which were rated highest in playability and photorealism, when viewed from a first-person perspective. Our results indicate that when it comes to believability, the relationship between viewing perspective and procedural generation algorithm is more important than previously thought.
[Abstract]   [Details]   [PDF]   [doi:10.2312/cgvc.20191259]   [Best Student Paper]

J. R. Jackson, P. D. Ritsos, and J. C. Roberts, “Towards a tool for the creation of micro-visualisations,” in Proceedings of the Eurographics Conference in Computer Graphics and Visual Computing (CGVC) 2019, Bangor, UK, 2019. As the everyday use of mobile and small screen devices becomes more common, it is necessary to explore how we can visualise data effectively in small design spaces. These screens are often used in situations where it is necessary to convey information in a concise, readable, reliable and visually appealing way. Our work focuses on the design and development of a tool to facilitate the creation and manipulation of new micro-visualisations. The results show that the tool is suitable for creating a large number of outputs quickly and efficiently.
[Abstract]   [Details]   [PDF]   [doi:10.2312/cgvc.20191270]  

S. C. Edwards and P. D. Ritsos, “A Framework for Modelling Human Emotion,” in Workshop on Computational Modeling in Human-Computer Interaction, CHI Conference on Human Factors in Computing Systems (ACM CHI 2019), Glasgow, UK, 2019. This paper describes the design of a modular framework, for constructing models of interacting systems. In particular, systems that can adapt and have different objectives; we also consider that these objectives could be of an emotional/hedonistic form. To that end, we introduce Pask’s conversation theory, and Boyd’s thoughts on decision making under uncertainty. In conclusion we describe modes of studying interacting systems.
[Abstract]   [Details]   [PDF]  

J. W. Mearman, P. W. S. Butcher, P. D. Ritsos, and J. C. Roberts, “Tangible papercraft visualisations for education,” in Workshop on Troubling Innovation: Craft and Computing Across Boundaries Workshop, CHI Conference on Human Factors in Computing Systems (ACM CHI 2019), Glasgow, UK, 2019. We have been exploring how papercraft can be used to create ‘data physicalisations’ of student data, which act as physical artefacts and data sculptures that can be used in discussions. Papercrafting is cheap and quick to produce, and easily disposed of. Papercrafting student data is powerful as it acts as a focal point for discussions about the progression of their students and the effects of any extenuating circumstances. During such meetings teachers often reference spreadsheets and dashboard visualisations to explore the data. They focus and shift their attention to individual students, often commenting on individual performance and circumstances in turn. Tangible depictions, such as the ones we present, can be passed around, facilitating discussions, and can act as a focal-point for conversation. We present several prototypes and discuss our design process.
[Abstract]   [Details]   [PDF]  

J. C. Roberts and P. D. Ritsos, “Critical Thinking Sheets: Encouraging critical thought and sketched implementation design,” in EduCHI 2019 Symposium: Global Perspectives on HCI Education, CHI Conference on Human Factors in Computing Systems (ACM CHI 2019), Glasgow, UK, 2019. Learners are often asked to create an interface as part of their course. For example, they could be asked to “create a calculator”, “develop a stopwatch” or “develop an image processing app”. But students often struggle to know how to start. At the same time, teachers want their students to think critically about their assignments and plan how they will build an interface. We have developed, and used for two academic years, a structured “critical thinking sheet (CTS)”. It is a method to help students consider a problem from different views, and help them critically consider different aspects of the task. The sheet gets the learners to (1) sketch the solution, (2) explain the challenge, (3) detail system components, (4) list algorithmic steps, and (5) explain next steps and issues of implementation. In this paper we introduce the sheet, explain how we have used it, and discuss learner experience.
[Abstract]   [Details]   [PDF]  

P. W. S. Butcher, N. W. John, and P. D. Ritsos, “VRIA - A Framework for Immersive Analytics on the Web,” in Extended Abstracts of the CHI Conference on Human Factors in Computing Systems (ACM CHI 2019), Glasgow, UK, 2019. We report on the design, implementation and evaluation of VRIA, a framework for building immersive analytics (IA) solutions in Web-based Virtual Reality (VR), built upon WebVR, A-Frame, React and D3. The recent emergence of affordable VR interfaces has reignited the interest of researchers and developers in exploring new, immersive ways to visualize data. In particular, the use of open-standards web-based technologies for implementing VR in a browser facilitates the ubiquitous and platform-independent adoption of IA systems. Moreover, such technologies work in synergy with established visualization libraries, through the HTML document object model (DOM). We discuss high-level features of VRIA and present a preliminary user experience evaluation of one of our use cases.
[Abstract]   [Details]   [PDF]   [doi:10.1145/3290607.3312798]  

2018

N. W. John, S. R. Pop, T. W. D. Day, P. D. Ritsos, and C. J. Headleand, “The Implementation and Validation of a Virtual Environment for Training Powered Wheelchair Manoeuvres,” IEEE Transactions on Visualization and Computer Graphics, vol. 24, no. 5, pp. 1867–1878, May 2018. Navigating a powered wheelchair and avoiding collisions is often a daunting task for new wheelchair users. It takes time and practice to gain the coordination needed to become a competent driver and this can be even more of a challenge for someone with a disability. We present a cost-effective virtual reality (VR) application that takes advantage of consumer level VR hardware. The system can be easily deployed in an assessment centre or for home use, and does not depend on a specialized high-end virtual environment such as a Powerwall or CAVE. This paper reviews previous work that has used virtual environments technology for training tasks, particularly wheelchair simulation. We then describe the implementation of our own system and the first validation study carried out using thirty-three able-bodied volunteers. The study results indicate that, at a significance level of 5%, there is an improvement in driving skills from the use of our VR system. We thus have the potential to develop the competency of a wheelchair user whilst avoiding the risks inherent to training in the real world. However, the occurrence of cybersickness is a particular problem in this application that will need to be addressed.
[Abstract]   [Details]   [PDF]   [doi:10.1109/TVCG.2017.2700273]   [Presented at IEEE VR 2018]

J. C. Roberts, P. D. Ritsos, J. Jackson, and C. Headleand, “The explanatory visualization framework: An active learning framework for teaching creative computing using explanatory visualizations,” IEEE Transactions on Visualization and Computer Graphics, vol. 24, no. 1, pp. 791–801, Jan. 2018. Visualizations are nowadays appearing in popular media and are used every day in the workplace. This democratisation of visualization challenges educators to develop effective learning strategies, in order to train the next generation of creative visualization specialists. There is high demand for skilled individuals who can analyse a problem, consider alternative designs, develop new visualizations, and be creative and innovative. Our three-stage framework leads the learner through a series of tasks, each designed to develop different skills necessary for coming up with creative, innovative, effective, and purposeful visualizations. For that, we get the learners to create an explanatory visualization of an algorithm of their choice. By making an algorithm choice, and by following an active-learning and project-based strategy, the learners take ownership of a particular visualization challenge. They become enthusiastic to develop good results and learn different creative skills on their learning journey.
[Abstract]   [Details]   [PDF]   [doi:10.1109/TVCG.2017.2745878]   [Presented at IEEE VIS 2017]

S. Rizou, K. Kenda, D. Kofinas, N. M. Mellios, P. Pergar, P. D. Ritsos, J. Vardakas, K. Kalaboukas, C. Laspidou, M. Senožetnik, and A. Spyropoulou, “Water4Cities: An ICT platform enabling Holistic Surface Water and Groundwater Management for Sustainable Cities,” in Proceedings of 3rd EWaS International Conference, Lefkada, Greece, 2018. To enable effective decision-making at the entire city level, both surface water and groundwater should be viewed as part of the extended urban water ecosystem with its spatiotemporal availability, quantity, quality and competing uses being taken into account. The Water4Cities project aims to build an ICT solution for the monitoring, visualization and analysis of urban water at a holistic urban setting to provide added-value decision support services to multiple water stakeholders. This paper presents the main stakeholders identified, the overall approach and the target use cases, where Water4Cities platform will be tested and validated.
[Abstract]   [Details]   [PDF]  

D. Varghese, J. C. Roberts, and P. D. Ritsos, “Developing a formative visual feedback report for data brokering,” in Workshop on Visual Summarization and Report Generation, IEEE Conference on Visualization (IEEE VIS 2018), Berlin, Germany, 2018. We present the development of a visualisation framework, used to provide formative feedback to clients who engage with data brokering companies. Data brokers receive, clean, store and re-sell data from many clients. However the usage of the data and the brokering process can be improved at source by enhancing the client’s data creation and management processes. We propose to achieve this through providing formative feedback, as a visualisation report, to the client. Working closely with a travel agent data broker, we present a three-part framework, where we (1) evaluate data creation and provision processes of the client, (2) develop metrics for quantitative analytics on the data, (3) aggregate the analytics in a visual report.
[Abstract]   [Details]   [PDF]  

K. Kenda, S. Rizou, N. Mellios, D. Kofinas, P. D. Ritsos, M. Senozetnik, and C. Laspidou, “Smart Water Management for Cities,” in Fragile Earth: Theory Guided Data Science to Enhance Scientific Discovery Workshop of the ACM SIGKDD Conference on Knowledge Discovery and Data Mining (KDD2018), 2018. The deployment of real-world water monitoring and analytics tools is still far behind the growing needs of cities, which are facing constant urbanisation and overgrowth of the population. This paper presents a full-stack data-mining infrastructure for smart water management for cities being developed within the Water4Cities project. The stack is tested in two use cases, the Greek island of Skiathos and the Slovenian capital Ljubljana, each facing its own challenges related to groundwater. The bottom layer of the platform provides a data gathering and provision infrastructure based on IoT standards. The layer is enriched with a dedicated missing data imputation infrastructure, which supports coherent analysis of long-term impacts of urbanisation and population growth on groundwater reserves. A data-driven approach to groundwater level analysis, which is important for decision support in flood and groundwater management, has shown promising results and could replace or complement traditional process-driven models. The data visualization capabilities of the platform expose powerful synergies with data mining and contribute significantly to the design of future decision support systems in water management for cities.
[Abstract]   [Details]   [PDF]  

J. Jackson, P. D. Ritsos, and J. C. Roberts, “Creating Small Unit Based Glyph Visualisations,” in Posters presented at the IEEE Conference on Visualization (IEEE VIS 2018), Berlin, Germany, 2018. Many modern day tasks involve the use of small screens, where users want to see a summary visualisation of an activity. For example, a runner using a smart watch needs to quickly view their progress, heart rate, comparison to previous races, etc. Subsequently, there is a need to portray data to users in small, yet well-defined, spaces. We define this space to be a single self-contained “unit”. In this paper we introduce a glyph visualisation algorithm that creates a diverse range of visualisation designs; each design contains many separate parts, whereupon different parameters can be mapped. Our algorithm uses a path based approach which allows designers to create deterministic, yet unique designs, in a unit space to display multivariate data.
[Abstract]   [Details]   [PDF]  

P. W. S. Butcher, N. W. John, and P. D. Ritsos, “Towards a Framework for Immersive Analytics on the Web,” in Posters presented at the IEEE Conference on Visualization (IEEE VIS 2018), Berlin, Germany, 2018. We present work-in-progress on the design and implementation of a Web framework for building Immersive Analytics (IA) solutions in Virtual Reality (VR). We outline the design of our prototype framework, VRIA, which facilitates the development of VR spaces for IA solutions, which can be accessed via a Web browser. VRIA is built on emerging open-standards Web technologies such as WebVR, A-Frame and React, and supports a variety of interaction devices (e.g., smartphones, head-mounted displays etc.). We elaborate on our motivation for focusing on open-standards Web technologies and provide an overview of our framework. We also present two early visualization components. Finally, we outline further extensions and investigations.
[Abstract]   [Details]   [PDF]  

P. R. Lewis, C. J. Headleand, S. Battle, and P. D. Ritsos, Eds., Artificial Life and Intelligent Agents, vol. 732. Springer International Publishing, 2018. This book constitutes the refereed proceedings of the First International Symposium on Artificial Life and Intelligent Agents, ALIA 2014, held in Bangor, UK, in November 2014. The 10 revised full papers were carefully reviewed and selected from 20 submissions. The papers are organized in topical sections on learning and evolution; human interaction; robotic simulation.
[About]   [Details]   [ISBN:978-3-319-90418-4]  

2017

P. W. Butcher and P. D. Ritsos, “Building Immersive Data Visualizations for the Web,” in Proceedings of International Conference on Cyberworlds (CW’17), Chester, UK, 2017. We present our early work on building prototype applications for Immersive Analytics using emerging standards-based web technologies for VR. For our preliminary investigations we visualize 3D bar charts that attempt to resemble recent physical visualizations built in the visualization community. We explore some of the challenges faced by developers in working with emerging VR tools for the web, and in building effective and informative immersive 3D visualizations.
[Abstract]   [Details]   [PDF]   [doi:10.1109/CW.2017.11]  

J. C. Roberts, P. D. Ritsos, and C. Headleand, “Experience and Guidance for the use of Sketching and low-fidelity Visualisation-design in teaching,” in Pedagogy of Data Visualization Workshop, IEEE Conference on Visualization (VIS), Phoenix, Arizona, USA, 2017. We, like other educators, are keen to develop the next generation of visualisation designers. The use of sketching and low-fidelity designs are becoming popular methods to help developers and students consider many alternative ideas and plan what they should build. But especially within an education setting, there are often many issues that challenge students as they create low-fidelity prototypes. Students can be unwilling to contemplate alternatives, reluctant to use pens and paper, or sketch on paper, and inclined to code the first idea in their mind. In this paper we discuss these issues, and investigate strategies to help increase the breadth of low-fidelity designs, especially for developing data-visualisation tools. We draw together experiences and advice of how we have used the Five Design-Sheets method over eight years, for different assessment styles and across two institutions. This paper would be useful for anyone who wishes to use sketching in their teaching, or to improve their own experiences.
[Abstract]   [Details]   [PDF]  

J. Pereda, P. Murietta-Flores, P. D. Ritsos, and J. C. Roberts, “Tangible User Interfaces as a Pathway for Information Visualisation for Low Digital Literacy in the Digital Humanities,” in 2nd Workshop on Visualization for the Digital Humanities, IEEE Conference on Visualization (VIS), Phoenix, Arizona, USA, 2017. Information visualisation has become a key element for empowering users to answer and produce new questions, make sense and create narratives about specific sets of information. Current technologies, such as Linked Data, have changed how researchers and professionals in the Humanities and the Heritage sector engage with information. Digital literacy is of concern in many sectors, but is especially of concern for Digital Humanities. This is due to the fact that the Humanities and Heritage sector faces an important division based on digital literacy that produces gaps in the way research can be carried out. One way to overcome the challenge of digital literacy and improve access to information can be Tangible User Interfaces (TUIs), which allow a more meaningful and natural pathway for a wide range of users. TUIs make use of physical objects to interact with the computer. In particular, they can facilitate the interaction process between the user and a data visualisation system. This position paper discusses the opportunity to engage with Digital Humanities information via TUIs and data visualisation tools, offering new ways to analyse, investigate and interpret the past.
[Abstract]   [Details]   [PDF]  

P. D. Ritsos, J. Mearman, J. R. Jackson, and J. C. Roberts, “Synthetic Visualizations in Web-based Mixed Reality,” in Immersive Analytics: Exploring Future Visualization and Interaction Technologies for Data Analytics Workshop, IEEE Conference on Visualization (VIS), Phoenix, Arizona, USA, 2017. The way we interact with computers is constantly evolving, with technologies like Mixed/Augmented Reality (MR/AR) and the Internet of Things (IoT) set to change our perception of informational and physical space. In parallel, interest in interacting with data in new ways is driving the investigation of the synergy of these domains with data visualization. We are seeking new ways to contextualize, visualize, interact-with and interpret our data. In this paper we present the notion of Synthetic Visualizations, which enable us to visualize in situ, data embedded in physical objects, using MR. We use a combination of established ‘markers’, such as Quick Response Codes (QR Codes) and Augmented Reality Markers (AR Markers), not only to register objects in physical space, but also to contain data to be visualized, and interchange the type of visualization to be used. We visualize said data in Mixed Reality (MR), using emerging web-technologies and open-standards.
[Abstract]   [Details]   [PDF]  

P. D. Ritsos, J. Jackson, and J. C. Roberts, “Web-based Immersive Analytics in Handheld Augmented Reality,” in Posters presented at the IEEE Conference on Visualization (IEEE VIS 2017), Phoenix, Arizona, USA, 2017. The recent popularity of virtual reality (VR), and the emergence of a number of affordable VR interfaces, have prompted researchers and developers to explore new, immersive ways to visualize data. This has resulted in a new research thrust, known as Immersive Analytics (IA). However, in IA little attention has been given to the paradigms of augmented/mixed reality (AR/MR), where computer-generated and physical objects co-exist. In this work, we explore the use of contemporary web-based technologies for the creation of immersive visualizations for handheld AR, combining D3.js with the open standards-based Argon AR framework and A-frame/WebVR. We argue in favor of using emerging standards-based web technologies as they work well with contemporary visualization tools that are purposefully built for data binding and manipulation.
[Abstract]   [Details]   [PDF]  

J. C. Roberts, C. J. Headleand, and P. D. Ritsos, Five Design-Sheets: Creative Design and Sketching for Computing and Visualisation. Springer, 2017. This book describes a structured sketching methodology to help you create alternative design ideas and sketch them on paper. The Five Design-Sheet method acts as a check-list of tasks, to help you think through the problem, create new ideas and to reflect upon the suitability of each idea. To complement the FdS method, we present practical sketching techniques, discuss problem solving, consider professional and ethical issues of designing interfaces, and work through many examples. Five Design-Sheets: Creative Design and Sketching for Computing and Visualization is useful for designers of computer interfaces, or researchers needing to explore alternative solutions in any field. It is written for anyone who is studying on a computing course and needs to design a computing-interface or create a well-structured design chapter for their dissertation, for example. We do acknowledge that throughout this book we focus on the creation of interactive software tools, and use the case study of building data-visualization tools. We have however, tried to keep the techniques general enough such that it is beneficial for a wide range of people, with different challenges and different situations, and for different applications.
[About]   [Details]   [ISBN:978-3319556260]  

J. C. Roberts, C. Headleand, and P. D. Ritsos, “Half-day Tutorial on Sketching Visualization designs, and using the Five Design-Sheet (FdS) Methodology in Teaching,” in Tutorials at the IEEE Conference on Visualization (IEEE VIS 2017), Phoenix, AZ, USA, 2017. This tutorial leads attendees through sketching designs following the Five Design-Sheet methodology (FdS) and discusses how it can be used in teaching. The first part (before the break) will introduce the FdS, place it in context with other methods, discuss creative thinking and different problem types, explain the benefit of sketching designs, and provide a worked example of the FdS. The second part (after the break) focuses on using the FdS in teaching in Higher Education. We give examples of students’ work, and discuss issues and challenges of using sketching for designing and prototyping in teaching, followed by a question and answer session.
[Abstract]   [Details]   [PDF]  

2016

J. C. Roberts, C. Headleand, and P. D. Ritsos, “Sketching Designs Using the Five Design-Sheet Methodology,” IEEE Transactions on Visualization and Computer Graphics, vol. 22, no. 1, pp. 419–428, Jan. 2016. Sketching designs has been shown to be a useful way of planning and considering alternative solutions. The use of lo-fidelity prototyping, especially paper-based sketching, can save time, money and converge to better solutions more quickly. However, this design process is often viewed to be too informal. Consequently, users do not know how to manage their thoughts and ideas (to first think divergently, to then finally converge on a suitable solution). We present the Five Design Sheet (FdS) methodology. The methodology enables users to create information visualization interfaces through lo-fidelity methods. Users sketch and plan their ideas, helping them express different possibilities, think through these ideas to consider their potential effectiveness as solutions to the task (sheet 1); they create three principal designs (sheets 2, 3 and 4); before converging on a final realization design that can then be implemented (sheet 5). In this article, we present (i) a review of the use of sketching as a planning method for visualization and the benefits of sketching, (ii) a detailed description of the Five Design Sheet (FdS) methodology, and (iii) an evaluation of the FdS using the System Usability Scale, along with a case-study of its use in industry and experience of its use in teaching.
[Abstract]   [Details]   [PDF]   [doi:10.1109/TVCG.2015.2467271]   [Presented at IEEE VIS 2015]

C. J. Headleand, T. Day, S. R. Pop, P. D. Ritsos, and N. W. John, “A Cost-Effective Virtual Environment for Simulating and Training Powered Wheelchairs Manoeuvres,” Proceedings of NextMed/MMVR22, Los Angeles, USA, 2016. Control of a powered wheelchair is often not intuitive, making training of new users a challenging and sometimes hazardous task. Collisions, due to a lack of experience, can result in injury for the user and other individuals. By conducting training activities in virtual reality (VR), we can potentially improve driving skills whilst avoiding the risks inherent to the real world. However, until recently VR technology has been expensive and limited the commercial feasibility of a general training solution. We describe Wheelchair-Rift, a cost effective prototype simulator that makes use of the Oculus Rift head mounted display and the Leap Motion hand tracking device. It has been assessed for face validity by a panel of experts from a local Posture and Mobility Service. Initial results augur well for our cost-effective training solution.
[Abstract]   [Details]   [PDF]   [PMID:27046566]  

J. C. Roberts, J. W. Mearman, P. D. Ritsos, H. C. Miles, A. T. Wilson, D. Perkins, J. R. Jackson, B. Tiddeman, F. Labrosse, B. Edwards, and R. Karl, “Immersive Analytics and Deep Maps – the Next Big Thing for Cultural Heritage & Archaeology,” in Visualization for Digital Humanities Workshop, IEEE Conference on Visualization (VIS), Baltimore, MD, USA, 2016. Archaeologists and cultural heritage experts explore complex multifaceted data that is often highly interconnected. We argue for new ways to interact with this data. Such data analysis provides a ‘grand challenge’ for computer science and heritage researchers: it is big data, multi-dimensional, multi-typed, contains uncertain information, and the questions posed by researchers are often ill-defined (where it is difficult to guarantee an answer). We present two visions (Immersive Analytics, and Deep Mapping) as solutions to allow both expert users and the general public to interact and explore heritage data. We use pre-historic data as a case study, and discuss key technologies that need to develop further, to help accomplish these two visions.
[Abstract]   [Details]   [PDF]  

M. R. Edwards, S. R. Pop, N. W. John, P. D. Ritsos, and N. Avis, “Real-Time Guidance and Anatomical Information by Image Projection onto Patients,” in Eurographics Workshop on Visual Computing for Biology and Medicine (VCBM), 2016. The Image Projection onto Patients (IPoP) system is work in progress intended to assist medical practitioners perform procedures such as biopsies, or provide a novel anatomical education tool, by projecting anatomy and other relevant information from the operating room directly onto a patient’s skin. This approach is not currently used widely in hospitals but has the benefit of providing effective procedure guidance without the practitioner having to look away from the patient. Developmental work towards the alpha-phase of IPoP is presented including tracking methods for tools such as biopsy needles, patient tracking, image registration and problems encountered with the multi-mirror effect.
[Abstract]   [Details]   [PDF]   [doi:10.2312/vcbm.20161270]  

P. W. S. Butcher, J. C. Roberts, and P. D. Ritsos, “Immersive Analytics with WebVR and Google Cardboard,” in Posters presented at the IEEE Conference on Visualization (IEEE VIS 2016), Baltimore, MD, USA, 2016. We present our initial investigation of a low-cost, web-based virtual reality platform for immersive analytics, using a Google Cardboard, with a view to extending to other similar platforms such as Samsung’s Gear VR. Our prototype uses standards-based emerging frameworks, such as WebVR, and explores some of the challenges faced by developers in building effective and informative immersive 3D visualizations, particularly those that attempt to resemble recent physical visualizations built in the community.
[Abstract]   [Details]   [PDF]  

J. C. Roberts, J. Jackson, C. Headleand, and P. D. Ritsos, “Creating Explanatory Visualizations of Algorithms for Active Learning,” in Posters presented at the IEEE Conference on Visualization (IEEE VIS 2016), Baltimore, MD, USA, 2016. Visualizations have been used to explain algorithms to learners, in order to help them understand complex processes. These ‘explanatory visualizations’ can help learners understand computer algorithms and data-structures. But most are created by an educator and merely watched by the learner. In this paper, we explain how we get learners to plan and develop their own explanatory visualizations of algorithms. By actively developing their own visualizations learners gain a deeper insight of the algorithms that they are explaining. These depictions can also help other learners understand the algorithm.
[Abstract]   [Details]   [PDF]  

J. C. Roberts, C. Headleand, and P. D. Ritsos, “Sketching Designs for Data-Visualization using the Five Design-Sheet Methodology,” in Tutorials at the IEEE Conference on Visualization (IEEE VIS 2016), Baltimore, MD, USA, 2016. The tutorial will be useful for anyone who has to create visualization interfaces, and needs to think through different potential ways to display their data. At the end of the tutorial participants will understand techniques to help them be more structured in their ideation. They will be able to sketch interface designs using the Five Design Sheet methodology (FdS). While some developers have already started to use the Five Design-Sheet methodology, this tutorial will start from the beginning and be suitable for any attendee. More information and resources are found at http://fds.design.
[Abstract]   [Details]   [PDF]  

2015

H. C. Miles, A. T. Wilson, F. Labrosse, B. Tiddeman, S. Griffiths, B. Edwards, P. D. Ritsos, J. W. Mearman, K. Möller, R. Karl, and J. C. Roberts, “Alternative Representations of 3D-Reconstructed Heritage Data,” ACM Journal on Computing and Cultural Heritage (JOCCH), vol. 9, no. 1, pp. 4:1–4:18, Nov. 2015. By collecting images of heritage assets from members of the public and processing them to create 3D-reconstructed models, the HeritageTogether project has accomplished the digital recording of nearly 80 sites across Wales, UK. A large amount of data has been collected and produced in the form of photographs, 3D models, maps, condition reports, and more. Here we discuss some of the different methods used to realize the potential of this data in different formats and for different purposes. The data are explored in both virtual and tangible settings, and—with the use of a touch table—a combination of both. We examine some alternative representations of this community-produced heritage data for educational, research, and public engagement applications.
[Abstract]   [Details]   [PDF]   [doi:10.1145/2795233]  

C. C. Gray, P. D. Ritsos, and J. C. Roberts, “Contextual Network Navigation; Situational Awareness for Network Administrators,” in IEEE Symposium on Visualization for Cyber Security (VizSec), Chicago, IL, USA, 2015. One of the goals of network administrators is to identify and block sources of attacks from a network stream. Various tools have been developed to help the administrator identify the IP or subnet to be blocked; however, these tend to be non-visual. Having a good perception of the wider network can help the administrator identify their origin, but while network maps of the Internet can be useful for such endeavors, they are difficult to construct, comprehend and even utilize in an attack, and are often referred to as being “hairballs”. We present a visualization technique that displays pathways back to the attacker; we include all potential routing paths with a best-efforts identification of the commercial relationships involved. These two techniques can potentially highlight common pathways and/or networks to allow faster, more complete resolution to the incident, as well as fragile or incomplete routing pathways to/from a network. They can help administrators re-profile their choice of IP transit suppliers to better serve a target audience.
[Abstract]   [Details]   [PDF]   [doi:10.1109/VIZSEC.2015.7312769]  

J. C. Roberts, C. Headleand, D. Perkins, and P. D. Ritsos, “Personal Visualisation for Learning,” in Personal Visualization: Exploring Data in Everyday Life Workshop, IEEE Conference on Visualization (VIS), Chicago, IL, USA, 2015. Learners have personal data, such as grades, feedback and statistics on how they fare or compare with the class. But data focusing on their personal learning is lacking, as it does not get updated regularly (being updated at the end of a taught session) and the displayed information is generally a single grade. Consequently, it is difficult for students to use this information to adapt their behavior, and help them on their learning journey. Yet, there is a rich set of data that could be captured and help students learn better. What is required is dynamically, regularly updated personal data, that is displayed to students in a timely way. Such ‘personal data’ can be presented to the student through ‘personal visualizations’ that engender ‘personal learning’. In this paper we discuss our journey into developing learning systems and our resulting experience with learners. We present a vision, to integrate new technologies and visualization solutions, in order to encourage and develop personal learning that employs the visualization of personal learning data.
[Abstract]   [Details]   [PDF]  

C. H. Headleand, L. ap Cenydd, L. Priday, P. D. Ritsos, J. C. Roberts, and W. Teahan, “Anthropomorphisation of Software Agents as a Persuasive Tool,” in Understanding Persuasion: HCI as a Medium for Persuasion Workshop, British HCI, 2015. In this position paper, we make an argument for the anthropomorphism of software agents as a persuasive tool. We begin by discussing some of the relevant applications, before providing a brief introduction to the CASA theory of social interaction with computers. We conclude by describing a selection of the evidence for anthropomorphism, and an argument for further research into this area.
[Abstract]   [Details]   [PDF]  

C. C. Gray, J. C. Roberts, and P. D. Ritsos, “Where Can I Go From Here? Drawing Contextual Navigation Maps of the London Underground,” in Posters presented at the IEEE Conference on Visualization (IEEE VIS 2015), Chicago, IL, USA, 2015. Network administrators often wish to ascertain where network attackers are located; therefore it would be useful to display the network map from the context of either the attacker’s potential location or the attacked host. As part of a bigger project we are investigating how to best visualize contextual network data. We use a dataset of station adjacencies with journey times as edge weights, to explore which visualization design is most suitable, and also ascertain the best network shortest-path metric. This short paper presents our initial findings, and a visualization for Contextual Navigation using circular, centered-phylogram projections of the network. Our visualizations are interactive allowing users to explore different scenarios and observe relative distances in the data.
[Abstract]   [Details]   [PDF]  

C. J. Headleand, T. Day, S. R. Pop, P. D. Ritsos, and N. W. John, “Challenges and Technologies for Low Cost Wheelchair Simulation,” in Eurographics Workshop on Visual Computing for Biology and Medicine, 2015. The use of electric wheelchairs is inherently risky, as collisions due to lack of control can result in injury for the user, but also potentially for other pedestrians. Introducing new users to powered chairs via virtual reality (VR) provides one possible solution, as it eliminates the risks inherent to the real world during training. However, traditionally, simulator technology has been too expensive to make VR a financially viable solution. Also, current simulators lack the natural interaction possible in the real world, limiting their operational value. We present the early stages of a VR electric wheelchair simulator built using low-cost, consumer-level gaming hardware. The simulator makes use of the Leap Motion to provide a level of interaction with the virtual world which has not previously been demonstrated in wheelchair training simulators. Furthermore, the Oculus Rift provides an immersive experience suitable for our training application.
[Abstract]   [Details]   [PDF]   [doi:10.2312/vcbm.20151225]  

P. D. Ritsos, M. R. Edwards, I. S. Shergill, and N. W. John, “A Haptics-enabled Simulator for Transperineal Ultrasound-Guided Biopsy,” in Eurographics Workshop on Visual Computing for Biology and Medicine, 2015. We present the development of a transperineal prostate biopsy simulator with high-fidelity haptic feedback. We describe our current prototype, which uses physical props and a Geomagic Touch. In addition, we discuss a method for collecting in vitro axial needle forces, for programming haptic feedback, along with implemented and forthcoming features such as a display of 2D ultrasonic images for targeting, biopsy needle bending, prostate bleeding and calcification. Our ultimate goal is to provide an affordable high-fidelity simulation by integrating contemporary off-the-shelf technology components.
[Abstract]   [Details]   [PDF]   [doi:10.2312/vcbm.20151229]  

2014

J. C. Roberts, P. D. Ritsos, S. K. Badam, D. Brodbeck, J. Kennedy, and N. Elmqvist, “Visualization Beyond the Desktop - the next big thing,” IEEE Computer Graphics and Applications, vol. 34, no. 6, pp. 26–34, Nov. 2014. Visualization is coming of age. With visual depictions being seamlessly integrated into documents, and data visualization techniques being used to understand increasingly large and complex datasets, the term "visualization" is increasingly used in everyday conversations. But we are on a cusp; visualization researchers need to develop and adapt to today’s new devices and tomorrow’s technology. Today, people interact with visual depictions through a mouse. Tomorrow, they’ll be touching, swiping, grasping, feeling, hearing, smelling, and even tasting data. The next big thing is multisensory visualization that goes beyond the desktop.
[Abstract]   [Details]   [PDF]   [doi:10.1109/MCG.2014.82]   [Presented at IEEE VIS 2015]

R. L. S. F. George, P. E. Robins, A. G. Davies, P. D. Ritsos, and J. C. Roberts, “Interactive visual analytics of hydrodynamic flux for the coastal zone,” Environmental Earth Sciences, vol. 72, no. 10, pp. 3753–3766, Nov. 2014. Researchers wish to study the potential impact of sea level rise from climate change, and visual analytic tools can allow scientists to visually examine and explore different possible scenarios from simulation runs. In particular, hydrodynamic flux is calculated to understand the net movement of water; but typically this calculation is tedious and is not easily achieved with traditional visualization and analytic tools. We present a visual analytic method that incorporates a transect profiler and flux calculator. The analytic software is incorporated into our visual analytics tool Vinca, and generates multiple transects, which can be visualized and analysed in several alternative visualizations; users can choose specific transects to compare against real-world data; users can explore how flux changes within a domain. In addition, we report how ocean scientists have used our tool to display multiple views of their data and analyse hydrodynamic flux for the coastal zone.
[Abstract]   [Details]   [PDF]   [doi:10.1007/s12665-014-3283-9]  

P. D. Ritsos, J. W. Mearman, A. Vande Moere, and J. C. Roberts, “Sewn with Ariadne’s Thread - Visualizations for Wearable & Ubiquitous Computing,” in Death of the Desktop Workshop, IEEE Conference on Visualization (VIS), Paris, France, 2014. Lance felt a buzz on his wrist, as Alicia, his wearable, informed him via the bone-conduction ear-piece - ‘You have received an email from Dr Jones about the workshop’. His wristwatch displayed an unread email glyph icon. Lance tapped it and listened to the voice of Dr Jones, talking about the latest experiment. At the same time he scanned through the email attachments, projected in front of his eyes, through his contact lenses. One of the files had a dataset of a carbon femtotube structure
[Abstract]   [Details]   [PDF]  

J. C. Roberts, J. W. Mearman, and P. D. Ritsos, “The desktop is dead, long live the desktop! – Towards a multisensory desktop for visualization,” in Death of the Desktop Workshop, IEEE Conference on Visualization (VIS), Paris, France, 2014. “Le roi est mort, vive le roi!”; or “The King is dead, long live the King” was a phrase originally used for the French throne of Charles VII in 1422, upon the death of his father Charles VI. To stave off civil unrest the governing figures wanted perpetuation of the monarchs. Likewise, while the desktop as-we-know-it is dead (the use of the WIMP interface is becoming obsolete in visualization) it is being superseded by a new type of desktop environment: a multisensory visualization space. This space is still a personal workspace, it’s just a new kind of desk environment. Our vision is that data visualization will become more multisensory, integrating and demanding all our senses (sight, touch, hearing, taste, smell, etc.), to both manipulate and perceive the underlying data and information.
[Abstract]   [Details]   [PDF]  

P. D. Ritsos, A. T. Wilson, H. C. Miles, L. F. Williams, B. Tiddeman, F. Labrosse, S. Griffiths, B. Edwards, K. Möller, R. Karl, and J. C. Roberts, “Community-driven Generation of 3D and Augmented Web Content for Archaeology,” in Eurographics Workshop on Graphics and Cultural Heritage (EGGCH) - Short Papers and Posters, Darmstadt, Germany, 2014, pp. 25–28. Heritage sites (such as prehistoric burial cairns and standing stones) are prolific in Europe; although there is a wish to scan each of these sites, it would be time-consuming to achieve. Citizen science approaches enable us to involve the public in performing a metric survey by capturing images. In this paper, discussing work in progress, we present our automatic process that takes the user’s uploaded photographs, converts them into 3D models and displays them in two presentation platforms – in a web gallery application, using X3D/X3DOM, and in mobile augmented reality, using awe.js.
[Abstract]   [Details]   [PDF]   [doi:10.2312/gch.20141321]  

P. D. Ritsos and J. C. Roberts, “Towards more Visual Analytics in Learning Analytics,” in EuroVis Workshop on Visual Analytics (EuroVA), Swansea, UK, 2014, pp. 61–65. Learning Analytics is the collection, management and analysis of students’ learning. It is used to enable teachers to understand how their students are progressing and for learners to ascertain how well they are performing. Often the data is displayed through dashboards. However, there is a huge opportunity to include more comprehensive and interactive visualizations that provide visual depictions and analysis throughout the lifetime of the learner, monitoring their progress from novices to experts. We therefore encourage researchers to take a comprehensive approach and re-think how visual analytics can be applied to the learning environment, and develop more interactive and exploratory interfaces for the learner and teacher.
[Abstract]   [Details]   [PDF]  

J. C. Roberts, R. T. Walker, L. Roberts, R. S. Laramee, and P. D. Ritsos, “Exploratory Visualization through Copy, Cut and Paste,” in Posters presented at the IEEE Conference on Visualization (VIS), November 9-14, Paris, France, 2014. Our goal is to help oceanographers to visualize and navigate their data over several runs. We have been using parallel coordinate plots to display every data value. Through our copy, cut, paste interactions we aim to enable users to drill-down into specific data points and to explore the datasets in a more expressive way. The method allows users to manipulate the PCP on a ZUI canvas, take copies of the current PCP and paste different subset views.
[Abstract]   [Details]   [PDF]  

R. L. S. F. George, P. D. Ritsos, and J. C. Roberts, “Interactive Oceanographic Visualization using spatially-aggregated Parallel Coordinate Plots,” in Posters presented at EuroVis 2014, June 9-13 , Swansea, Wales, UK, 2014. Visual Analytics interfaces allow ocean scientists to interactively investigate and compare different runs and parameterizations. However, oceanographic models are complex, temporal and the datasets that are generated are huge. Parallel Coordinate Plots can help explore multivariate data such as ocean-science data. Common issues with traditional PCPs, such as clutter and performance, inhibit interactive spatial exploration. We describe techniques that aggregate the PCP based on the spatial nature of the data, and we render the polylines as ranges.
[Abstract]   [Details]   [PDF]  

P. D. Ritsos, “Mixed Reality - A paradigm for perceiving synthetic spaces,” in Real Virtuality, M. Reiche and U. Gehmann, Eds. Transcript-Verlag Bielefeld, 2014, pp. 283–310. As our life becomes more intertwined with technology our capabilities in interacting and communicating with each other take a new form. In the distant past we relied on posted letters and postcards to contact each other, often requiring a lot of days for the correspondence to reach the intended recipient. Our perception of distance from each other – and therefore our world as space – changed with the introduction of telephony. Communicating with distant relatives was easier, albeit associated with physically being present in front of a telephone and, therefore, still locus dependent. Mobile telephony brought even further immediacy of communication. Space matters even less now. We are either within network coverage – but maybe in the cinema and unavailable – or somewhere with poor reception. From being miles and days apart, we now feel like we are mere seconds apart. Our perception of space changes, as our friends and family seem closer, despite the fact they may be, physically, in a location that a century ago would take us weeks to reach with posted mail.
[About]   [Details]   [ISBN:978-3-8376-2608-7]  

2013

P. D. Ritsos, R. Gittins, S. Braun, C. Slater, and J. C. Roberts, “Training Interpreters using Virtual Worlds,” in Transactions on Computational Science XVIII, vol. 7848, Springer Berlin Heidelberg, 2013, pp. 21–40. With the rise in population migration there has been an increased need for professional interpreters who can bridge language barriers and operate in a variety of fields such as business, legal, social and medical. Interpreters require specialized training to cope with the idiosyncrasies of each field and their potential clients need to be aware of professional parlance. We present ‘Project IVY’. In IVY, users can make a selection from over 30 interpreter training scenarios situated in the 3D virtual world. Users then interpret the oral interaction of two avatar actors. In addition to creating different 3D scenarios, we have developed an asset management system for the oral files and permit users (mentors of the trainee interpreters) to easily upload and customize the 3D environment and observe which scenario is being used by a student. In this article we present the design and development of the IVY Virtual Environment and the asset management system. Finally, we discuss our plans for further development.
[Abstract]   [Details]   [PDF]   [doi:10.1007/978-3-642-38803-3_2]  

S. A. Panëels, P. D. Ritsos, P. J. Rodgers, and J. C. Roberts, “Prototyping 3D haptic data visualizations,” Computers and Graphics, vol. 37, no. 3, pp. 179–192, May 2013. Haptic devices are becoming more widely used as hardware becomes available and the cost of both low and high fidelity haptic devices decreases. One of the application areas of haptics is haptic data visualization (HDV). HDV provides functionality by which users can feel and touch data. Blind and partially sighted users can benefit from HDV, as it helps them manipulate and understand information. However, developing any 3D haptic world is difficult, time-consuming and requires skilled programmers. Therefore, systems that enable haptic worlds to be rapidly developed in a simple environment could enable non-computer skilled users to create haptic 3D interactions. In this article we present HITPROTO: a system that enables users, such as mentors or support workers, to quickly create haptic interactions (with an emphasis on HDVs) through a visual programming interface. We describe HITPROTO and include details of the design and implementation. We present the results of a detailed study using postgraduate students as potential mentors, which provides evidence of the usability of HITPROTO. We also present a pilot study of HITPROTO with a blind user. It can be difficult to create prototyping tools and support 3D interactions, therefore we present a detailed list of ‘lessons learnt’ that provides a set of guidelines for developers of other 3D haptic prototyping tools.
[Abstract]   [Details]   [PDF]   [doi:10.1016/j.cag.2013.01.009]  

P. D. Ritsos, S. A. Panëels, P. J. Rodgers, and J. C. Roberts, “Towards a Formalized Process for Creating Haptic Data Visualizations,” in Posters presented at the IEEE Conference on Visualization (VIS), October 15-18, Atlanta, Georgia, USA, 2013. Haptic Data Visualization (HDV) is a novel application of haptics. It provides functionality by which users touch and feel data, making it a useful tool for users with vision impairments. However, creating such visualizations usually requires programming knowledge that support workers and tutors of blind users may not possess. To address this issue we propose a formalized process for creating HDVs using the HITPROTO [5] toolkit, which requires no programming experience. We further illustrate this process using an example HDV.
[Abstract]   [Details]   [PDF]  

J. C. Roberts, L. ap Cenydd, P. D. Ritsos, R. George, W. Teahan, and R. Walker, “Visual Analytics with Storyboarding to engender multivocality and comprehension of Microblog data for Crisis Management,” in The Information Systems Technology Panel Symposium on Visual Analytics (IST-116/RSY-028), Shrivenham, UK, 2013.
[Details]   [PDF]  

S. Braun, C. Slater, R. Gittins, P. D. Ritsos, and J. C. Roberts, “Interpreting in Virtual Reality: designing and developing a 3D virtual world to prepare interpreters and their clients for professional practice,” in New Prospects and Perspectives for Educating Language Mediators, D. Kiraly, S. Hansen-Schirra, and K. Maksymski, Eds. Tuebingen : Gunter Narr, 2013, pp. 93–120. This paper reports on the conceptual design and development of an avatar-based 3D virtual environment in which trainee interpreters and their potential clients (e.g. students and professionals from the fields of law, business, tourism, medicine) can explore and simulate professional interpreting practice. The focus is on business and community interpreting and hence the short consecutive and liaison interpreting modes. The environment is a product of the European collaborative project IVY (Interpreting in Virtual Reality). The paper begins with a state-of-the-art overview of the current uses of ICT in interpreter training (section 2), with a view to showing how the IVY environment has evolved out of existing knowledge of these uses, before exploring how virtual worlds are already being used for pedagogical purposes in fields related to interpreting (section 3). Section 4 then shows how existing knowledge about learning in virtual worlds has fed into the conceptual design of the IVY environment and introduces that environment, its working modes and customised digital content. This is followed by an analysis of the initial evaluation feedback on the first environment prototype (section 5), a discussion of the main pedagogical implications (section 6) and concluding remarks (section 7). The more technical aspects of the IVY environment are described in Ritsos et al. (2012).
[Abstract]   [Details]   [ISBN:978-3-8233-6819-9]  

P. D. Ritsos, N. W. John, and J. C. Roberts, “Standards in Augmented Reality: Towards Prototyping Haptic Medical AR,” in 8th International AR Standards Meeting, 2013. Augmented Reality technology has been used in medical visualization applications in various different ways. Haptics, on the other hand, are a popular method of interacting in Augmented and Virtual Reality environments. We present how reliance on standards benefits the fusion of these technologies, through a series of research themes, carried out in Bangor University, UK (and international partners), as well as within the activities domain of the Research Institute of Visual Computing (RIVIC), UK.
[Abstract]   [Details]   [PDF]  

2012

P. D. Ritsos, R. Gittins, J. C. Roberts, S. Braun, and C. Slater, “Using Virtual Reality for Interpreter-mediated Communication and Training,” in Proceedings of International Conference on Cyberworlds (CW’12), Darmstadt, Germany, 2012, pp. 191–198. As international businesses adopt social media and virtual worlds as mediums for conducting international business, so there is an increasing need for interpreters who can bridge the language barriers, and work within these new spheres. The recent rise in migration (within the EU) has also increased the need for professional interpreters in business, legal, medical and other settings. Project IVY attempts to provide bespoke 3D virtual environments that are tailor made to train interpreters to work in the new digital environments, responding to this increased demand. In this paper we present the design and development of the IVY Virtual Environment. We present past and current design strategies, our implementation progress and our future plans for further development.
[Abstract]   [Details]   [PDF]   [doi:10.1109/CW.2012.34]  

2011

P. D. Ritsos, D. P. Ritsos, and A. S. Gougoulis, “Standards for Augmented Reality: a User Experience perspective,” in 2nd International AR Standards Meeting, 2011. An important aspect of designing and implementing Augmented Reality (AR) applications and services, often disregarded for the sake of simplicity and speed, is the evaluation of such systems, particularly by non-expert users, in real operating conditions. We are strong advocates of the fact that in order to develop successful and highly immersive AR systems that can be adopted in day-to-day scenarios, user assessment and feedback is of paramount importance. Consequently, we also feel that an important fragment of future AR Standardisation should focus on User eXperience (UX) aspects, such as the sense of presence, ergonomics, health and safety, overall usability and product identification. Our paper attempts an examination of these aspects and proposes an adaptive theoretical evaluation framework that can be standardised across the span of AR applications.
[Abstract]   [Details]   [PDF]  

2006

P. D. Ritsos, Architectures for Untethered Augmented Reality Using Wearable Computers, Ph.D. dissertation, Dept. of Electronic Systems Engineering, University of Essex, 2006.

2003

P. D. Ritsos, D. J. Johnston, C. Clark, and A. F. Clark, “Engineering an augmented reality tour guide,” in Eurowearable, 2003. IEE, Birmingham, UK, 2003, pp. 119–124. This paper describes a mobile augmented reality system intended for in situ reconstructions of archaeological sites. The evolution of the system from proof of concept to something approaching a satisfactory ergonomic design is described, as are the various approaches to achieving real-time rendering performance from the accompanying software. Finally, some comments are made concerning the accuracy of such systems.
[Abstract]   [Details]   [PDF]   [doi:10.1049/ic:20030157]  

 

Press

[Go to Top]

Talks & Presentations

  • "Visualization Beyond the Desktop: Immersed in Data, Anywhere, Anytime", Virtual Worlds Symposium, Staffordshire University, Staffordshire, UK, Jun. 2023
  • "STEM and Student Engagement in HE", CELT Conference - Celebrating Excellence in Learning And Teaching, Bangor, Gwynedd, UK, Feb. 2018
  • "Virtual Reality Demonstrator of an Advanced Boiling Water Reactor (VRABWR)", BWR Hub Conference, Bangor, Gwynedd, UK, Feb. 2018
  • "Visualization Beyond the Desktop - the Next Big Thing", Love Data Week, Bangor University, Bangor, Gwynedd, UK, Feb. 2018
  • "Synthetic Visualizations in Web-based Mixed Reality", Immersive Analytics: Exploring Future Visualization and Interaction Technologies for Data Analytics Workshop, IEEE Conference on Visualization (VIS), Phoenix, Arizona, USA, Oct. 2017
  • "Visualization Beyond the Desktop - the next big thing", IEEE Conference on Visualization (VIS 2015), Invited CG&A papers, Chicago, Illinois, USA, Oct. 2015
  • "Visualization Beyond the Desktop - the next big thing", Research Seminar, University of Chester, Chester, UK, Oct. 2015
  • "Sewn with Ariadne’s Thread – Visualizations for Wearable & Ubiquitous Computing", Death of the Desktop Workshop, IEEE Conference on Visualization (VIS 2014), Paris, Nov. 2014
  • "Towards more Visual Analytics in Learning Analytics", Fifth EuroVis Workshop on Visual Analytics (EuroVA), Eurographics Association, Swansea, UK, Jun. 2014
  • "Evaluating Interpreting in Virtual Reality", New Computer Technologies - Animation and Games Workshop, Bangor University, UK, May 2014
  • "Excitement of VisWeek 2013", Visualization and Medical Graphics Group Seminars, Bangor University, UK, Nov. 2013
  • "Haptic Data Visualization”,Visualization and Medical Graphics Group Seminars, Bangor University, UK, Oct. 2013
  • "WeARable Computing - From the Qing Dynasty to Project Glass: Prototypes, Myths, Confusion and Lots of Wires...", Visualization and Medical Graphics Group Seminars, Bangor University, UK, Mar. 2013
  • "Project IVY – Interpreting in Virtual Reality", IVY Dissemination Symposium: Exploiting Emerging Technologies to Prepare Interpreters and their Clients for Professional Practice, Kia Oval, London, UK, Nov. 2012
  • "Project IVY – Interpreting in Virtual Reality", Virtual Learning Technologies 2012, Bangor University, UK, Oct. 2012
  • "Interpreting in Virtual Reality", Virtual Worlds Education Forum, Staffordshire University, UK, Mar. 2012
  • "Project IVY – Interpreting in Virtual Reality – Virtual Environment Development", Creating Second Lives 2011: Blurring Boundaries, Bangor University, UK, Sep. 2011
  • "Project IVY – Interpreting in Virtual Reality – Virtual Environment Development", Visualization and Medical Graphics Group Seminars, Bangor University, UK, Sep. 2011

Vocational & Training Seminars

  • P. D. Ritsos, P. W. S. Butcher, "XReality for Aerospace", Airbus Training Seminars, Skills Factory Programme, Online, UK, Feb. 2022
  • P. D. Ritsos, M. Drakos, C. Vasilatos and N. Fountas, "ActionStreamer System Administrator Training", Training Seminars, Intracom-Telecom S.A., Athens, Greece, Jul. 2010
  • P. D. Ritsos, "Hellas OnLine (HOL) Mediation System Administrator Training",Training Seminars, Hellas OnLine S.A., Athens, Greece, Jul. 2010