Panagiotis D. Ritsos

MEng PhD Essex, FHEA

Lecturer in Computer Science

Visualization, Modelling and
Graphics (VMG) research group,

School of Computer Science,
Bangor University,
Dean Street, Bangor,
Gwynedd, UK, LL57 1UT

Find our book on Amazon:

fdsBook

Updated August 2017

Journals

J. C. Roberts, P. D. Ritsos, J. Jackson, and C. Headleand, “The explanatory visualization framework: An active learning framework for teaching creative computing using explanatory visualizations,” IEEE Transactions on Visualization and Computer Graphics, vol. x, no. x, pp. xx-xx, Jan. 2018. Visualizations nowadays appear in popular media and are used every day in the workplace. This democratisation of visualization challenges educators to develop effective learning strategies, in order to train the next generation of creative visualization specialists. There is high demand for skilled individuals who can analyse a problem, consider alternative designs, develop new visualizations, and be creative and innovative. Our three-stage framework leads the learner through a series of tasks, each designed to develop different skills necessary for coming up with creative, innovative, effective, and purposeful visualizations. To this end, we get the learners to create an explanatory visualization of an algorithm of their choice. By making an algorithm choice, and by following an active-learning and project-based strategy, the learners take ownership of a particular visualization challenge. They become enthusiastic to develop good results and learn different creative skills on their learning journey.
[Abstract] [PDF] [Video] [doi: 10.1109/TVCG.2017.2745878] [BibTeX]

N. W. John, S. R. Pop, T. W. D. Day, P. D. Ritsos, and C. J. Headleand, “The Implementation and Validation of a Virtual Environment for Training Powered Wheelchair Manoeuvres,” IEEE Transactions on Visualization and Computer Graphics, vol. PP, no. 99, pp. 1–1, 2017. Navigating a powered wheelchair and avoiding collisions is often a daunting task for new wheelchair users. It takes time and practice to gain the coordination needed to become a competent driver, and this can be even more of a challenge for someone with a disability. We present a cost-effective virtual reality (VR) application that takes advantage of consumer-level VR hardware. The system can be easily deployed in an assessment centre or for home use, and does not depend on a specialized high-end virtual environment such as a Powerwall or CAVE. This paper reviews previous work that has used virtual environments technology for training tasks, particularly wheelchair simulation. We then describe the implementation of our own system and the first validation study, carried out with thirty-three able-bodied volunteers. The study results indicate that, at a significance level of 5%, there is an improvement in driving skills from the use of our VR system. We thus have the potential to develop the competency of a wheelchair user whilst avoiding the risks inherent to training in the real world. However, the occurrence of cybersickness is a particular problem in this application that will need to be addressed.
[Abstract] [PDF] [doi:10.1109/TVCG.2017.2700273] [BibTeX]

J. C. Roberts, C. Headleand, and P. D. Ritsos, “Sketching Designs Using the Five Design-Sheet Methodology,” IEEE Transactions on Visualization and Computer Graphics, vol. 22, no. 1, pp. 419–428, Jan. 2016. Sketching designs has been shown to be a useful way of planning and considering alternative solutions. The use of lo-fidelity prototyping, especially paper-based sketching, can save time and money, and converge to better solutions more quickly. However, this design process is often viewed as too informal. Consequently, users do not know how to manage their thoughts and ideas (to first think divergently, and then converge on a suitable solution). We present the Five Design Sheet (FdS) methodology. The methodology enables users to create information visualization interfaces through lo-fidelity methods. Users sketch and plan their ideas, helping them express different possibilities and think through these ideas to consider their potential effectiveness as solutions to the task (sheet 1); they create three principal designs (sheets 2, 3 and 4); before converging on a final realization design that can then be implemented (sheet 5). In this article, we present (i) a review of the use of sketching as a planning method for visualization and the benefits of sketching, (ii) a detailed description of the Five Design Sheet (FdS) methodology, and (iii) an evaluation of the FdS using the System Usability Scale, along with a case study of its use in industry and experience of its use in teaching.
[Abstract] [PDF] [Video] [doi:10.1109/TVCG.2015.2467271] [BibTeX]

H. C. Miles, A. T. Wilson, F. Labrosse, B. Tiddeman, S. Griffiths, B. Edwards, P. D. Ritsos, J. W. Mearman, K. Möller, R. Karl, and J. C. Roberts, “Alternative Representations of 3D-Reconstructed Heritage Data,” ACM Journal on Computing and Cultural Heritage (JOCCH), vol. 9, no. 1, pp. 4:1–4:18, Nov. 2015. By collecting images of heritage assets from members of the public and processing them to create 3D-reconstructed models, the HeritageTogether project has accomplished the digital recording of nearly 80 sites across Wales, UK. A large amount of data has been collected and produced in the form of photographs, 3D models, maps, condition reports, and more. Here we discuss some of the different methods used to realize the potential of this data in different formats and for different purposes. The data are explored in both virtual and tangible settings, and—with the use of a touch table—a combination of both. We examine some alternative representations of this community-produced heritage data for educational, research, and public engagement applications.
[Abstract] [PDF] [doi:10.1145/2795233] [BibTeX]

J. C. Roberts, P. D. Ritsos, S. K. Badam, D. Brodbeck, J. Kennedy, and N. Elmqvist, “Visualization Beyond the Desktop - the next big thing,” IEEE Computer Graphics and Applications, vol. 34, no. 6, pp. 26–34, Nov. 2014. Visualization is coming of age. With visual depictions being seamlessly integrated into documents, and data visualization techniques being used to understand increasingly large and complex datasets, the term "visualization" is increasingly used in everyday conversations. But we are on a cusp; visualization researchers need to develop and adapt to today’s new devices and tomorrow’s technology. Today, people interact with visual depictions through a mouse. Tomorrow, they’ll be touching, swiping, grasping, feeling, hearing, smelling, and even tasting data. The next big thing is multisensory visualization that goes beyond the desktop.
[Abstract] [PDF] [Video] [doi:10.1109/MCG.2014.82] [BibTeX]

R. L. S. F. George, P. E. Robins, A. G. Davies, P. D. Ritsos, and J. C. Roberts, “Interactive visual analytics of hydrodynamic flux for the coastal zone,” Environmental Earth Sciences, vol. 72, no. 10, pp. 3753–3766, Nov. 2014. Researchers wish to study the potential impact of sea level rise from climate change, and visual analytic tools can allow scientists to visually examine and explore different possible scenarios from simulation runs. In particular, hydrodynamic flux is calculated to understand the net movement of water; but typically this calculation is tedious and is not easily achieved with traditional visualization and analytic tools. We present a visual analytic method that incorporates a transect profiler and flux calculator. The analytic software is incorporated into our visual analytics tool Vinca, and generates multiple transects, which can be displayed and analysed in several alternative visualizations; users can choose specific transects to compare against real-world data, and can explore how flux changes within a domain. In addition, we report how ocean scientists have used our tool to display multiple views of their data and analyse hydrodynamic flux for the coastal zone.
[Abstract] [PDF] [doi:10.1007/s12665-014-3283-9] [BibTeX]

P. D. Ritsos, R. Gittins, S. Braun, C. Slater, and J. C. Roberts, “Training Interpreters using Virtual Worlds,” in Transactions on Computational Science XVIII, vol. 7848, Springer Berlin Heidelberg, 2013, pp. 21–40. With the rise in population migration there has been an increased need for professional interpreters who can bridge language barriers and operate in a variety of fields such as business, legal, social and medical. Interpreters require specialized training to cope with the idiosyncrasies of each field, and their potential clients need to be aware of professional parlance. We present ‘Project IVY’. In IVY, users can make a selection from over 30 interpreter training scenarios situated in the 3D virtual world. Users then interpret the oral interaction of two avatar actors. In addition to creating different 3D scenarios, we have developed an asset management system for the oral files, which permits users (mentors of the trainee interpreters) to easily upload and customize the 3D environment and to observe which scenario is being used by a student. In this article we present the design and development of the IVY Virtual Environment and the asset management system. Finally, we discuss our plans for further development.
[Abstract] [PDF] [doi:10.1007/978-3-642-38803-3_2] [BibTeX]

S. A. Panëels, P. D. Ritsos, P. J. Rodgers, and J. C. Roberts, “Prototyping 3D haptic data visualizations,” Computers and Graphics, vol. 37, no. 3, pp. 179–192, 2013. Haptic devices are becoming more widely used as hardware becomes available and the cost of both low and high fidelity haptic devices decreases. One of the application areas of haptics is haptic data visualization (HDV). HDV provides functionality by which users can feel and touch data. Blind and partially sighted users can benefit from HDV, as it helps them manipulate and understand information. However, developing any 3D haptic world is difficult, time-consuming and requires skilled programmers. Therefore, systems that enable haptic worlds to be rapidly developed in a simple environment could enable users without programming skills to create haptic 3D interactions. In this article we present HITPROTO: a system that enables users, such as mentors or support workers, to quickly create haptic interactions (with an emphasis on HDVs) through a visual programming interface. We describe HITPROTO and include details of the design and implementation. We present the results of a detailed study using postgraduate students as potential mentors, which provides evidence of the usability of HITPROTO. We also present a pilot study of HITPROTO with a blind user. It can be difficult to create prototyping tools that support 3D interactions; we therefore present a detailed list of ‘lessons learnt’ that provides a set of guidelines for developers of other 3D haptic prototyping tools.
[Abstract] [PDF] [doi:10.1016/j.cag.2013.01.009] [BibTeX]

Conference Papers (refereed)

P. W. Butcher and P. D. Ritsos, “Building Immersive Data Visualizations for the Web,” in Proceedings of the International Conference on Cyberworlds (CW’17), Chester, UK, 2017. We present our early work on building prototype applications for Immersive Analytics using emerging standards-based web technologies for VR. For our preliminary investigations we visualize 3D bar charts that attempt to resemble recent physical visualizations built in the visualization community. We explore some of the challenges faced by developers in working with emerging VR tools for the web, and in building effective and informative immersive 3D visualizations.
[Abstract] [PDF] [URL] [BibTeX]

C. J. Headleand, T. Day, S. R. Pop, P. D. Ritsos, and N. W. John, “A Cost-Effective Virtual Environment for Simulating and Training Powered Wheelchairs Manoeuvres,” in Proceedings of NextMed/MMVR22, 2016. Control of a powered wheelchair is often not intuitive, making training of new users a challenging and sometimes hazardous task. Collisions due to a lack of experience can result in injury for the user and other individuals. By conducting training activities in virtual reality (VR), we can potentially improve driving skills whilst avoiding the risks inherent to the real world. However, until recently VR technology has been expensive, which limited the commercial feasibility of a general training solution. We describe Wheelchair-Rift, a cost-effective prototype simulator that makes use of the Oculus Rift head-mounted display and the Leap Motion hand-tracking device. It has been assessed for face validity by a panel of experts from a local Posture and Mobility Service. Initial results augur well for our cost-effective training solution.
[Abstract] [PDF] [URL] [BibTeX]

C. C. Gray, P. D. Ritsos, and J. C. Roberts, “Contextual Network Navigation: Situational Awareness for Network Administrators,” in IEEE Symposium on Visualization for Cyber Security (VizSec), Chicago, IL, USA, 2015. One of the goals of network administrators is to identify and block sources of attacks from a network stream. Various tools have been developed to help the administrator identify the IP or subnet to be blocked; however, these tend to be non-visual. Having a good perception of the wider network can help the administrator identify the origin of an attack, but while network maps of the Internet can be useful for such endeavors, they are difficult to construct, comprehend and even utilize in an attack, and are often referred to as being “hairballs”. We present a visualization technique that displays pathways back to the attacker; we include all potential routing paths with a best-efforts identification of the commercial relationships involved. These two techniques can potentially highlight common pathways and/or networks to allow faster, more complete resolution of the incident, as well as fragile or incomplete routing pathways to/from a network. They can help administrators re-profile their choice of IP transit suppliers to better serve a target audience.
[Abstract] [PDF] [URL] [BibTeX]

J. C. Roberts, L. ap Cenydd, P. D. Ritsos, R. George, W. Teahan, and R. Walker, “Visual Analytics with Storyboarding to engender multivocality and comprehension of Microblog data for Crisis Management,” in The Information Systems Technology Panel Symposium on Visual Analytics (IST-116/RSY-028), 2013.
[Abstract] [PDF] [BibTeX]

P. D. Ritsos, R. Gittins, J. C. Roberts, S. Braun, and C. Slater, “Using Virtual Reality for Interpreter-mediated Communication and Training,” in Cyberworlds (CW), 2012 International Conference on, 2012, pp. 191–198. As international businesses adopt social media and virtual worlds as media for conducting business, there is an increasing need for interpreters who can bridge the language barriers and work within these new spheres. The recent rise in migration (within the EU) has also increased the need for professional interpreters in business, legal, medical and other settings. Project IVY attempts to provide bespoke 3D virtual environments that are tailor-made to train interpreters to work in the new digital environments, responding to this increased demand. In this paper we present the design and development of the IVY Virtual Environment. We present past and current design strategies, our implementation progress and our future plans for further development.
[Abstract] [PDF] [doi:10.1109/CW.2012.34] [BibTeX]

P. D. Ritsos, D. J. Johnston, C. Clark, and A. F. Clark, “Engineering an augmented reality tour guide,” in Eurowearable, 2003. IEE, 2003, pp. 119–124. This paper describes a mobile augmented reality system intended for in situ reconstructions of archaeological sites. The evolution of the system from proof of concept to something approaching a satisfactory ergonomic design is described, as are the various approaches to achieving real-time rendering performance from the accompanying software. Finally, some comments are made concerning the accuracy of such systems.
[Abstract] [PDF] [BibTeX]

P. D. Ritsos, D. J. Johnston, C. Clark, and A. F. Clark, “Mobile augmented reality archaeological reconstruction,” in 30th UNESCO Anniversary Virtual Congress: Architecture, Tourism and World Heritage, 2002. This paper describes a mobile augmented reality system, based around a wearable computer, that can be used to provide in situ tours around archæological sites. Ergonomic considerations, intended to make the system usable by the general public, are described. The accuracy of the system is assessed, as this is critical for in situ reconstructions.
[Abstract] [PDF] [BibTeX]

Workshop Papers (refereed)

J. C. Roberts, P. D. Ritsos, and C. Headleand, “Experience and Guidance for the use of Sketching and low-fidelity Visualisation-design in teaching,” in Pedagogy of Data Visualization Workshop, IEEE Conference on Visualization (VIS), October 1-6, Phoenix, Arizona, USA, 2017. We, like other educators, are keen to develop the next generation of visualisation designers. Sketching and low-fidelity designs are becoming popular methods to help developers and students consider many alternative ideas and plan what they should build. But, especially within an education setting, there are often many issues that challenge students as they create low-fidelity prototypes. Students can be unwilling to contemplate alternatives, reluctant to use pens and paper or to sketch, and inclined to code the first idea that comes to mind. In this paper we discuss these issues, and investigate strategies to help increase the breadth of low-fidelity designs, especially for developing data-visualisation tools. We draw together experiences and advice from using the Five Design-Sheet method over eight years, for different assessment styles and across two institutions. This paper will be useful for anyone who wishes to use sketching in their teaching, or to improve their own practice.
[Abstract] [PDF] [URL] [BibTeX]

J. Pereda, P. Murietta-Flores, P. D. Ritsos, and J. C. Roberts, “Tangible User Interfaces as a Pathway for Information Visualisation for Low Digital Literacy in the Digital Humanities,” in 2nd Workshop on Visualization for the Digital Humanities, IEEE Conference on Visualization (VIS), October 1-6, Phoenix, Arizona, USA, 2017. Information visualisation has become a key element for empowering users to answer and produce new questions, make sense of, and create narratives about, specific sets of information. Current technologies, such as Linked Data, have changed how researchers and professionals in the Humanities and the Heritage sector engage with information. Digital literacy is of concern in many sectors, but especially so in the Digital Humanities, because the Humanities and Heritage sector faces an important division based on digital literacy that produces gaps in the way research can be carried out. One way to overcome the challenge of digital literacy and improve access to information is through Tangible User Interfaces (TUIs), which offer a more meaningful and natural pathway for a wide range of users. TUIs make use of physical objects to interact with the computer. In particular, they can facilitate the interaction process between the user and a data visualisation system. This position paper discusses the opportunity to engage with Digital Humanities information via TUIs and data visualisation tools, offering new ways to analyse, investigate and interpret the past.
[Abstract] [PDF] [URL] [BibTeX]

P. D. Ritsos, J. Mearman, J. R. Jackson, and J. C. Roberts, “Synthetic Visualizations in Web-based Mixed Reality,” in Immersive Analytics: Exploring Future Visualization and Interaction Technologies for Data Analytics Workshop, IEEE Conference on Visualization (VIS), October 1-6, Phoenix, Arizona, USA, 2017. The way we interact with computers is constantly evolving, with technologies like Mixed/Augmented Reality (MR/AR) and the Internet of Things (IoT) set to change our perception of informational and physical space. In parallel, interest in interacting with data in new ways is driving the investigation of the synergy of these domains with data visualization. We are seeking new ways to contextualize, visualize, interact with and interpret our data. In this paper we present the notion of Synthetic Visualizations, which enable us to visualize, in situ, data embedded in physical objects, using MR. We use a combination of established ‘markers’, such as Quick Response Codes (QR Codes) and Augmented Reality Markers (AR Markers), not only to register objects in physical space, but also to carry the data to be visualized and to specify the type of visualization to be used. We visualize this data in Mixed Reality (MR), using emerging web technologies and open standards.
[Abstract] [PDF] [URL] [BibTeX]

J. C. Roberts, J. W. Mearman, P. D. Ritsos, H. C. Miles, A. T. Wilson, D. Perkins, J. R. Jackson, B. Tiddeman, F. Labrosse, B. Edwards, and R. Karl, “Immersive Analytics and Deep Maps – the Next Big Thing for Cultural Heritage & Archaeology,” in Visualization for Digital Humanities Workshop, IEEE Conference on Visualization (IEEE VIS 2016), Baltimore, MD, USA, 2016. Archaeologists and cultural heritage experts explore complex multifaceted data that is often highly interconnected. We argue for new ways to interact with this data. Such data analysis provides a ‘grand challenge’ for computer science and heritage researchers: the data is big, multi-dimensional and multi-typed, it contains uncertain information, and the questions posed by researchers are often ill-defined (where it is difficult to guarantee an answer). We present two visions (Immersive Analytics and Deep Mapping) as solutions to allow both expert users and the general public to interact with and explore heritage data. We use prehistoric data as a case study, and discuss key technologies that need to develop further to help accomplish these two visions.
[Abstract] [PDF] [URL] [BibTeX]

M. R. Edwards, S. R. Pop, N. W. John, P. D. Ritsos, and N. Avis, “Real-Time Guidance and Anatomical Information by Image Projection onto Patients,” in Eurographics Workshop on Visual Computing for Biology and Medicine, 2016. The Image Projection onto Patients (IPoP) system is work in progress, intended to assist medical practitioners in performing procedures such as biopsies, or to provide a novel anatomical education tool, by projecting anatomy and other relevant information from the operating room directly onto a patient’s skin. This approach is not currently used widely in hospitals, but has the benefit of providing effective procedure guidance without the practitioner having to look away from the patient. Developmental work towards the alpha phase of IPoP is presented, including tracking methods for tools such as biopsy needles, patient tracking, image registration, and problems encountered with the multi-mirror effect.
[Abstract] [PDF] [doi:10.2312/vcbm.20161270] [BibTeX]

J. C. Roberts, C. Headleand, D. Perkins, and P. D. Ritsos, “Personal Visualisation for Learning,” in Personal Visualization: Exploring Data in Everyday Life Workshop, IEEE Conference on Visualization, Chicago, IL, USA, 2015. Learners have personal data, such as grades, feedback and statistics on how they fare or compare with the class. But data focusing on their personal learning is lacking, as it does not get updated regularly (typically only at the end of a taught session) and the displayed information is generally a single grade. Consequently, it is difficult for students to use this information to adapt their behavior and help them on their learning journey. Yet there is a rich set of data that could be captured to help students learn better. What is required is dynamically and regularly updated personal data that is displayed to students in a timely way. Such ‘personal data’ can be presented to the student through ‘personal visualizations’ that engender ‘personal learning’. In this paper we discuss our journey into developing learning systems and our resulting experience with learners. We present a vision to integrate new technologies and visualization solutions, in order to encourage and develop personal learning that employs the visualization of personal learning data.
[Abstract] [PDF] [URL] [BibTeX]

C. J. Headleand, L. ap Cenydd, L. Priday, P. D. Ritsos, J. C. Roberts, and W. Teahan, “Anthropomorphisation of Software Agents as a Persuasive Tool,” in Understanding Persuasion: HCI as a Medium for Persuasion Workshop, British HCI, 2015. In this position paper, we make an argument for the anthropomorphism of software agents as a persuasive tool. We begin by discussing some of the relevant applications, before providing a brief introduction to the CASA theory of social interaction with computers. We conclude by describing a selection of the evidence for anthropomorphism, and an argument for further research into this area.
[Abstract] [PDF] [URL] [BibTeX]

P. D. Ritsos, J. W. Mearman, A. Vande Moere, and J. C. Roberts, “Sewn with Ariadne’s Thread - Visualizations for Wearable & Ubiquitous Computing,” in Death of the Desktop Workshop, IEEE Conference on Visualization, Paris, France, 2014. Lance felt a buzz on his wrist, as Alicia, his wearable, informed him via the bone-conduction ear-piece - ‘You have received an email from Dr Jones about the workshop’. His wristwatch displayed an unread email glyph icon. Lance tapped it and listened to the voice of Dr Jones, talking about the latest experiment. At the same time he scanned through the email attachments, projected in front of his eyes, through his contact lenses. One of the files had a dataset of a carbon femtotube structure…
[Abstract] [PDF] [URL] [BibTeX]

J. C. Roberts, J. W. Mearman, and P. D. Ritsos, “The desktop is dead, long live the desktop! – Towards a multisensory desktop for visualization,” in Death of the Desktop Workshop, IEEE Conference on Visualization, Paris, France, 2014. “Le roi est mort, vive le roi!”, or “The King is dead, long live the King”, was a phrase originally used for the French throne of Charles VII in 1422, upon the death of his father Charles VI. To stave off civil unrest, the governing figures wanted the perpetuation of the monarchy. Likewise, while the desktop as-we-know-it is dead (the use of the WIMP interface is becoming obsolete in visualization), it is being superseded by a new type of desktop environment: a multisensory visualization space. This space is still a personal workspace; it’s just a new kind of desk environment. Our vision is that data visualization will become more multisensory, integrating and demanding all our senses (sight, touch, hearing, taste, smell, etc.) to both manipulate and perceive the underlying data and information.
[Abstract] [PDF] [URL] [BibTeX]

P. D. Ritsos, A. T. Wilson, H. C. Miles, L. F. Williams, B. Tiddeman, F. Labrosse, S. Griffiths, B. Edwards, K. Möller, R. Karl, and J. C. Roberts, “Community-driven Generation of 3D and Augmented Web Content for Archaeology,” in Eurographics Workshop on Graphics and Cultural Heritage - Short Papers and Posters, Darmstadt, Germany, 2014, pp. 25–28. Heritage sites (such as prehistoric burial cairns and standing stones) are prolific in Europe; although there is a wish to scan each of these sites, it would be time-consuming to achieve. Citizen science approaches enable us to involve the public in performing a metric survey by capturing images. In this paper, discussing work in progress, we present our automatic process that takes the user’s uploaded photographs, converts them into 3D models and displays them on two presentation platforms: a web gallery application, using X3D/X3DOM, and mobile augmented reality, using awe.js.
[Abstract] [PDF] [doi:10.2312/gch.20141321] [BibTeX]

P. D. Ritsos and J. C. Roberts, “Towards more Visual Analytics in Learning Analytics,” in EuroVis Workshop on Visual Analytics 2014, Swansea, UK, 2014, pp. 61–65. Learning Analytics is the collection, management and analysis of data about students’ learning. It is used to enable teachers to understand how their students are progressing and for learners to ascertain how well they are performing. Often the data is displayed through dashboards. However, there is a huge opportunity to include more comprehensive and interactive visualizations that provide visual depictions and analysis throughout the lifetime of the learner, monitoring their progress from novices to experts. We therefore encourage researchers to take a comprehensive approach and re-think how visual analytics can be applied to the learning environment, and to develop more interactive and exploratory interfaces for the learner and teacher.
[Abstract] [PDF] [URL] [BibTeX]

Posters

P. D. Ritsos, J. Jackson, and J. C. Roberts, “Web-based Immersive Analytics in Handheld Augmented Reality,” in Posters presented at the IEEE Conference on Visualization (IEEE VIS 2017), Phoenix, Arizona, USA, 2017. The recent popularity of virtual reality (VR), and the emergence of a number of affordable VR interfaces, have prompted researchers and developers to explore new, immersive ways to visualize data. This has resulted in a new research thrust, known as Immersive Analytics (IA). However, in IA little attention has been given to the paradigms of augmented/mixed reality (AR/MR), where computer-generated and physical objects co-exist. In this work, we explore the use of contemporary web-based technologies for the creation of immersive visualizations for handheld AR, combining D3.js with the open standards-based Argon AR framework and A-Frame/WebVR. We argue in favor of using emerging standards-based web technologies as they work well with contemporary visualization tools that are purpose-built for data binding and manipulation.
[Abstract] [PDF] [Video] [URL] [Poster] [BibTeX]

P. W. S. Butcher, J. C. Roberts, and P. D. Ritsos, “Immersive Analytics with WebVR and Google Cardboard,” in Posters presented at the IEEE Conference on Visualization (IEEE VIS 2016), Baltimore, MD, USA, 2016. We present our initial investigation of a low-cost, web-based virtual reality platform for immersive analytics, using a Google Cardboard, with a view to extending it to other similar platforms such as Samsung’s Gear VR. Our prototype uses standards-based emerging frameworks, such as WebVR, and explores some of the challenges faced by developers in building effective and informative immersive 3D visualizations, particularly those that attempt to resemble recent physical visualizations built in the community.
[Abstract] [PDF] [Video] [URL] [Poster] [BibTeX]

J. C. Roberts, J. Jackson, C. Headleand, and P. D. Ritsos, “Creating Explanatory Visualizations of Algorithms for Active Learning,” in Posters presented at the IEEE Conference on Visualization (IEEE VIS 2016), Baltimore, MD, USA, 2016. Visualizations have been used to explain algorithms to learners, in order to help them understand complex processes. These ‘explanatory visualizations’ can help learners understand computer algorithms and data-structures. But most are created by an educator and merely watched by the learner. In this paper, we explain how we get learners to plan and develop their own explanatory visualizations of algorithms. By actively developing their own visualizations learners gain a deeper insight of the algorithms that they are explaining. These depictions can also help other learners understand the algorithm.
[Abstract] [PDF] [Video] [URL] [Poster] [BibTeX]

C. J. Headleand, T. Day, S. R. Pop, P. D. Ritsos, and N. W. John, “Challenges and Technologies for Low Cost Wheelchair Simulation,” in Eurographics Workshop on Visual Computing for Biology and Medicine, 2015. The use of electric wheelchairs is inherently risky, as collisions due to lack of control can result in injury not only for the user but also for other pedestrians. Introducing new users to powered chairs via virtual reality (VR) provides one possible solution, as it eliminates the risks inherent to the real world during training. However, traditionally simulator technology has been too expensive to make VR a financially viable solution. Also, current simulators lack the natural interaction possible in the real world, limiting their operational value. We present the early stages of a VR electric wheelchair simulator built using low-cost, consumer-level gaming hardware. The simulator makes use of the Leap Motion to provide a level of interaction with the virtual world which has not previously been demonstrated in wheelchair training simulators. Furthermore, the Oculus Rift provides an immersive experience suitable for our training application.
[Abstract] [PDF] [doi:10.2312/vcbm.20151225] [Poster] [BibTeX]

C. C. Gray, J. C. Roberts, and P. D. Ritsos, “Where Can I Go From Here? Drawing Contextual Navigation Maps of the London Underground,” in Posters presented at the IEEE Conference on Visualization (IEEE VIS 2015), Chicago, IL, USA, 2015. Network administrators often wish to ascertain where network attackers are located; therefore it would be useful to display the network map from the context of either the attacker’s potential location or the attacked host. As part of a bigger project we are investigating how to best visualize contextual network data. We use a dataset of station adjacencies with journey times as edge weights, to explore which visualization design is most suitable, and also ascertain the best network shortest-path metric. This short paper presents our initial findings, and a visualization for Contextual Navigation using circular, centered-phylogram projections of the network. Our visualizations are interactive allowing users to explore different scenarios and observe relative distances in the data.
[Abstract] [PDF] [Video] [URL] [Poster] [BibTeX]

P. D. Ritsos, M. R. Edwards, I. S. Shergill, and N. W. John, “A Haptics-enabled Simulator for Transperineal Ultrasound-Guided Biopsy,” in Eurographics Workshop on Visual Computing for Biology and Medicine, 2015. We present the development of a transperineal prostate biopsy simulator with high-fidelity haptic feedback. We describe our current prototype, which uses physical props and a Geomagic Touch. In addition, we discuss a method for collecting in vitro axial needle forces for programming haptic feedback, along with implemented and forthcoming features such as a display of 2D ultrasonic images for targeting, biopsy needle bending, prostate bleeding and calcification. Our ultimate goal is to provide an affordable high-fidelity simulation by integrating contemporary off-the-shelf technology components.
[Abstract] [PDF] [doi:10.2312/vcbm.20151229] [Poster] [BibTeX]

J. C. Roberts, R. T. Walker, L. Roberts, R. S. Laramee, and P. D. Ritsos, “Exploratory Visualization through Copy, Cut and Paste,” in Posters presented at the IEEE Conference on Visualization (VIS), November 9-14, Paris, France, 2014. Our goal is to help oceanographers visualize and navigate their data over several runs. We have been using parallel coordinate plots (PCPs) to display every data value. Through our copy, cut and paste interactions we aim to enable users to drill down into specific data points and to explore the datasets in a more expressive way. The method allows users to manipulate the PCP on a zoomable user interface (ZUI) canvas, take copies of the current PCP and paste different subset views.
[Abstract] [PDF] [Video] [URL] [Poster] [BibTeX]

R. L. S. F. George, P. D. Ritsos, and J. C. Roberts, “Interactive Oceanographic Visualization using spatially-aggregated Parallel Coordinate Plots,” in Posters presented at EuroVis 2014, June 9-13, Swansea, Wales, UK, 2014. Visual Analytics interfaces allow ocean scientists to interactively investigate and compare different runs and parameterizations. However, oceanographic models are complex and temporal, and the datasets that are generated are huge. Parallel Coordinate Plots (PCPs) can help explore multivariate data such as ocean-science data, but common issues with traditional PCPs, such as clutter and performance, inhibit interactive spatial exploration. We describe techniques that aggregate the PCP based on the spatial nature of the data and render the polylines as ranges.
[Abstract] [PDF] [Video] [URL] [Poster] [BibTeX]

P. D. Ritsos, S. A. Panëels, P. J. Rodgers, and J. C. Roberts, “Towards a Formalized Process for Creating Haptic Data Visualizations,” in Posters presented at the IEEE Conference on Visualization (VIS), October 15-18, Atlanta, Georgia, USA, 2013. Haptic Data Visualization (HDV) is a novel application of haptics. It provides functionality by which users touch and feel data, making it a useful tool for users with vision impairments. However, creating such visualizations usually requires programming knowledge that support workers and tutors of blind users may not possess. To address this issue we propose a formalized process for creating HDVs using the HITPROTO toolkit, which requires no programming experience. We further illustrate this process using an example HDV.
[Abstract] [PDF] [Video] [URL] [Poster] [BibTeX]

Books

J. C. Roberts, C. J. Headleand, and P. D. Ritsos, Five Design-Sheets: Creative Design and Sketching for Computing and Visualisation. Springer, 2017.
[Abstract] [ISBN:978-3319556260] [BibTeX]

Book Chapters

P. D. Ritsos, “Mixed Reality - A paradigm for perceiving synthetic spaces,” in Real Virtuality, M. Reiche and U. Gehmann, Eds. Transcript-Verlag Bielefeld, 2014, pp. 283–310.
[Abstract] [ISBN:978-3-8376-2608-7] [BibTeX]

S. Braun, C. Slater, R. Gittins, P. D. Ritsos, and J. C. Roberts, “Interpreting in Virtual Reality: designing and developing a 3D virtual world to prepare interpreters and their clients for professional practice,” in New Prospects and Perspectives for Educating Language Mediators, D. Kiraly, S. Hansen-Schirra, and K. Maksymski, Eds. Tuebingen: Gunter Narr, 2013, pp. 93–120. This paper reports on the conceptual design and development of an avatar-based 3D virtual environment in which trainee interpreters and their potential clients (e.g. students and professionals from the fields of law, business, tourism, medicine) can explore and simulate professional interpreting practice. The focus is on business and community interpreting and hence the short consecutive and liaison interpreting modes. The environment is a product of the European collaborative project IVY (Interpreting in Virtual Reality). The paper begins with a state-of-the-art overview of the current uses of ICT in interpreter training (section 2), with a view to showing how the IVY environment has evolved out of existing knowledge of these uses, before exploring how virtual worlds are already being used for pedagogical purposes in fields related to interpreting (section 3). Section 4 then shows how existing knowledge about learning in virtual worlds has fed into the conceptual design of the IVY environment and introduces that environment, its working modes and customised digital content. This is followed by an analysis of the initial evaluation feedback on the first environment prototype (section 5), a discussion of the main pedagogical implications (section 6) and concluding remarks (section 7). The more technical aspects of the IVY environment are described in Ritsos et al. (2012).
[Abstract] [ISBN:978-3-8233-6819-9] [BibTeX]

Other Publications

P. D. Ritsos, N. W. John, and J. C. Roberts, “Standards in Augmented Reality: Towards Prototyping Haptic Medical AR,” in 8th International AR Standards Meeting, 2013. Augmented Reality technology has been used in medical visualization applications in various different ways. Haptics, on the other hand, are a popular method of interacting in Augmented and Virtual Reality environments. We present how reliance on standards benefits the fusion of these technologies, through a series of research themes carried out at Bangor University, UK (with international partners), as well as within the activities of the Research Institute of Visual Computing (RIVIC), UK.
[Abstract] [PDF] [BibTeX]

P. D. Ritsos, D. P. Ritsos, and A. S. Gougoulis, “Standards for Augmented Reality: a User Experience perspective,” in 2nd International AR Standards Meeting, 2011. An important aspect of designing and implementing Augmented Reality (AR) applications and services, often disregarded for the sake of simplicity and speed, is the evaluation of such systems, particularly by non-expert users, in real operating conditions. We strongly advocate that, in order to develop successful and highly immersive AR systems that can be adopted in day-to-day scenarios, user assessment and feedback are of paramount importance. Consequently, we also feel that an important fragment of future AR standardisation should focus on User eXperience (UX) aspects, such as the sense of presence, ergonomics, health and safety, overall usability and product identification. Our paper examines these aspects and proposes an adaptive theoretical evaluation framework that can be standardised across the span of AR applications.
[Abstract] [PDF] [BibTeX]

Tutorials

J. C. Roberts, C. Headleand, and P. D. Ritsos, “Half-day Tutorial on Sketching Visualization designs, and using the Five Design-Sheet (FdS) Methodology in Teaching,” in Tutorials at the IEEE Conference on Visualization (IEEE VIS 2017), Phoenix, AZ, USA, 2017. This tutorial leads attendees through sketching designs following the Five Design-Sheet methodology (FdS) and discusses how it can be used in teaching. The first part (before the break) will introduce the FdS, place it in context with other methods, discuss creative thinking and different problem types, explain the benefit of sketching designs, and provide a worked example of the FdS. The second part (after the break) focuses on using the FdS in teaching in Higher Education. We give examples of students’ work, and discuss issues and challenges of using sketching for designing and prototyping in teaching, followed by a question and answer session.
[Abstract] [PDF] [BibTeX]

J. C. Roberts, C. Headleand, and P. D. Ritsos, “Sketching Designs for Data-Visualization using the Five Design-Sheet Methodology,” in Tutorials at the IEEE Conference on Visualization (IEEE VIS 2016), Baltimore, MD, USA, 2016. The tutorial will be useful for anyone who has to create visualization interfaces and needs to think through different potential ways to display their data. At the end of the tutorial participants will understand techniques to help them be more structured in their ideation, and will be able to sketch interface designs using the Five Design-Sheet methodology (FdS). While some developers have already started to use the Five Design-Sheet methodology, this tutorial will start from the beginning and be suitable for any attendee. More information and resources can be found at http://fds.design.
[Abstract] [PDF] [BibTeX]

Theses

P. D. Ritsos, Architectures for Untethered Augmented Reality Using Wearable Computers, Ph.D. dissertation, Dept. of Electronic Systems Engineering, University of Essex, 2006.

 

Talks & Presentations

  • "Visualization Beyond the Desktop - the next big thing", IEEE Conference on Visualization (VIS 2015), Invited CG&A papers, Chicago, Illinois, USA, Oct. 2015
  • "Visualization Beyond the Desktop - the next big thing", Research Seminar, University of Chester, Chester, UK, Oct. 2015
  • "Sewn with Ariadne’s Thread – Visualizations for Wearable & Ubiquitous Computing", Death of the Desktop Workshop, IEEE Conference on Visualization (VIS 2014), Paris, Nov. 2014
  • "Towards more Visual Analytics in Learning Analytics", Fifth EuroVis Workshop on Visual Analytics (EuroVA), Eurographics Association, Swansea, UK, Jun. 2014
  • "Evaluating Interpreting in Virtual Reality", New Computer Technologies - Animation and Games Workshop, Bangor University, UK, May 2014
  • "Excitement of VisWeek 2013", Visualization and Medical Graphics Group Seminars, Bangor University, UK, Nov. 2013
  • "Haptic Data Visualization”,Visualization and Medical Graphics Group Seminars, Bangor University, UK, Oct. 2013
  • "WeARable Computing - From the Qing Dynasty to Project Glass: Prototypes, Myths, Confusion and Lots of Wires...", Visualization and Medical Graphics Group Seminars, Bangor University, UK, Mar. 2013
  • "Project IVY – Interpreting in Virtual Reality", IVY Dissemination Symposium: Exploiting Emerging Technologies to Prepare Interpreters and their Clients for Professional Practice, Kia Oval, London, UK, Nov. 2012
  • "Project IVY – Interpreting in Virtual Reality", Virtual Learning Technologies 2012, Bangor University, UK, Oct. 2012
  • "Interpreting in Virtual Reality", Virtual Worlds Education Forum, Staffordshire University, UK, Mar. 2012
  • "Project IVY – Interpreting in Virtual Reality – Virtual Environment Development", Creating Second Lives 2011: Blurring Boundaries, Bangor University, UK, Sep. 2011
  • "Project IVY – Interpreting in Virtual Reality – Virtual Environment Development", Visualization and Medical Graphics Group Seminars, Bangor University, UK, Sep. 2011

Vocational Seminar

  • P.D. Ritsos, M. Drakos, C. Vasilatos and N. Fountas, "ActionStreamer System Administrator Training", Training Seminars, Intracom-Telecom S.A., Athens, Greece, Jul. 2010
  • P.D. Ritsos, "Hellas OnLine (HOL) Mediation System Administrator Training", Training Seminars, Hellas OnLine S.A., Athens, Greece, Jul. 2010
