Heatmaps, Shadows, Bubbles, Rays: Comparing Mid-Air Pen Position Visualizations in Handheld AR

Paper at ACM CHI '20
by Philipp Wacker, Adrian Wagner, Simon Voelker, and Jan Borchers


In Handheld Augmented Reality, users look at AR scenes through the smartphone held in their hand. In this setting, having a mid-air pointing device like a pen in the other hand greatly expands the interaction possibilities. For example, it lets users create 3D sketches and models while on the go. However, perceptual issues in Handheld AR make it difficult to judge the distance of a virtual object, making it hard to align a pen to it. To address this, we designed and compared different visualizations of the pen's position in its virtual environment, measuring pointing precision, task time, activation patterns, and subjective ratings of helpfulness, confidence, and comprehensibility of each visualization. While all visualizations resulted in only minor differences in precision and task time, subjective ratings of perceived helpfulness and confidence favor a "heatmap" technique that colors the objects in the scene based on their distance to the pen.







Software and Data

These are explanations for the supplementary materials provided with the CHI paper "Heatmaps, Shadows, Bubbles, Rays: Comparing Mid-Air Pen Position Visualizations in Handheld AR".
Two sets of materials are provided.


01 Software:
The first set is the software used in our study (01 Software). We made small adjustments to its functionality so that you can use it without having to build a "full" ARPen to test the interactions. The implementation is based on the ARPen system available here: https://github.com/i10/ARPen.

You can build the application for iOS using Xcode. Print the PDF "aruco-marker" to use as a simple ARPen for testing the visualizations. To use the techniques, select a visualization technique from the menu on the left. The scene will display a number of cubes. Selecting the visualization technique again toggles between solid and wireframe models. Buttons on the top left allow you to toggle the current visualization on or off and to start drawing.


02 Evaluation:
The study recordings and evaluation scripts are in the folder "02 Evaluation". We used Python scripts to perform bootstrapping and calculate confidence intervals. The recordings are in the subfolder /raw/:

  • "sonar.csv" contains the distance, time, and help percentage measurements.
  • "ratings.csv" contains the participants' subjective ratings of confidence, helpfulness, and comprehensibility for each visualization.

To perform the evaluation, run the Python scripts "createDataTablesAndInitialPlots.py" and "createDataTablesAndInitialPlotsQual.py". These prepare the data tables and perform bootstrapping to calculate the confidence intervals. The results are stored in a new folder "/DataTables/" with either a "Data" or "CI" prefix in the filename. Based on these data tables, the scripts also prepare initial plots of the results, which are stored in the folder "/Figures/". Note: the scripts require Python 3, numpy, pandas, and the ARCH library for bootstrapping (https://pypi.org/project/arch/).
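To illustrate the kind of computation the evaluation scripts perform, here is a minimal sketch of a percentile bootstrap confidence interval for a mean, written with plain numpy. This is not the paper's code: the actual scripts use the ARCH library, and the function name, parameters, and sample values below are hypothetical.

```python
import numpy as np

def bootstrap_ci(samples, reps=10000, alpha=0.05, seed=0):
    """Return a (lower, upper) percentile bootstrap CI for the mean."""
    rng = np.random.default_rng(seed)
    samples = np.asarray(samples, dtype=float)
    # Resample with replacement and record the mean of each resample.
    means = np.array([
        rng.choice(samples, size=len(samples), replace=True).mean()
        for _ in range(reps)
    ])
    # The CI bounds are the alpha/2 and 1 - alpha/2 quantiles of the
    # bootstrap distribution of the mean.
    return (np.percentile(means, 100 * alpha / 2),
            np.percentile(means, 100 * (1 - alpha / 2)))

# Example with made-up distance measurements (not from sonar.csv):
lo, hi = bootstrap_ci([1.2, 0.8, 1.5, 1.1, 0.9, 1.3])
```

With the ARCH library, the equivalent computation would go through `arch.bootstrap.IIDBootstrap` and its `conf_int` method, which also offers bias-corrected variants.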




  • Philipp Wacker, Adrian Wagner, Simon Voelker, and Jan Borchers. Heatmaps, Shadows, Bubbles, Rays: Comparing Mid-Air Pen Position Visualizations in Handheld AR. In Proceedings of the 2020 CHI Conference on Human Factors in Computing Systems, CHI '20, pages 719:1–719:11, ACM, New York, NY, USA, April 2020.

Contact: Philipp Wacker


