Science fiction movies have long imagined a future where we interact with digital displays by grabbing, spinning, and sliding virtual elements with our hands. As natural and intuitive as that kind of interface seems, practical implementations have yet to materialize. If you are waiting for a user interface like the ones depicted in Minority Report or Iron Man, you are going to have to keep waiting.
Researchers at the University of Maryland and Aarhus University are working to bring us closer to that future, however. Focusing initially on multi-display data visualization systems, they have developed a novel interface that they call Datamancer. It enables users to point at the display they want to work with, then perform gestures to interact with its applications. In this way, Datamancer could give a big productivity boost to those working in data visualization, where complex graphics and charts need to be continually tweaked to gain insights.
The system’s hardware (📷: B. Patnaik et al.)
Unlike most previous gesture-based interfaces, which require large, fixed installations or virtual reality setups, Datamancer is a fully mobile, wearable device. It consists of two main sensors: a finger-mounted pinhole camera and a chest-mounted gesture sensor, both connected to a Raspberry Pi 5 computer worn at the waist. Together, these components allow users to control and manipulate visualizations spread across a room full of displays — such as laptops, tablets, and large TVs — without needing to touch them or use a mouse.
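To make that flow concrete, here is a minimal coordination sketch in Python. It assumes hypothetical objects and callables (ring_camera, chest_tracker, detect_display, classify_gesture, send) standing in for the finger camera, the chest-mounted tracker, and the network link to the displays; it illustrates one way the pieces could fit together, not the authors' actual software. The two helper steps it relies on are sketched further below.

```python
# Conceptual sketch of a Datamancer-style pipeline; all names are hypothetical.
from dataclasses import dataclass
from typing import Optional

@dataclass
class Command:
    display_id: str   # screen selected with the ring camera
    action: str       # "pan", "zoom", "place", or "remove"
    amount: float     # gesture magnitude (pan delta, zoom factor, ...)

def run_pipeline(ring_camera, chest_tracker, detect_display, classify_gesture, send):
    """Pair the display the user points at with the current bimanual gesture."""
    active_display: Optional[str] = None
    while True:
        # Display selection: point the ring camera at a screen and press the button.
        if ring_camera.button_pressed():
            active_display = detect_display(ring_camera.capture())

        # Gesture control: only act once a display is in focus.
        if active_display is not None:
            gesture = classify_gesture(chest_tracker.read_hands())
            if gesture is not None:
                action, amount = gesture
                send(Command(active_display, action, amount))
```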
To initiate an interaction, the user points at a screen with the finger-mounted ring camera and presses a button. This activates a fiducial marker detection system that identifies each display using dynamic ArUco markers. Once a display is in focus, the user can perform a set of bimanual gestures to zoom, pan, drag, and drop visual content. For example, making a fist with the right hand pans the visualization, while a fist with the left hand zooms in or out. A pinch gesture with the right hand places content, and the same gesture with the left removes it.
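The display-selection step can be illustrated with OpenCV's ArUco module. The sketch below shows the general technique rather than Datamancer's actual code: it assumes OpenCV 4.7 or newer and a made-up marker-to-display mapping, and it simply picks the marker that appears largest in the frame, i.e. the one closest to where the finger camera is aimed.

```python
# Rough sketch of display selection via ArUco markers (OpenCV >= 4.7).
import cv2

MARKER_TO_DISPLAY = {0: "wall-tv", 1: "laptop", 2: "tablet"}  # hypothetical mapping

def detect_display(frame_bgr):
    """Return the display tied to the most prominent ArUco marker, if any."""
    gray = cv2.cvtColor(frame_bgr, cv2.COLOR_BGR2GRAY)
    dictionary = cv2.aruco.getPredefinedDictionary(cv2.aruco.DICT_4X4_50)
    detector = cv2.aruco.ArucoDetector(dictionary, cv2.aruco.DetectorParameters())
    corners, ids, _rejected = detector.detectMarkers(gray)
    if ids is None:
        return None
    # Pick the marker with the largest image area, i.e. the one being pointed at.
    marker_id, _area = max(
        ((int(i), cv2.contourArea(c.reshape(-1, 2))) for i, c in zip(ids.flatten(), corners)),
        key=lambda m: m[1],
    )
    return MARKER_TO_DISPLAY.get(marker_id)
```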
The heart of the gesture recognition system is a Leap Motion Controller 2, a high-precision optical hand tracker mounted on the user’s chest. It tracks both hands continuously, with a range of up to 110 centimeters and a 160-degree field of view. The ring-mounted camera, an Adafruit Ultra Tiny GC0307, detects fiducial markers from up to 7 meters away.
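The mapping from tracked hands to the gestures described above might look something like the following. This is a simplified sketch using a made-up Hand structure and hypothetical activation thresholds, not the real Leap Motion SDK types or the researchers' classifier.

```python
# Simplified sketch of the bimanual gesture mapping; Hand and thresholds are hypothetical.
from dataclasses import dataclass

@dataclass
class Hand:
    is_left: bool
    grab_strength: float    # 0.0 open hand .. 1.0 closed fist
    pinch_strength: float   # 0.0 fingers apart .. 1.0 thumb-index pinch
    palm_velocity: tuple    # (vx, vy, vz) in mm/s

FIST, PINCH = 0.8, 0.8  # hypothetical activation thresholds

def classify_gesture(hands):
    """Right fist pans, left fist zooms, right pinch places, left pinch removes."""
    for hand in hands:
        if hand.pinch_strength > PINCH:
            return ("remove", 1.0) if hand.is_left else ("place", 1.0)
        if hand.grab_strength > FIST:
            if hand.is_left:
                return ("zoom", 1.0 + hand.palm_velocity[2] / 1000.0)  # push/pull to zoom
            return ("pan", hand.palm_velocity[0])                      # sweep to pan
    return None
```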
A fully-equipped user (📷: B. Patnaik et al.)
The system’s computing tasks are handled by a Raspberry Pi 5 with a 2.4 GHz quad-core Cortex-A76 processor and 8 GB of RAM. It is cooled by an active fan and powered by a 26,800 mAh Anker power bank, good for more than 10 hours of runtime. All of the hardware is mounted on a vest-style harness designed for comfort and quick setup; putting it on takes about a minute.
In testing, Datamancer has been used in real-world application scenarios, including a transportation management center where analysts collaborate in front of multiple screens. Expert reviews and a user study confirmed its potential to support more natural and flexible data analysis workflows.
While the system is still in development and not yet ready for mass adoption, Datamancer is a promising step toward the kind of intuitive, spatial interaction that has so far only existed in fiction.