1. Introduction
Volumetric interfaces and displays have been an area of substantial focus in human–computer interaction (HCI), bringing about new ways to enhance interactivity, learning, and understanding as an extension of existing modalities. While much research in the area exists, current haptic display projects tend to be difficult to implement as part of existing visual display technologies. Moreover, visual imagery is often an afterthought when designing haptic displays; when implemented, it often relies on techniques ill-suited to portability, such as overhead projection. This limits the places and situations in which the current generation of volumetric haptic displays can be used. Commercially available volumetric haptic technologies tend to separate the visual (stereoscopic) component from the volumetric haptic component (afferent flow), typically relying on a nearby display or a VR/AR headset. While this may work well in a controlled environment with fixed equipment, these products are not designed to be used with, or integrated into, the more common devices and displays we use daily.
2. Volumetric Haptic Displays
A good example of this is the inFORCE shape display, a proof-of-concept prototype implemented by Ken Nakagaki and his team at the MIT Media Lab [1]. It is a shape display built around a bed-of-pins design, a mechanical arrangement proven to work in many other implementations [2,3,4,5,6]. Nakagaki's prototype complements the pin bed with visuals projected onto it; the pins are raised in proportion to the height of the top surface of the simulated 3D object, which can then be explored by hand. While the bed-of-pins approach is quite effective at simulating volume, it is held back by cumbersome design requirements related to the physical size of the pins and of the accompanying actuator technology [7], as well as by its current inability to integrate with modern display technologies.
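To make the mapping concrete, the following is a minimal sketch (not the inFORCE implementation) of how a pin-bed display might translate the top surface of a virtual object into per-pin extension targets; the grid size, travel range, and the helper name heightmap_to_pin_targets are assumptions of this illustration.

```python
import numpy as np

def heightmap_to_pin_targets(heightmap, rows, cols, max_travel_mm):
    """Downsample a virtual object's top-surface heightmap (higher = closer
    to the user) onto a rows x cols pin grid and scale the result to the
    pins' mechanical travel range. Returns target extensions in mm."""
    h, w = heightmap.shape
    row_edges = np.linspace(0, h, rows + 1, dtype=int)
    col_edges = np.linspace(0, w, cols + 1, dtype=int)
    targets = np.zeros((rows, cols))
    for i in range(rows):
        for j in range(cols):
            # Average the heightmap over the patch of cells covered by this pin.
            patch = heightmap[row_edges[i]:row_edges[i + 1],
                              col_edges[j]:col_edges[j + 1]]
            targets[i, j] = patch.mean()
    # Normalise to [0, 1] and scale to the available pin travel.
    lo, hi = targets.min(), targets.max()
    targets = (targets - lo) / (hi - lo) if hi > lo else np.zeros_like(targets)
    return targets * max_travel_mm

# Example: a hemispherical bump rendered on a 10 x 10 pin grid with 50 mm travel.
y, x = np.mgrid[-1:1:64j, -1:1:64j]
bump = np.clip(1 - (x**2 + y**2), 0, None) ** 0.5
print(heightmap_to_pin_targets(bump, 10, 10, max_travel_mm=50.0).round(1))
```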
Ultrasound has also been well studied as a method of rendering volumetric shapes in mid-air. In one of the most notable examples, Benjamin Long and his team at the University of Bristol presented a method for creating haptic shapes in mid-air [8]. This method has since been developed into the UltraHaptics system, which other researchers in the field have used to explore use cases where volumetric haptics could be helpful [9,10,11,12,13,14]. The advantage is that feelable shapes can be rendered within a wide mid-air space. Once again, however, the display technologies that can be incorporated are limited: due to the arrangement and placement of the ultrasonic actuators, there is no reflective surface or visual plane onto which an image can be projected, nor can a display be placed over the actuators, as it would impede the ultrasonic output. Moreover, while a combined ultrasonic array can create explorable, feelable objects, the low intensity of the haptic points does not provide the adjustable physical response one would expect from the repulsion force of a virtual surface (>0.5 N), and thus does not convey true volumetric feedback [15,16]. Seki Inoue and his team implement a similar combined array of ultrasonic actuators around an orthogonal cavity to create feelable haptic objects [17], with similarly documented effects and conclusions.
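The underlying focusing principle in such ultrasonic systems can be illustrated with a short sketch: each transducer is driven with a phase offset proportional to its distance from the desired focal point, so that all wavefronts arrive in phase and interfere constructively there. The 40 kHz carrier, the array geometry, and the function name below are assumptions of this illustration, not details of the cited systems.

```python
import numpy as np

SPEED_OF_SOUND = 343.0   # m/s, air at ~20 °C
CARRIER_FREQ = 40_000.0  # Hz, a typical carrier for airborne ultrasonic haptics

def focal_phases(transducer_positions, focal_point):
    """Phase offsets (radians) so that waves from all transducers arrive
    at `focal_point` in phase, producing a constructive-interference focus.

    transducer_positions: (N, 3) array of emitter positions in metres.
    focal_point: (3,) array, the desired mid-air focus in metres."""
    wavelength = SPEED_OF_SOUND / CARRIER_FREQ            # ~8.6 mm at 40 kHz
    distances = np.linalg.norm(transducer_positions - focal_point, axis=1)
    # Advance each emitter by the phase its wave accumulates in transit,
    # so all wavefronts coincide at the focal point.
    return (2 * np.pi * distances / wavelength) % (2 * np.pi)

# Example: a 16 x 16 array with 10 mm pitch in the z = 0 plane,
# focused 20 cm above its centre.
pitch = 0.010
xs = (np.arange(16) - 7.5) * pitch
grid = np.array([(x, y, 0.0) for x in xs for y in xs])
phases = focal_phases(grid, np.array([0.0, 0.0, 0.20]))
print(phases.shape, phases[:4].round(3))
```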
The 3D Tractus and TouchMover [18,19,20] are earlier examples of devices for interacting with and exploring 3D data that use a wide single-axis range of movement to emulate the feeling of volume on a laptop and on a large touchscreen display, respectively. The cited works show that by actuating a single depth axis in combination with stereoscopic 3D visual cues, the volume of virtual shapes can be successfully conveyed to users without additional handheld peripherals. The single-axis limitation raises the question of how immersive similar devices could become if they could actuate in the same way with six degrees of freedom.
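As a rough illustration of the single-axis idea, the sketch below moves a display along its depth axis to the virtual-surface depth under the current touch point and lets it yield slightly under finger pressure according to a simulated stiffness. The depth-map representation, parameter values, and function name are assumptions of this sketch rather than details of the cited devices.

```python
import numpy as np

def surface_depth_command(depth_map, touch_xy, screen_size_px,
                          travel_mm, pressing_force_n, stiffness_n_per_mm):
    """Target position (mm along the single depth axis) for a display
    mounted on a linear actuator, in the spirit of single-axis devices.

    depth_map: 2D array of virtual-surface depths in [0, 1]
               (0 = nearest to the user, 1 = deepest into the scene).
    touch_xy: (x, y) touch coordinate in pixels.
    pressing_force_n: measured force the finger applies to the screen.
    stiffness_n_per_mm: simulated stiffness of the virtual surface."""
    h, w = depth_map.shape
    px = int(np.clip(touch_xy[0] / screen_size_px[0] * w, 0, w - 1))
    py = int(np.clip(touch_xy[1] / screen_size_px[1] * h, 0, h - 1))
    rest_position = depth_map[py, px] * travel_mm
    # A stiff virtual surface barely yields; a soft one recedes under force.
    compliance = pressing_force_n / stiffness_n_per_mm
    return float(np.clip(rest_position + compliance, 0.0, travel_mm))

# Example: a flat scene with a raised square in the middle, 100 mm of travel.
scene = np.ones((480, 640)) * 0.8
scene[180:300, 240:400] = 0.2
print(surface_depth_command(scene, (320, 240), (640, 480),
                            travel_mm=100.0, pressing_force_n=2.0,
                            stiffness_n_per_mm=1.0))
```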
For volumetric haptic surfaces on a flat display, the work of Dongbum Pyo's team on dielectric elastomers is of particular interest [21]. Dielectric elastomers enable an array of actuators within a transparent thin film that can be placed in the top layer of a display stack. Although some loss of brightness does occur, the approach has been shown to convey depth and texture successfully, and the thin, 500 µm implementation makes this type of actuator well suited to modern display technologies. The main limitation is the small displacement amplitude of around 12 µm: while this is sufficient to suggest changes in shape or texture, the volume characteristics it can convey are limited [15].
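For orientation, the roughly quadratic relationship between drive voltage and thickness strain in dielectric elastomer actuators (the small-strain Maxwell-stress model) can be sketched as follows; the material constants, film thickness, and function name are placeholder assumptions, not values reported by Pyo's team.

```python
import numpy as np

EPS_0 = 8.854e-12  # vacuum permittivity, F/m

def dea_drive_voltage(target_disp_um, film_thickness_um=500.0,
                      rel_permittivity=4.0, youngs_modulus_pa=1.0e6,
                      max_disp_um=12.0):
    """Rough drive voltage for one cell of a dielectric-elastomer actuator
    array, using the small-strain Maxwell-stress model:
        strain ≈ eps0 * eps_r * (V / t)^2 / Y
    Material constants here are placeholders for illustration only."""
    target = np.clip(target_disp_um, 0.0, max_disp_um)
    t = film_thickness_um * 1e-6
    strain = (target * 1e-6) / t
    e_field = np.sqrt(strain * youngs_modulus_pa / (EPS_0 * rel_permittivity))
    return e_field * t  # volts

# Example: voltages for a ramp of texture heights from 0 to 12 µm.
for d in (0.0, 4.0, 8.0, 12.0):
    print(f"{d:4.1f} µm -> {dea_drive_voltage(d):6.0f} V")
```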
Other techniques that aim to emulate volumetric feedback come in the form of wearable haptic displays. For example, the work of Adel et al. introduces a finger splint with an attached magnet, with a feelable, explorable volumetric space created above an array of powerful electromagnets [22]. The concept is similar to the array-of-pins methods, but, being realized through wearable technology, the force applied to the fingertip can be precisely adjusted via a controllable magnetic field. Since no physical element needs to be grasped by the fingertip, such an implementation opens up new possibilities: volumetric haptic feedback could be provided on regular displays if the magnetic array were placed behind them.
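A simplified sketch of the control problem is shown below: if each electromagnet's force contribution on the worn magnet is treated as linear in its coil current, the currents needed to approximate a desired 3D fingertip force can be found by least squares. The toy 1/r² force model, the gains, and the function name are assumptions of this illustration, not Adel et al.'s actual magnetic model.

```python
import numpy as np

def coil_currents_for_force(coil_positions, finger_pos, target_force,
                            gain=1e-3, max_current=2.0):
    """Solve for per-coil currents that approximate a target 3D force on a
    magnet worn at `finger_pos`, assuming each coil's contribution is linear
    in its current and directed along the coil-to-finger axis with a 1/r^2
    falloff (a toy stand-in for a proper magnetic model).

    coil_positions: (N, 3) array, metres. target_force: (3,), newtons.
    Returns an (N,) array of currents in amperes, clipped to hardware limits."""
    offsets = finger_pos - coil_positions                 # (N, 3)
    dists = np.linalg.norm(offsets, axis=1, keepdims=True)
    # Force per unit current from each coil: push along the coil-to-finger axis.
    basis = gain * offsets / dists**3                     # (N, 3)
    currents, *_ = np.linalg.lstsq(basis.T, target_force, rcond=None)
    return np.clip(currents, -max_current, max_current)

# Example: a 4 x 4 coil grid 5 cm below the fingertip, pushing up with 0.5 N.
xs = (np.arange(4) - 1.5) * 0.03
coils = np.array([(x, y, 0.0) for x in xs for y in xs])
I = coil_currents_for_force(coils, np.array([0.0, 0.0, 0.05]),
                            np.array([0.0, 0.0, 0.5]))
print(I.round(2))
```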
Shifting from a base device to wearable gloves, several emerging technologies from the private sector use similar finger-restraint systems [23,24,25,26,27,28], in which a glove provides a resistive force that prevents movement of the user's fingers. This can be useful when the goal is to simulate holding an object. However, these techniques are limited when it comes to simulating pushing against an object or activities such as typing, which does not require the user to hold the keys but rather to press them down. They are also, for the moment, demonstrably limited to VR and AR environments. There are attempts to incorporate haptic feedback into holographic displays [29], but such displays require additional equipment that may not integrate with today's devices. An interesting hand-worn device by Trinitova et al. [30] applies pressure to the palm, improving the sensation of weight expected when interacting with an object. These principles of repulsive and attractive forces found in the mentioned products and research are essential to consider when attempting to simulate mechanical feedback.
The use of air as an interaction medium continues to be of interest because it can be implemented without requiring the user to wear cumbersome equipment such as haptic gloves or fingertip caps [31,32]. For this reason, implementations that make better use of this medium continue to appear. Christou et al. implement a spatial haptics system dubbed aerohaptics, which mounts an air blower on a servo configuration so that it can direct airflow along an XYZ offset [33]. It shows the field's continued interest in creating spatial volumetric haptics that can be felt without the user needing to wear additional hardware.
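The aiming component of such a system reduces to pointing the nozzle at the tracked hand position; a minimal sketch is given below, assuming a simple pan/tilt servo arrangement and axis conventions that are not taken from the cited work.

```python
import math

def aim_nozzle(target_xyz, nozzle_xyz=(0.0, 0.0, 0.0)):
    """Pan and tilt angles (degrees) that point an air nozzle at a tracked
    hand position. Convention (an assumption of this sketch): x right,
    y forward, z up; pan rotates about z, tilt is elevation above the
    horizontal plane."""
    dx = target_xyz[0] - nozzle_xyz[0]
    dy = target_xyz[1] - nozzle_xyz[1]
    dz = target_xyz[2] - nozzle_xyz[2]
    pan = math.degrees(math.atan2(dx, dy))               # 0 deg = straight ahead
    tilt = math.degrees(math.atan2(dz, math.hypot(dx, dy)))
    return pan, tilt

# Example: hand tracked 30 cm ahead, 10 cm to the right, 20 cm above the nozzle.
print(aim_nozzle((0.10, 0.30, 0.20)))   # ~ (18.4, 32.3)
```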
In general, there are many novel ways to produce volumetric feedback, including using unmanned drones to simulate objects in virtual space or repurposing robotic arms for similar purposes [34]. It should be noted that kinesthetic devices available on the market (e.g., Novint's Falcon Haptic Device) can accurately generate force vectors transferred to the user's fingertips through a manipulandum or an attached display [35,36].
[35][36][42,43]. While many systems have been proposed and continue to exist, there is yet to be a low-cost implementation that can complement standard consumer devices such as phones, tablets, and other touch-sensitive surfaces equipped with onscreen keyboards or naked-eye stereoscopic display technologies.