Touch3D: Touchscreen Interaction on Multiscopic 3D with Electrovibration Haptics
We present Touch3D, an interactive mobile platform that provides realistic viewing and touching experiences through glasses-free 3D visualization with electrovibration. Touch3D is designed to exploit both visual and tactile illusions to maximize the multimodal experience in touchscreen interaction. We seamlessly integrate two technologies, an automultiscopic 3D display and an electrovibration display, and weave both hardware and software into one fluid interface. Our museum application built on Touch3D demonstrates how 3D perception can be improved in both the visual and tactile modalities for enhanced touchscreen interaction. [Paper]
- Tight integration of an automultiscopic display and an electrovibration display
- Lamination of a very thin PETG lenticular film (an insulator) onto the electrostatic display, maintaining a strong electrostatic force while providing an optimal viewing distance and motion parallax
- Our lenticular film consists of a lenslet array that divides the underlying image into four views (0.4 mm thick with an optimal viewing distance of 400 mm; lenslet pitch of 280 µm with a maximum arc width of 60 µm; slanted by 14.6°)
- Unity3D plugin libraries for automultiscopic visualization and haptic feedback
- Real-time, interactive multiview rendering with tactile sensation
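To make the film parameters concrete, the sketch below assigns LCD subpixels to views using the standard slanted-lenticular mapping (the horizontal phase of a subpixel under its lenslet, shifted per row by the slant, selects its view). The slant angle, view count, and lenslet pitch are taken from the specification above; the subpixel width is a hypothetical value, and the exact calibration used by Touch3D may differ.

```python
import math

# Film parameters from the specification: 4 views, 280 um lenslet pitch, 14.6 deg slant.
N_VIEWS = 4
LENS_PITCH_UM = 280.0
SLANT_DEG = 14.6
# Hypothetical LCD subpixel width (not stated above): ~70 um RGB pixel / 3 subpixels.
SUBPIXEL_UM = 23.3

def view_index(i, j):
    """Map subpixel column i, row j to one of the N_VIEWS views.

    Standard slanted-lenticular assignment: the subpixel's horizontal
    position, offset by the slant per row, determines its phase under
    the lenslet and hence the direction it is refracted toward.
    """
    x_um = (i + j * math.tan(math.radians(SLANT_DEG))) * SUBPIXEL_UM
    phase = (x_um % LENS_PITCH_UM) / LENS_PITCH_UM  # position under lenslet, in [0, 1)
    return int(phase * N_VIEWS)

# Example: view assignment for the first few subpixels of two adjacent rows.
row0 = [view_index(i, 0) for i in range(8)]
row1 = [view_index(i, 1) for i in range(8)]
```

Because of the slant, the view pattern in `row1` is shifted relative to `row0`, which is what spreads each view's samples across both screen axes.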
Automultiscopic visualization consists of rendering images from multiple viewpoints and merging them into a single image to be displayed on the LCD panel. To minimize image distortion among the views, we set up the virtual camera rig in an off-axis configuration. The lenticular lenslet array refracts light rays from the underlying LCD pixels into directions determined by their spatial locations. By merging the view-dependent images according to the directional distribution of the pixels, the final image is generated consistently with our lenticular setup. To maintain an interactive frame rate for touch interaction, we implemented this merging process as a single multi-texture shader pass on the GPU.
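The merging step can be sketched on the CPU with NumPy as follows. This is an illustrative stand-in for the actual GPU multi-texture shader: given the four rendered views and a precomputed per-pixel view-index map (derived from the lenticular geometry), each output pixel copies the pixel of the view it is refracted toward. A production version would operate per subpixel, with a separate map for the R, G, and B channels.

```python
import numpy as np

def merge_views(views, view_map):
    """Merge per-view images into one interleaved image.

    views:    array of shape (N, H, W, 3), one rendered image per viewpoint
    view_map: int array of shape (H, W), precomputed view index per pixel
              from the lenticular geometry
    """
    n, h, w, c = views.shape
    rows, cols = np.indices((h, w))
    # Fancy indexing selects, for each (row, col), the pixel from the
    # view that the lenslet array refracts toward the viewer there.
    return views[view_map, rows, cols]  # shape (H, W, 3)

# Toy example: four 2x3 solid-color "views" and a striped view map.
views = np.stack([np.full((2, 3, 3), v, dtype=np.uint8) for v in range(4)])
view_map = np.tile(np.arange(3) % 4, (2, 1))  # columns cycle through views 0, 1, 2
merged = merge_views(views, view_map)
```

In the shader version, the same lookup runs per fragment: the view map is either baked into a texture or computed from the fragment coordinate, and the four view textures are sampled in one pass.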
Our tactile rendering is driven by visual-saliency information derived from spatial data collected from a pool of users who interacted with the 3D object. The density and spatial distribution of their touches show that users gravitate toward high-frequency features of both the geometry and the color texture, as well as the broken parts of the relic. We extracted curvature and geometry features from the 3D object using a Sobel filter, and further divided the textures into several salient segments, each mapped to a different haptic feedback signal.
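A minimal sketch of this pipeline, assuming a grayscale texture as input: Sobel gradient magnitude marks high-frequency features, and thresholding the magnitude into bands yields segments, each assigned a haptic signal index. The band-to-signal mapping here is hypothetical; the actual signals and segmentation in Touch3D are tuned from the user data.

```python
import numpy as np

def sobel_magnitude(img):
    """Gradient magnitude of a 2D float array via 3x3 Sobel kernels."""
    kx = np.array([[-1, 0, 1], [-2, 0, 2], [-1, 0, 1]], dtype=float)
    ky = kx.T
    h, w = img.shape
    gx = np.zeros_like(img)
    gy = np.zeros_like(img)
    # Accumulate the 3x3 correlation over the interior (borders stay zero).
    for di in range(3):
        for dj in range(3):
            gx[1:h-1, 1:w-1] += kx[di, dj] * img[di:h-2+di, dj:w-2+dj]
            gy[1:h-1, 1:w-1] += ky[di, dj] * img[di:h-2+di, dj:w-2+dj]
    return np.hypot(gx, gy)

def haptic_segments(texture, n_signals=3):
    """Divide a texture into salient bands, one haptic signal index per band.

    Stronger edges (higher gradient magnitude) map to higher signal
    indices; flat regions map to signal 0.
    """
    mag = sobel_magnitude(texture)
    if mag.max() > 0:
        mag = mag / mag.max()
    return np.minimum((mag * n_signals).astype(int), n_signals - 1)

# Example: a texture with a vertical edge gets the strongest signal at the edge.
texture = np.zeros((5, 5))
texture[:, 3:] = 1.0
seg = haptic_segments(texture)
```

At runtime, the segment index under the finger would select which electrovibration waveform to drive, so sharp geometric or texture features feel distinct from flat regions.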