Spatiotemporal Visuotactile Interaction

Ju-Hwan Lee and Charles Spence

Crossmodal Research Laboratory, University of Oxford, OX1 3UD, UK
{juhwan.lee,charles.spence}@psy.ox.ac.uk
http://www.psy.ox.ac.uk/xmodal
Abstract. Over the last few years, a growing number of IT devices have started to incorporate touch-screen technology in order to create more effective multimodal user interfaces. The use of such technology has opened up the possibility of presenting different kinds of tactile feedback (i.e., active vs. passive) to users. Here, we report two experiments designed to investigate the spatiotemporal constraints on the multisensory interaction between vision and touch as they relate to a user’s active vs. passive interaction with a touch screen device. Our results demonstrate that when touch is active, tactile perception is less influenced by irrelevant visual stimulation than when the screen is touched passively. Our results also show that vision has to lead touch by approximately 40ms in order for optimal simultaneity to be perceived, no matter whether touch is active or passive. These findings provide constraints for the future design of enhanced multimodal interfaces.

Keywords: Multimodal User Interface, Visuotactile Interaction.
1 Introduction
Many IT devices, such as mobile phones, PDAs, and MP3 players, have recently started to adopt touch screen technology in order to enhance the user interface (and experience). Several studies have investigated the effect of vibrotactile feedback in response to touch inputs delivered via small screen devices such as mobile phones and PDAs [1,2]. Meanwhile, other studies have investigated the possibility of using tactile feedback to present more sophisticated information to interface users [3,4]. Given the lack of prior research investigating the properties of tactile information processing when users are given such vibrotactile feedback, the two psychophysical experiments reported here were designed to compare visuotactile interactions involving a touch screen device under conditions of both active and passive touch. We examined the spatial (Experiment 1) and temporal (Experiment 2) limitations on the integration of visual and tactile information presented via a touch screen [5,6,7,8]. First, we examined whether spatially-irrelevant visual distractors would influence users’ judgments concerning the side (left vs. right) from which a vibrotactile stimulus had been presented, as a function of whether the touch was active or passive (Experiment 1). We also investigated the temporal limits on the integration of visual and tactile stimuli, using a temporal order judgment (TOJ) task (Experiment 2). The aim here was to determine the range of temporal asynchronies over which a typical user would perceive visual and tactile stimuli as being synchronous (i.e., as belonging to the same multisensory event). Our results provide useful information regarding the spatial and temporal constraints on visuotactile information processing when tactile feedback is presented under conditions of both active and passive touch. These findings provide guidelines to help constrain the design of more effective tactile feedback in touch screen devices in the years to come.

M. Ferre (Ed.): EuroHaptics 2008, LNCS 5024, pp. 826–831, 2008. © Springer-Verlag Berlin Heidelberg 2008
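As a concrete illustration of how a TOJ task of this kind is typically analyzed, the sketch below fits a cumulative Gaussian psychometric function to the proportion of "touch first" responses at each stimulus onset asynchrony (SOA), and reads off the point of subjective simultaneity (PSS) and just noticeable difference (JND). All SOA levels, response proportions, and starting values here are invented for illustration; they are not data from the experiment.

```python
import numpy as np
from scipy.optimize import curve_fit
from scipy.stats import norm

# Hypothetical TOJ data: SOAs in ms (negative = vision presented first)
# and the observed proportion of "touch first" responses at each SOA.
soas = np.array([-200, -120, -70, -40, -10, 20, 50, 90, 150, 200])
p_touch_first = np.array([0.05, 0.12, 0.25, 0.42, 0.55,
                          0.70, 0.82, 0.90, 0.96, 0.98])

def cum_gauss(soa, pss, sigma):
    """Cumulative Gaussian psychometric function."""
    return norm.cdf(soa, loc=pss, scale=sigma)

# Fit the two free parameters (initial guesses: PSS = 0 ms, sigma = 80 ms).
(pss, sigma), _ = curve_fit(cum_gauss, soas, p_touch_first, p0=(0.0, 80.0))

# PSS: the SOA at which both temporal orders are reported equally often.
# A negative PSS means vision must lead touch for perceived simultaneity.
# JND: the SOA shift taking responses from the 50% to the 75% point.
jnd = sigma * norm.ppf(0.75)
print(f"PSS = {pss:.1f} ms, JND = {jnd:.1f} ms")
```

With this sign convention, a negative fitted PSS corresponds to the pattern reported in the abstract, where vision had to lead touch for optimal simultaneity to be perceived.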
2 Experiment 1: Visuotactile Spatial Interactions
2.1 Methods
Nine naïve participants (mean age: 26 years) took part in this 20-minute study, which was conducted in a dark room. An Immersion 8.4” LCD touch screen monitor was placed on a table 40cm from the participant. Tactile stimuli were presented via 4 vibrators (AEC TACTAID VBW32) situated at the corners of the screen (Fig. 1A). The target stimuli consisted of 50ms, 50Hz square waves presented from the two vibrators on one or other side. White noise (70 dB) was presented continuously over headphones to mask any equipment sounds. The visual stimulus consisted of a 50ms white flicker (diameter: 12mm) presented 73mm to one or other side of fixation on the black background. Depending on the experimental condition, the tactile and visual stimuli were either presented from the same side, from different sides, or else the tactile target was presented in isolation. There were two within-participants factors: Touch Type (Active vs. Passive) and Spatial Congruency (Congruent, Incongruent, or Baseline/No Distractor). The congruent, incongruent, and touch-only baseline trials were each presented randomly 40 times in each of 2 blocks of 120 trials. Participants had to try and discriminate whether the left or right tactors on the screen had been activated, while trying to ignore any simultaneously-presented, but task-irrelevant, visual stimulus. They were also instructed to maintain their fixation on their right index fingertip. In the active touch block, the stimuli were only presented when the participants touched their index finger to the fixation point; they then removed their index finger from the fixation point. In the passive touch block, the participants continuously pressed their index finger against the fixation point. The participants responded by pressing 1 of 2 keys with their left hand in order to indicate whether the tactile target vibrations had been presented on the left or the right. The participants were informed that the visual stimuli were just as likely to be presented from the same side as from the opposite side as the tactile stimuli.

Fig. 1. The experimental set-ups used in Experiment 1 (A) and Experiment 2 (B)

2.2 Results and Discussion
The participants’ accuracy data (Fig. 2A) were analyzed using a 2 (Touch Type) x 3 (Spatial Congruency) ANOVA. The analysis revealed a significant main effect of spatial congruency [F(2,16)=26.29, p
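To make the analysis concrete, the sketch below hand-rolls a one-way repeated-measures ANOVA over the three Spatial Congruency conditions in NumPy. Only the design follows the paper (9 participants × 3 conditions, yielding the F(2,16) degrees of freedom reported above); the condition means, noise level, and random seed are all invented for illustration.

```python
import numpy as np

# Simulated accuracy data (proportion correct): 9 participants x 3
# congruency conditions (congruent, incongruent, no-distractor baseline).
# The means below are invented, not the paper's actual results.
rng = np.random.default_rng(0)
n, k = 9, 3
cond_means = np.array([0.95, 0.80, 0.93])
data = np.clip(cond_means + rng.normal(0.0, 0.04, size=(n, k)), 0.0, 1.0)

def rm_anova_1way(x):
    """One-way repeated-measures ANOVA on a (subjects x conditions) matrix.

    Partitions total variability into condition, subject, and error sums
    of squares, then forms F = MS_condition / MS_error.
    """
    n, k = x.shape
    grand = x.mean()
    ss_cond = n * ((x.mean(axis=0) - grand) ** 2).sum()
    ss_subj = k * ((x.mean(axis=1) - grand) ** 2).sum()
    ss_total = ((x - grand) ** 2).sum()
    ss_err = ss_total - ss_cond - ss_subj
    df_cond, df_err = k - 1, (n - 1) * (k - 1)
    f = (ss_cond / df_cond) / (ss_err / df_err)
    return f, df_cond, df_err

f, df1, df2 = rm_anova_1way(data)
print(f"F({df1},{df2}) = {f:.2f}")
```

Note that with 9 participants and 3 conditions the error term has (9 − 1) × (3 − 1) = 16 degrees of freedom, matching the F(2,16) statistic in the text; the full analysis in the paper additionally crosses this factor with Touch Type.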