Date & Time: September 11, 02:00 pm - 03:45 pm
Location: HS1
Chair: Steven Feiner, Columbia University
Papers:
Grasp-Shell vs Gesture-Speech: A comparison of direct and indirect natural interaction techniques in Augmented Reality
Authors: Thammathip Piumsomboon, David Altimira, Hyungon Kim, Adrian Clark, Gun Lee, Mark Billinghurst
Abstract: For natural interaction in Augmented Reality (AR) to become widely
adopted, the techniques used need to be shown to support precise interaction,
and the gestures used shown to be easy to understand and perform. Recent
research has explored free-hand gesture interaction with AR interfaces, but
there have been few formal evaluations conducted with such systems. In this
paper we introduce and evaluate two natural interaction techniques: the
free-hand gesture based Grasp-Shell, which provides direct physical
manipulation of virtual content; and the multi-modal Gesture-Speech, which
combines speech and gesture for indirect natural interaction. These
techniques support object selection, six degree-of-freedom (6 DOF) movement,
uniform scaling, and physics-based interactions such as pushing and flinging.
We conducted a study evaluating and comparing Grasp-Shell and Gesture-Speech
for fundamental manipulation tasks. The results show that Grasp-Shell
outperforms Gesture-Speech in both efficiency and user preference for
translation and rotation tasks, while Gesture-Speech is better for uniform
scaling. The two techniques could be complementary interaction methods in a
physics-enabled AR environment, as combining them potentially provides both
control and interactivity in one interface. We conclude by discussing
implications and future directions of this research.
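As a rough illustration of what grasp-based direct manipulation of this kind
involves, here is a minimal Python sketch of a hand "grabbing" a virtual
object within a shell around it and applying the hand's frame-to-frame pose
delta to it. All names, the shell radius, and the update logic are
illustrative assumptions, not the authors' Grasp-Shell implementation.

```python
# Hypothetical sketch of direct grasp-based 6 DOF manipulation, in the spirit
# of a Grasp-Shell-style technique. Names and logic are assumptions for
# illustration, not the paper's implementation.
import numpy as np

class VirtualObject:
    def __init__(self, position, rotation=None, scale=1.0):
        self.position = np.asarray(position, dtype=float)  # world-space centre
        self.rotation = np.eye(3) if rotation is None else rotation
        self.scale = scale                                  # uniform scale factor

class GraspController:
    """Attach an object to a tracked hand while a grasp gesture is held."""
    def __init__(self, obj, shell_radius=0.15):
        self.obj = obj
        self.shell_radius = shell_radius  # "shell" around the object that accepts grasps
        self.grabbed = False
        self.prev_pos = None
        self.prev_rot = None

    def update(self, hand_pos, hand_rot, is_grasping):
        hand_pos = np.asarray(hand_pos, dtype=float)
        near = np.linalg.norm(hand_pos - self.obj.position) <= self.shell_radius
        if is_grasping and (self.grabbed or near):
            if self.grabbed:
                # Apply the hand's frame-to-frame pose delta to the object,
                # preserving the grab offset between hand and object.
                delta_rot = hand_rot @ self.prev_rot.T
                self.obj.rotation = delta_rot @ self.obj.rotation
                self.obj.position = delta_rot @ (self.obj.position - self.prev_pos) + hand_pos
            self.grabbed = True
            self.prev_pos, self.prev_rot = hand_pos, hand_rot
        else:
            self.grabbed = False  # release: a physics engine could take over here

obj = VirtualObject(position=[0.0, 0.0, 0.5])
ctrl = GraspController(obj)
ctrl.update(hand_pos=[0.0, 0.0, 0.45], hand_rot=np.eye(3), is_grasping=True)
ctrl.update(hand_pos=[0.1, 0.0, 0.45], hand_rot=np.eye(3), is_grasping=True)
print(obj.position)  # object followed the hand's translation: [0.1 0.  0.5]
```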
Improving Co-presence with Augmented Visual Communication Cues for Sharing Experience through Video Conference
Authors: Seungwon Kim, Gun Lee, Nobuchika Sakata, Mark Billinghurst
Abstract: Video conferencing is becoming more widely used in areas beyond
face-to-face conversation, such as sharing real-world experiences with remote
friends or family. In this paper we explore how adding augmented visual
communication cues can improve the experience of sharing remote task space
and collaborating together. We developed a prototype system that allows users
to share a live video view of their task space, captured with a Head-Mounted
Display (HMD) or Handheld Display (HHD), and to communicate not only through
voice but also through an augmented pointer or annotations drawn on the shared view. To
explore the effect of having such an interface for remote collaboration, we
conducted a user study comparing three video-conferencing conditions with
different combinations of communication cues: (1) voice only, (2) voice +
pointer, and (3) voice + annotation. The participants used our remote
collaboration system to share a parallel experience of puzzle solving in the
user study, and we found that adding augmented visual cues significantly
improved the sense of being together. The pointer was the additional cue
users preferred most for the parallel experience, and we identified distinct
states of user behavior during remote collaboration.
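To make the three conditions concrete, the sketch below shows one hypothetical
way the two augmented cues could be represented and serialized for
transmission alongside the shared video view. The message format, field
names, and use of JSON are assumptions for illustration, not the authors'
protocol.

```python
# Hypothetical sketch of the visual communication cues in the study's three
# conditions. The message format and names are illustrative assumptions, not
# the authors' protocol.
import json
import time
from dataclasses import dataclass, asdict
from typing import List, Tuple

@dataclass
class PointerCue:
    """A remote pointer shown at a normalized (0..1) position on the shared view."""
    x: float
    y: float
    timestamp: float

@dataclass
class AnnotationCue:
    """A freehand annotation drawn as a polyline on the shared view."""
    stroke: List[Tuple[float, float]]  # normalized view coordinates
    timestamp: float

def encode(cue) -> str:
    """Serialize a cue for transmission alongside the voice/video channel."""
    return json.dumps({"type": type(cue).__name__, "data": asdict(cue)})

# The voice-only condition sends no cues; the other two add one cue type each.
pointer = PointerCue(x=0.42, y=0.61, timestamp=time.time())
annotation = AnnotationCue(stroke=[(0.1, 0.2), (0.15, 0.25), (0.2, 0.3)],
                           timestamp=time.time())
print(encode(pointer))
print(encode(annotation))
```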
A Study of Depth Perception in Hand-Held Augmented Reality using Autostereoscopic Displays
Authors: Matthias Berning, Daniel Kleinert, Till Riedel, Michael Beigl
Abstract: Displaying three-dimensional content on a flat display is bound to
reduce the impression of depth, particularly for mobile video see-through
augmented reality. Several applications in this domain can benefit from
accurate depth perception, especially when there are contradictory depth
cues, such as occlusion in an X-ray visualization. The use of stereoscopy for
this effect is already prevalent in head-mounted displays, but there is
little research on its applicability to hand-held augmented reality. We have
implemented such a
prototype using an off-the-shelf smartphone equipped with a stereo camera and
an autostereoscopic display. We designed and conducted an extensive user
study to explore the effects of stereoscopic hand-held augmented reality on
depth perception. The results show that in this scenario depth judgment is
mostly influenced by monoscopic depth cues, but our system can improve
positioning accuracy in challenging scenes.
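As background on what such a prototype has to do, the sketch below
illustrates the basic idea behind producing a stereo pair: render the scene
twice from two laterally offset eye positions, which the autostereoscopic
display then presents separately to each eye. The function names and the
0.062 m interaxial value are illustrative assumptions, not the authors'
renderer.

```python
# Hypothetical sketch of stereo-pair generation for an autostereoscopic
# display: render the scene once per eye from laterally offset positions.
# Parameter values are illustrative assumptions.
import numpy as np

def eye_positions(camera_pos, right_axis, interaxial=0.062):
    """Return left/right eye positions offset by half the interaxial distance.
    0.062 m approximates the average human interpupillary distance."""
    camera_pos = np.asarray(camera_pos, dtype=float)
    right_axis = np.asarray(right_axis, dtype=float)
    right_axis = right_axis / np.linalg.norm(right_axis)
    half = 0.5 * interaxial
    return camera_pos - half * right_axis, camera_pos + half * right_axis

def render_stereo_pair(render, camera_pos, right_axis):
    """Render the scene once per eye; the display interleaves the two images."""
    left_eye, right_eye = eye_positions(camera_pos, right_axis)
    return render(left_eye), render(right_eye)

# Usage with a stand-in renderer that just reports the eye position:
images = render_stereo_pair(lambda eye: f"frame from {np.round(eye, 3)}",
                            camera_pos=[0.0, 0.0, 0.0],
                            right_axis=[1.0, 0.0, 0.0])
print(images[0])  # frame rendered from x = -0.031
print(images[1])  # frame rendered from x = +0.031
```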
Measurements of Live Actor Motion in Mixed Reality Interaction
Authors: Gregory Hough, Ian Williams, Cham Athwal
Abstract: This paper presents a method for measuring the magnitude and impact of errors
in mixed reality interactions. We define the errors as measurements of hand
placement accuracy and consistency within bimanual movement of an interactive
virtual object. First, a study is presented which illustrates the amount of
variability between the hands and the mean distance of the hands from the
surfaces of a common virtual object. The results allow a discussion of the
most significant factors that should be considered when developing realistic
mixed reality interaction systems. The degree of error was found to be
independent of interaction speed, whilst the size of the virtual object and
the position of the hands are significant. Second, a further study
illustrates how perceptible these errors are to a third person viewer of the
interaction (e.g. an audience member). We found that interaction errors
arising from overestimating an object's surface affected visual credibility
for the viewer considerably more than underestimating it. This work is
presented in the context of a real-time
Interactive Virtual Television Studio, which offers convincing real-time
interaction for live TV production. We believe the results and methodology
presented here could also be applied to designing, implementing, and
assessing interaction quality in many other Mixed Reality applications.
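As an illustration of the kind of measurement the paper describes, the
sketch below computes a signed hand-to-surface distance for a box-shaped
virtual object: positive values mean the hand floats outside the surface
(overestimating its extent), negative values mean it penetrates the object
(underestimating it). The box model and all names are assumptions for
illustration, not the authors' measurement code.

```python
# Hypothetical sketch of hand placement error measurement: the signed
# distance of a hand sample from the surface of an axis-aligned box standing
# in for the virtual object. Positive = hand outside the surface, negative =
# hand penetrating it. Not the authors' measurement code.
import numpy as np

def signed_surface_distance(point, box_center, box_half_extents):
    """Signed distance from a point to the surface of an axis-aligned box."""
    q = np.abs(np.asarray(point, dtype=float) - box_center) - box_half_extents
    outside = np.linalg.norm(np.maximum(q, 0.0))  # > 0 when outside the box
    inside = min(q.max(), 0.0)                    # < 0 when inside the box
    return outside + inside

# Per-frame hand samples around a 20 cm cube held bimanually (metres):
center = np.array([0.0, 0.0, 0.5])
half = np.array([0.1, 0.1, 0.1])
left_hand = [[-0.12, 0.0, 0.5], [-0.11, 0.01, 0.5], [-0.09, 0.0, 0.5]]
errors = [signed_surface_distance(p, center, half) for p in left_hand]
print(f"mean error {np.mean(errors):+.3f} m, std {np.std(errors):.3f} m")
```

Averaging such signed errors over a session gives the mean hand-to-surface
distance discussed above, while the standard deviation captures consistency.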