ISMAR 2014 - Sep 10-12 - Munich, Germany


2014 ISMAR Posters

All posters and demonstrations will be presented in a "One Minute Madness" session on Wednesday, 11:30 AM - 1:00 PM in the main conference hall (HS1). Afterwards they will be on display throughout the entire conference, as individually arranged by the authors. Special, guaranteed presentation times are arranged in the following three batches:

Posters Presentation - Batch 1
Date & Time :  Wednesday, September 10 03:15 pm - 03:45 pm
Location : Magistrale
Posters : 
The Posture Angle Threshold between Airplane and Window Frame Metaphors
Authors:
Marcus Tönnis, Sandro Weber, Gudrun Klinker
Description:
Integrating different mental models and their corresponding interaction techniques for navigation and object manipulation tasks is a central concern when providing user interfaces for a wide variety of potential users. With a user-controlled, egocentric hand-held device, we aim at integrating the three major interaction techniques identified so far, based on the metaphors of a steering wheel, a toy airplane and a window frame. Here, the toy airplane and the window frame in particular contradict each other, leaving open the question of the postural angle of the hand-held device at which the users' mental model changes.
View management for webized mobile AR contents
Authors:
Jungbin Kim, Joohyun Lee, Byounghyun Yoo, Sangchul Ahn, Heedong Ko
Description:
Information presentation techniques are as important in augmented reality applications as they are in the traditional desktop user interface (WIMP) and the web user interface. This paper introduces view management for a web-based augmented reality mobile platform. We use a webized mobile AR browser called Insight that provides separation of the application logic, including the tracking engine, and the AR content, so that the view management logic and contents are easy to reuse. In addition, the view management is able to accommodate the in-situ context of an AR application.
MOBIL: A Moments based Local Binary Descriptor
Authors:
Abdelkader BELLARBI, Samir Otmane, Nadia ZENATI-HENDA, Samir BENBELKACEM
Description:
In this paper, we propose an efficient and fast binary descriptor, called MOBIL (MOments based BInary differences for Local description), which compares not just the intensity but also the geometric properties of sub-regions by employing moments. This approach offers high distinctiveness against affine transformations and appearance changes. The experimental evaluation shows that MOBIL achieves quite good performance in terms of low computational complexity and high recognition rate compared to state-of-the-art real-time local descriptors.
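As a rough illustration of the general idea (not the authors' exact formulation), a moment-based binary descriptor can be built by computing a moment statistic per sub-region of a patch and encoding pairwise comparisons as bits:

```python
import numpy as np

def cell_moment(cell, p, q):
    """Central moment m_pq of a grayscale cell."""
    h, w = cell.shape
    ys, xs = np.mgrid[0:h, 0:w]
    total = cell.sum()
    if total == 0:
        return 0.0
    cx = (xs * cell).sum() / total  # intensity centroid, x
    cy = (ys * cell).sum() / total  # intensity centroid, y
    return (((xs - cx) ** p) * ((ys - cy) ** q) * cell).sum()

def mobil_like_descriptor(patch, grid=4):
    """Binary descriptor: compare a second-order moment statistic
    between every pair of grid cells (illustrative sketch only)."""
    h, w = patch.shape
    ch, cw = h // grid, w // grid
    moments = []
    for i in range(grid):
        for j in range(grid):
            cell = patch[i*ch:(i+1)*ch, j*cw:(j+1)*cw].astype(float)
            moments.append(cell_moment(cell, 2, 0) + cell_moment(cell, 0, 2))
    bits = []
    for a in range(len(moments)):
        for b in range(a + 1, len(moments)):
            bits.append(1 if moments[a] > moments[b] else 0)
    return np.array(bits, dtype=np.uint8)
```

For a 4x4 grid this yields 120 comparison bits per patch; the actual MOBIL descriptor's choice of moments and comparison pattern is described in the paper.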
A Preliminary Study on Altering Surface Softness Perception using Augmented Color and Deformation
Authors:
Parinya Punpongsanon, Daisuke Iwai, Kosuke Sato
Description:
Choosing the appropriate soft/hard material is important for designing a product such as a sofa or bed, but the choice is typically limited by the number of physical materials the designer owns. Pseudo-haptic feedback is an alternative that enables designers to roughly simulate material properties (e.g., softness, hardness) by generating only a visual illusion. However, the current technique is limited to video see-through augmented reality, in which the user interacts in a real space while looking at a virtual space. This paper explores the possibility of realizing pseudo-haptic feedback for touching objects in spatial augmented reality. We investigate and compare the effects of visually superimposing projected graphics onto the surface of a touched object and onto the fingernail/finger for changing the surface tactile perception. The potential of our method is discussed through a preliminary user study.
Combining Multi-touch and Device Movement in Mobile Augmented Reality Manipulations
Authors:
Asier Marzo, Benoît Bossavit, Martin Hachet
Description:
Three input modalities for manipulation techniques in Mobile Augmented Reality have been compared. The first one employs only multi-touch input. The second modality uses the movements of the device. Finally, the third one is a hybrid approach based on a combination of the two previous modalities. A user evaluation (N=12) on a 6 DOF docking task suggests that combining multi-touch input and device movement offers the best results in terms of task completion time and efficiency. Nonetheless, using solely the device is more intuitive and performs worse only in large rotations. Given that mobile devices are increasingly supporting movement tracking, the presented results encourage the addition of device movement as an input modality.
Touch Gestures for Improved 3D Object Manipulation in Mobile Augmented Reality
Authors:
Philipp Tiefenbacher, Andreas Pflaum, Gerhard Rigoll
Description:
This work presents three techniques for 3D manipulation on mobile touch devices, taking the specifics of mobile AR scenes into account. We compare the common direct manipulation technique with two indirect techniques, which utilize only the thumbs to perform the transformations. The evaluation of the manipulation variants is conducted in a mixed reality (MR) environment which takes advantage of the controlled conditions of a full virtual reality (VR) system. A study with 18 participants shows that the two-thumb method tops the other techniques. It performs better with respect to the total manipulation time and total number of gestures.
Towards Mobile Augmented Reality for the Elderly
Authors:
Daniel Kurz, Anton Fedosov, Stefan Diewald, Jörg Güttler, Barbara Geilhof, Matthias Heuberger
Description:
Mobility and independence are key aspects of self-determined living in today's world, and demographic change presents the challenge of retaining these aspects for the aging population. Augmented Reality (AR) user interfaces might support the elderly, for example, when navigating as pedestrians or by explaining how devices and mobility aids work and how they are maintained. This poster reports on the results of practical field tests in which elderly subjects tested handheld AR applications. The main finding is that common handheld AR user interfaces are not suited to the elderly because they require the user to hold up the device so that the back-facing camera captures the object or environment to which the presented digital information relates. Tablet computers are too heavy and do not provide sufficient grip to hold them up over a long period of time. One possible alternative is a head-mounted display (HMD). We present the promising results of a user test evaluating whether elderly people can deal with AR interfaces on a lightweight HMD. We conclude with an outlook on improved handheld AR user interfaces that do not require continuously holding up the device, which we hope are better suited to the elderly.
A Mobile Augmented Reality System to Assist Auto Mechanics
Authors:
Darko Stanimirovic, Nina Damasky, Sabine Webel, Dirk Koriath, Andrea Spillner, Daniel Kurz
Description:
Ground-breaking technologies and innovative design of upcoming vehicles introduce complex maintenance procedures for auto mechanics. In order to present these procedures in an intuitive manner, the Mobile Augmented Reality Technical Assistance (MARTA) project was initiated. The goal was to create an Augmented Reality-aided application running on a tablet computer, which shows maintenance instructions superimposed on a live video feed of the car. Robust image-based tracking of specular surfaces using both edge and texture features as well as the software framework are the most important aspects of the project, which are presented here. The resulting application is deployed and used productively to support maintenance of the Volkswagen XL1 vehicle across the world.
Indirect Augmented Reality Considering Real-World Illumination Change
Authors:
Fumio Okura, Takayuki Akaguma, Tomokazu Sato, Naokazu Yokoya
Description:
Indirect augmented reality (IAR) utilizes pre-captured omnidirectional images and offline superimposition of virtual objects to achieve high-quality geometric and photometric registration. At the same time, IAR causes inconsistency between the real world and the pre-captured image. This paper describes the first study focusing on the temporal inconsistency issue in IAR. We propose a novel IAR system which reflects real-world illumination change by selecting an appropriate image from a set of images pre-captured under various illumination conditions. Results of a public experiment show that the proposed system can improve the realism of IAR.
Non-Parametric Camera-Based Calibration of Optical See-Through Glasses for Augmented Reality Applications
Authors:
Martin Klemm, Harald Hoppe, Fabian Seebacher
Description:
This work describes a camera-based method for the calibration of optical See-Through Glasses (STGs). A new calibration technique is introduced that calibrates every single display pixel of the STGs in order to overcome the disadvantages of a parametric model. Compared to a parametric model, a non-parametric one has the advantage that it can also map arbitrary distortions. The new generation of STGs using waveguide-based displays will exhibit higher arbitrary distortions due to the characteristics of their optics. First tests show better accuracies than in previous works. By using cameras placed behind the displays of the STGs, no error-prone user interaction is necessary. It is shown that a high-accuracy tracking device is not necessary for a good calibration. A camera mounted rigidly on the STGs is used to find the relations between the system components. Furthermore, this work elaborates on the necessity of a second, subsequent calibration step which adapts the STGs to a specific user. First tests support the theory that this subsequent step is necessary.
Visual-Inertial 6-DOF Localization for a Wearable Immersive VR/AR System
Authors:
Ludovico Carozza, Frédéric Bosché, Mohamed Abdel-Wahab
Description:
We present a real-time visual-inertial localization approach directly integrable into a wearable immersive system for simulation and training. In this context, while CAVE systems typically require a complex and expensive set-up, our approach relies on visual and inertial information provided by a consumer monocular camera and an Inertial Measurement Unit (IMU) embedded in a wearable stereoscopic HMD. 6-DOF localization is achieved through image registration with respect to a 3D map of descriptors of the training room and robust tracking of visual features. We propose a novel, efficient and robust pipeline based on state-of-the-art image-based localization and sensor fusion approaches, which makes use of robust orientation information from the IMU to cope with fast camera motion and limit motion jitter. The proposed system runs at 30 fps on a standard PC and requires very limited set-up for its intended application.
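The role of the IMU in stabilizing orientation against camera jitter can be sketched, under heavy simplification, by a scalar complementary filter. This is a generic sensor-fusion technique, not the authors' specific pipeline:

```python
def complementary_filter(angle_prev, gyro_rate, accel_angle, dt, k=0.98):
    """One update step of a scalar complementary filter: integrate the
    gyroscope for high-frequency motion, and pull toward the
    accelerometer's gravity-derived angle to cancel low-frequency drift.
    k weights the gyro path; (1 - k) weights the accelerometer path."""
    return k * (angle_prev + gyro_rate * dt) + (1.0 - k) * accel_angle
```

With a stationary device (zero gyro rate), the estimate converges geometrically to the accelerometer angle, which is why the filter suppresses drift.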
Exploring Social Augmentation Concepts for Public Speaking using Peripheral Feedback and Real-Time Behavior Analysis
Authors:
Ionut Damian, Chiew Seng Sean Tan, Tobias Baur, Johannes Schöning, Kris Luyten, Elisabeth André
Description:
Non-verbal and unconscious behavior plays an important role in efficient human-to-human communication but is often undervalued when training people to become better communicators. This is particularly true for public speakers, who must not only behave according to a social etiquette but do so while generating enthusiasm and interest for dozens if not hundreds of other people. In this paper we propose the concept of social augmentation using wearable computing, with the goal of giving users the ability to continuously monitor their performance as communicators. To this end we explore interaction modalities and feedback mechanisms which lend themselves to this task.
Posters Presentation - Batch 2
Date & Time :  Thursday, September 11 01:00 pm - 01:30 pm
Location : Magistrale
Posters : 
Smartwatch-Aided Handheld Augmented Reality
Authors:
Darko Stanimirovic, Daniel Kurz
Description:
We propose a novel method for interaction of humans with real objects in their surroundings, combining Visual Search and Augmented Reality (AR). The method is based on a smartwatch tethered to a smartphone and is designed to provide a more user-friendly experience than approaches based only on a handheld device such as a smartphone or tablet computer. The smartwatch has a built-in camera, which enables scanning objects without taking the smartphone out of the pocket. An image captured by the watch is sent wirelessly to the phone, which performs Visual Search and subsequently informs the smartwatch whether digital information related to the object is available. We thereby distinguish between three cases. If no information is available or the object recognition failed, the user is notified accordingly. If digital information is available that can be presented using the smartwatch display and/or audio output, it is presented there. In the third case, the recognized object has digital information related to it which would be beneficial to see in an Augmented Reality view spatially registered with the object in real time; the smartwatch then informs the user that this option exists and encourages using the smartphone to experience the Augmented Reality view. The user thus only needs to take the phone out of the pocket when Augmented Reality content is available and of interest.
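The three-case decision flow described above can be summarized as follows (the function and flag names are hypothetical, chosen only for this sketch):

```python
def smartwatch_response(recognized, has_info, has_ar_content):
    """Decision flow after the phone has run Visual Search on the
    watch's image: notify on failure, show simple info on the watch,
    or prompt the user to take out the phone for the AR view."""
    if not recognized or not has_info:
        return "notify: no information available"
    if has_ar_content:
        return "prompt: take out the phone for the AR view"
    return "present info on the watch display"
```

The key design point is that the phone leaves the pocket only in the third case, when spatially registered AR content actually exists.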
Visualization of Solar Radiation Data in Augmented Reality
Authors:
Maria Beatriz Carmo, Ana Paula Cláudio, António Ferreira, Ana Paula Afonso, Paula Redweik, Cristina Catita, Miguel Centeno Brito, José Nunes Pedrosa
Description:
We present an AR application for visualizing solar radiation data on the facades of buildings, generated from LiDAR data and climatic observations. Data can be visualized using colored surfaces and glyphs. A user study revealed that the proposed AR visualizations were easy to use, which can help leverage their potential benefits: detecting errors in the simulated data, supporting the installation of photovoltaic equipment, and raising public awareness of the use of facades for power production.
Motion Detection based Ghosted Views for Occlusion Handling in Augmented Reality
Authors:
Arthur Padilha, Veronica Teichrieb
Description:
This work presents an improvement to the scene analysis pipeline of a visualization technique called Ghosting. Computer vision and image processing techniques are used to extract natural features from each video frame. These features guide the assignment of transparency to pixels in order to produce the ghosting effect while blending the virtual object into the real scene. Video sequences were obtained from traditional RGB cameras. The main contribution of this work is the inclusion of a motion detection technique in the scene feature analysis step. This procedure leads to a better perception of the augmented scene because the proper ghosting effect is achieved when a moving natural salient object that catches the user's attention passes in front of an augmented one.
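A minimal sketch of the core idea, motion-gated transparency, is shown below; the thresholds and alpha values are illustrative placeholders, not the paper's:

```python
import numpy as np

def ghosting_alpha(prev_gray, curr_gray, base_alpha=0.6,
                   motion_thresh=15, motion_alpha=0.1):
    """Per-pixel alpha for the virtual layer: where the frame
    difference exceeds a threshold (a moving real object), make the
    virtual object more transparent so the real motion stays visible."""
    diff = np.abs(curr_gray.astype(int) - prev_gray.astype(int))
    alpha = np.full(curr_gray.shape, base_alpha, dtype=float)
    alpha[diff > motion_thresh] = motion_alpha
    return alpha

def blend(real, virtual, alpha):
    """Composite the virtual layer over the real frame per pixel."""
    return alpha * virtual + (1.0 - alpha) * real
```

In the actual pipeline the motion mask is combined with the other saliency features before transparency assignment.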
Ongoing development of a user-centered, AR testbed in industry
Authors:
Luca Bertuccelli, Taimoor Khawaja, Paul O'Neill, Bruce Walker
Description:
User experience assessment of new augmented reality (AR) technology is an increasingly important area of research, including in industrial applications. In our domain, many field service technicians have traditionally relied on stand-alone tools with restricted connectivity and information visualization capabilities for on-site diagnostics and maintenance. With new hand-held and wearable AR technology and recent developments in cloud-based computing, new services can be delivered that are more interactive and more connected, with the ultimate goal of improving the efficiency and productivity of the technician. For acceptance it is fundamental that this technology enables a high-quality user experience, and a user-centered design framework is necessary for testing and evaluating these new technologies. This paper presents a testbed that we are building at United Technologies Research Center that leverages a user-centered design framework for developing and deploying AR applications both for hand-held devices and for wearable AR glasses. We present two test cases from our testbed: (a) a hand-held AR application for active diagnostics in building HVAC systems; (b) an interactive AR application for aircraft engine maintenance based on wearable see-through AR glasses.
HMD Video See-Through AR with Unfixed Camera Vergence
Authors:
Vincenzo Ferrari, Fabrizio Cutolo, Emanuele Maria Calabrò, Mauro Ferrari
Description:
Stereoscopic video see-through AR systems permit accurate marker-based video registration. To guarantee accurate registration, the cameras are normally rigidly fixed, while the user may require changing their vergence. We propose a solution working with lightweight hardware that guarantees registration accuracy using pre-determined calibration data, without the need for a new calibration of the cameras' relative pose after each vergence adjustment.
Local Optimization for Natural Feature Tracking Targets
Authors:
Elias Tappeiner, Dieter Schmalstieg, Tobias Langlotz
Description:
In this work, we present an approach for optimizing targets for natural-feature-based pose tracking, as used in Augmented Reality applications. Our contribution is an approach for locally optimizing a given tracking target instead of applying global optimizations as proposed in the literature. The local optimization, together with a visualized trackability rating, leads to a tool for creating high-quality tracking targets.
Contact-view: A Magic-lens Paradigm Designed to Solve the Dual-view Problem
Authors:
Klen Čopič Pucihar, Paul Coulton
Description:
Typically, handheld AR systems utilize a single back-facing camera and the screen in order to implement device transparency. This creates the dual-view problem, a consequence of virtual transparency that does not match true transparency (what the user would see looking through a transparent glass pane). The dual-view problem affects the usability of handheld AR systems and is commonly addressed through user-perspective rendering solutions. Whilst such an approach produces promising results, the complexity of implementing user-perspective rendering, and the fact that it does not remove all sources of the dual-view problem, mean it only ever addresses part of the problem. This paper seeks to create a more complete solution to the dual-view problem that is applicable to readily available handheld devices. We pursue this goal by designing, implementing and evaluating a novel interaction paradigm we call 'contact-view'. By utilizing the back- and front-facing cameras and the environment base-plane texture (predefined or incrementally created on the fly), we enable placing the device directly on top of the base-plane. As long as the position of the phone in relation to the base-plane is known, the appropriate segment of the occluded base-plane can be rendered on the device screen, the result of which is transparency in which the dual-view problem is eliminated.
Turbidity-based Aerial Perspective Rendering for Mixed Reality
Authors:
Carlos Morales, Takeshi Oishi, Katsushi Ikeuchi
Description:
In outdoor Mixed Reality (MR), objects distant from the observer suffer from an effect called aerial perspective that fades the color of the objects and blends it toward the color of the environmental light. Aerial perspective can be modeled using a physics-based approach; however, handling changing and unpredictable environmental illumination is demanding. We present a turbidity-based method for rendering a virtual object with an aerial perspective effect in an MR application. The proposed method first estimates the turbidity by matching the luminance distributions of sky models against a captured omnidirectional sky image. The obtained turbidity is then used to render the virtual object with aerial perspective.
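Independently of how the turbidity is estimated, the fading itself is typically modeled with the standard exponential extinction blend sketched below; here the extinction coefficient beta stands in for whatever value the estimated turbidity would yield:

```python
import numpy as np

def aerial_perspective(obj_rgb, sky_rgb, distance, beta):
    """Fade an object's color toward the environmental (sky) color
    with distance: L = L_obj * T + L_sky * (1 - T), where the
    transmittance is T = exp(-beta * d)."""
    t = np.exp(-np.asarray(beta) * distance)
    return np.asarray(obj_rgb) * t + np.asarray(sky_rgb) * (1.0 - t)
```

At zero distance the object keeps its own color; as distance grows the transmittance decays and the color converges to the sky color, which is the visual effect the paper reproduces for virtual objects.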
Augmentation of Live Excavation Work for Subsurface Utilities Engineering
Authors:
Stéphane Côté, Ian Létourneau, Jade Marcoux-Ouellet
Description:
The virtual excavation is a well-known augmentation technique that was proposed for city road environments. It can be used for planning excavation work by augmenting the road surface with a virtual excavation revealing subsurface utility pipes. In this paper, we propose an extension of the virtual excavation technique for live augmentation of excavation work sites. Our miniaturized setup, consisting of a sandbox and a Kinect device, was used to simulate dynamic terrain topography capture. We hypothesized that the virtual excavation could be used live on the ground being excavated, which could facilitate the excavator operator's work. Our results show that the technique can indeed be adapted to dynamic terrain topography, but it turns out to occlude terrain in a potentially hazardous way. Potential solutions include the use of virtual paint markings instead of a virtual excavation.
QR Code Alteration for Augmented Reality Interactions
Authors:
Han Park, Taegyu Kim, Jun Park
Description:
The QR code, thanks to its recognition robustness and data capacity, has often been used for Augmented Reality applications as well as other commercial applications. However, it is difficult to enable tangible interactions through which users may change 3D models or animations, because QR codes are generated automatically according to fixed rules and are not easily modifiable. Our goal was to enable QR-code-based Augmented Reality interactions. By analysis and through experiments, we discovered that some parts of a QR code can be altered to change the text string that the code represents. In this paper, we introduce a prototype for QR-code-based Augmented Reality interactions, which allows for Rubik's-cube-style rolling interactions.
Augmented Reality Binoculars on the Move
Authors:
Taragay Oskiper, Mikhail Sizintsev, Vlad Branzoi, Supun Samarasekera, Rakesh Kumar
Description:
In this paper, we expand our previous work on augmented reality (AR) binoculars to support a wider range of user motion - up to a thousand square meters, compared to only a few square meters before. We present our latest improvements and additions to our pose estimation pipeline and demonstrate stable registration of objects on real-world scenery while the binoculars undergo a significant amount of parallax-inducing translation.
Using Augmented Reality to Support Information Exchange of Teams in the Security Domain
Authors:
Dragos Datcu, Marina Cidota, Heide Lukosch, Stephan Lukosch
Description:
For operational units in the security domain that work together in teams it is important to quickly and adequately exchange context-related information. This extended abstract investigates the potential of augmented reality (AR) techniques to facilitate information exchange and situational awareness of teams from the security domain. First, different scenarios from the security domain that have been elicited using an end-user oriented design approach are described. Second, a usability study is briefly presented based on an experiment with experts from operational security units. The results of the study show that the scenarios are well-defined and the AR environment can successfully support information exchange in teams operating in the security domain.
Towards User Perspective Augmented Reality for Public Displays
Authors:
Jens Grubert, Hartmut Seichter, Dieter Schmalstieg
Description:
We work towards ad-hoc augmentation of public displays on handheld devices, supporting user perspective rendering of display content. Our prototype system only requires access to a screencast of the public display, which can be easily provided through common streaming platforms and is otherwise self-contained. Hence, it easily scales to multiple users.
Utilizing Contact-view as an Augmented Reality Authoring Method for Printed Document Annotation
Authors:
Klen Čopič Pucihar, Paul Coulton
Description:
In Augmented Reality (AR), the real world is enhanced by superimposed digital information, commonly visualized through augmented annotations. The visualized data comes from many different sources; one increasingly important source is user-generated content. Unfortunately, AR tools that support user-generated content are not common, hence the majority of augmented data within AR applications is not generated using AR technology. In this paper we discuss the main reasons for this and evaluate how the contact-view paradigm could enhance the annotation authoring process within the class of tabletop-size AR workspaces. The evaluation is based on a prototype that allows musicians to annotate a music score manuscript using freehand drawing on top of the device screen. Experimentation showed the potential of the contact-view paradigm as an annotation authoring method that performs well in single-user and collaborative multi-user situations.
Posters Presentation - Batch 3
Date & Time :  Friday, September 12 10:30 am - 11:00 am
Location : Magistrale
Posters : 
Classifications of Augmented Reality Uses in Marketing
Author:
Ana Javornik
Description:
Existing uses of augmented reality (AR) in marketing have not yet been discussed according to the different contexts of consumption and marketing functions they fulfill. This research investigates which uses of AR have emerged so far in marketing and proposes a classification schema for them, based on the intensity of the augmentation and on marketing functions. Such differentiation is needed in order to better understand the dynamics of augmentation of physical surroundings for commercial purposes and consequently to distinguish between different consumer experiences.
A single co-lived augmented world or many solipsistic fantasies?
Author:
Nicola Liberati
Description:
The aim of this paper is to determine the difference, in augmented reality (AR), between the creation of one single world and the creation of multiple worlds, in terms of the "reality" of the augmented world created by the device. From a phenomenological perspective, this work analyses what kind of relation between subjects and worlds is created by these two different kinds of worlds. It will become clear that the production of such an augmented world cannot be achieved by mere technical elements alone, but has to deal with bigger problems.
CI-Spy: Using Mobile-AR for Scaffolding Historical Inquiry Learning
Authors:
Gurjot Singh, Doug Bowman, Todd Ogle, David Hicks, David Cline, Eric Ragan, Aaron Johnson, Rosemary Zlokas
Description:
Learning how to think critically, analyze sources, and develop an evidence-based account is central to learning history. Often, learners struggle to understand inquiry, to apply it in specific scenarios, and to remain engaged while learning it. This paper discusses preliminary design of a mobile AR system that explicitly teaches inquiry strategies for historical sources and engages students to practice in an augmented real-world context while scaffolding their progression. The overarching question guiding our project is how and to what extent AR technologies can be used to support learning of critical inquiry strategies and processes.
AR for the comprehension of linear perspective in the Renaissance masterpiece The Holy Trinity (Masaccio, 1426)
Author:
Giovanni Landi
Description:
This work shows how augmented reality can help the comprehension of the geometric construction of the perspective in the Holy Trinity fresco (Masaccio, 1426) in the church of Santa Maria Novella in Florence, opening new scenarios in the demonstration of the subtle underlying geometric principles. There has been a long debate about the actual geometrical fidelity of the "fake architecture" painted in perspective in Masaccio's fresco. Recent studies demonstrated that Masaccio's perspective is one of the best and most commendable applications of Brunelleschi's lesson on linear perspective and its geometric construction. We here use AR to follow, step by step, the geometric construction of Masaccio's perspective. This approach takes advantage of the possibilities that AR offers to explore in real time the relations between 3D forms and their 2D representations.
An Augmented and Virtual Reality System for Training Autistic Children
Authors:
Lakshmi Prabha Nattamai Sekar, Alexandre Santos, Dimitar Mladenov, Olga Beltramello
Description:
Autism, or Autism Spectrum Disorder (ASD), is a pervasive developmental disorder causing impairment in thinking, feeling, hearing, speaking and social interaction. For this reason, children with autism need special training in order to increase their ability to learn new skills and knowledge. These children have a propensity to be attracted by technology devices, especially virtual animations. The interest of this research is to explore and study the use of an Augmented and Virtual Reality (AR/VR) system for training children with ASD based on Applied Behavior Analysis (ABA) techniques. The system assists in teaching children about new pictures or objects along with the associated keyword or matching sentence in an immersive way with fast interaction. A preliminary prototype demonstrates satisfactory performance of the proposed AR/VR system under laboratory conditions.
Social Panoramas Using Wearable Computers
Authors:
Carolin Reichherzer, Alaeddin Nassani, Mark Billinghurst
Description:
In this paper we describe the concept of Social Panoramas, which combines panorama images, Mixed Reality, and wearable computers to support remote collaboration. We have developed a prototype that allows panorama images to be explored in real time between a Google Glass user and a remote tablet user, using a variety of cues for supporting awareness and enabling pointing and drawing. We conducted a study to explore whether these cues can increase Social Presence. The results suggest that increased interaction does not increase Social Presence, but tools with higher perceived usability show an improved sense of presence.
Device vs. User Perspective Rendering in Google Glass AR Applications
Authors:
João Paulo Lima, Rafael Roberto, João Marcelo Teixeira, Veronica Teichrieb
View Independence in Remote Collaboration Using AR
Authors:
Matthew Tait, Mark Billinghurst
Description:
This poster presents an Augmented Reality (AR) system for remote collaboration that allows a remote user to navigate a local user’s scene, independently from their viewpoint. This is achieved by using a 3D scan and reconstruction of the user’s environment. The remote user can place virtual objects in the scene that the local user views through a head mounted display (HMD), helping them place real objects. A user study tested how the amount of remote view independence affected collaboration. We found that increased view independence led to faster task completion, more user confidence, and a decrease in verbal communication, but object placement accuracy remained unchanged.
A Reconstructive See-Through Display
Author:
Ky Waegel
Description:
The two most common display technologies used in augmented reality head-mounted displays are optical see-through and video see-through. In this paper I demonstrate a third alternative: reconstructive see-through. By using a commodity depth camera to construct a dense 3D model of the world and rendering this to the user, distracting latency and position discrepancies between real and virtual objects can be reduced.
Representing Degradation of Real Objects Using Augmented Reality
Authors:
Takuya Ogawa, Manabe Yoshitsugu, Noriko Yata
Description:
Much research in augmented reality (AR) technology attempts to match the textures of virtual objects with the real world. However, the textures of real objects must also be rendered consistently with virtual information. This paper proposes a method for representing the degradation of real objects in virtual time. Real-world depth information, used to build three-dimensional models of real objects, is captured by an RGB-D camera. The degradation of real objects is then represented by superimposing a degradation texture onto the real object.
Interactive Deformation of Real Objects
Authors:
Jungsik Park, Byung-Kuk Seo, Jong-Il Park
Description:
This paper presents a method for interactive deformation of a real object. Our method uses a predefined 3D model of the target object for tracking and deformation. The camera pose relative to the target object is estimated using 3D model-based tracking. The object region in the camera image is obtained by projecting the 3D model onto the image plane with the estimated pose; a texture map is extracted from this region and mapped to the 3D model. The texture-mapped model is then rendered on a mesh deformed by the user via a Laplacian operation. Experimental results demonstrate that our method lets users interact with real 3D objects in real scenes, not just with augmented virtual content.
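The projection step that recovers the object region can be sketched with a standard pinhole camera model (a generic formulation, not the authors' code): each model point is transformed into the camera frame by the estimated pose (R, t), mapped through the intrinsics K, and divided by depth.

```python
import numpy as np

def project_points(K, R, t, points_3d):
    """Project Nx3 model points into the image with pose (R, t) and
    intrinsic matrix K; returns Nx2 pixel coordinates (u, v)."""
    cam = (R @ points_3d.T).T + t    # model/world frame -> camera frame
    uvw = (K @ cam.T).T              # camera frame -> homogeneous image
    return uvw[:, :2] / uvw[:, 2:3]  # perspective divide by depth
```

The projected silhouette of the model delimits the object region from which the texture map is sampled.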
Contextually Panned and Zoomed Augmented Reality Interactions Using COTS Heads Up Displays
Authors:
Alex Hill, Harrison Leach
Description:
Consumer off-the-shelf heads-up displays with onboard cameras and processing power have recently become available. Evaluations of a naive implementation of video see-through augmented reality suggest that their small displays and off-axis cameras present usability problems. We panned and zoomed a composited video feed on the Google Glass device to center the augmented reality context within the display and to give the appearance of a fixed distance to the content. We pilot-tested the panned and zoomed display against a naive implementation and found that users preferred the view-stabilized version.
Interacting with your own hands in a fully immersive MR system
Authors:
Franco Tecchia, Giovanni Avveduto, Marcello Carrozzino, Raffaello Brondi, Massimo Bergamasco, Leila Alem
Description:
This poster introduces a fully immersive Mixed Reality system we have recently developed, in which the user is free to walk inside a virtual scenario while wearing an HMD. The novelty of the system lies in the fact that users can see and use their real hands - by means of a Kinect-like camera mounted on the HMD - in order to naturally interact with the virtual objects. Our working hypothesis is that the introduction of photorealistic capture of users' hands in a coherently rendered virtual scenario induces in them a strong feeling of presence and embodiment, without the need for a synthetic 3D-modelled avatar as a representation of the self. We also argue that the users' ability to grasp and manipulate virtual objects with their own hands not only provides an intuitive interaction experience, but also improves self-perception as well as perception of the environment.
AIBLE: An Inquiry-Based Augmented Reality Environment for teaching astronomical phenomena
Authors:
Stéphanie Fleck, Gilles Simon, Christian Bastien
Description:
We present an inquiry-based augmented reality (AR) learning environment (AIBLE) designed for teaching basic astronomical phenomena in elementary classrooms (children 8-11 years old). The novelty of this environment lies in its combination of Inquiry-Based Science Education and didactic principles with AR features. The environment was user-tested by 69 pupils in order to assess its impact on learning. The main results indicate that AIBLE provides new opportunities for identifying learners' problem-solving strategies.
