Abstract: In Augmented Reality (AR), visible misregistration can be caused by many
inherent error sources, such as errors in tracking, calibration, and
modeling. In this paper we present a novel pixel-wise closed-loop
registration framework that can automatically detect and correct registration
errors using a reference model comprised of the real scene model and the
desired virtual augmentations. Registration errors are corrected in both
global world space via camera pose refinement, and local screen space via
pixel-wise corrections, resulting in spatially accurate and visually coherent
registration. Specifically, we present a registration-enforcing model-based
tracking approach that weights important image regions while refining the
camera pose estimates (from any conventional tracking method) to achieve
better registration, even in the case of modeling errors. To deal with
remaining errors, which can be rigid or non-rigid, we compute the optical
flow between the camera image and the real model image rendered with the
refined pose, enabling direct screen-space pixel-wise corrections to
misregistration. The estimated flow field can be applied to improve
registration in two distinct ways: (1) forward warping of modeled
on-real-object-surface augmentations (e.g., object re-texturing) into the
camera image, retaining real surface details that are not present in the
virtual object; and (2) backward warping of the camera image into the real scene
model, preserving the full use of the dense geometry buffer (depth in
particular) provided by the combined real-virtual model for registration,
leading to pixel accurate real-virtual occlusion. We discuss the trade-offs
between, and different use cases of, forward and backward warping with
model-based tracking in terms of specific properties for registration. We
demonstrate the efficacy of our approach with both simulated and real data.
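To make the screen-space correction step concrete, the following is a minimal Python sketch (OpenCV + NumPy) of pixel-wise warping driven by dense optical flow between the camera frame and the rendered real-model image. The function name and the Farneback parameters are illustrative assumptions, not the authors' implementation.

    import cv2
    import numpy as np

    def warp_augmentation(camera_img, model_img, augmentation_img):
        cam = cv2.cvtColor(camera_img, cv2.COLOR_BGR2GRAY)
        mdl = cv2.cvtColor(model_img, cv2.COLOR_BGR2GRAY)
        # Dense flow from camera to model image: for each camera pixel p,
        # p + flow(p) is the matching pixel in the rendered model image.
        flow = cv2.calcOpticalFlowFarneback(cam, mdl, None,
                                            0.5, 3, 21, 3, 5, 1.2, 0)
        h, w = cam.shape
        gx, gy = np.meshgrid(np.arange(w, dtype=np.float32),
                             np.arange(h, dtype=np.float32))
        # Sampling the model-space augmentation at p + flow(p) brings it into
        # alignment with the camera frame (use case 1); computing the flow in
        # the opposite direction and remapping the camera image instead gives
        # use case 2, where the model's depth buffer resolves occlusion.
        return cv2.remap(augmentation_img,
                         gx + flow[..., 0], gy + flow[..., 1],
                         cv2.INTER_LINEAR)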
Semi-Dense Visual Odometry for AR on a Smartphone
Authors: Thomas Schöps, Jakob Engel, Daniel Cremers
Abstract: We present a direct monocular visual odometry system that runs in real time
on a smartphone. Being a direct method, it tracks and maps on the images
themselves instead of extracted features such as keypoints. New images are
tracked using direct image alignment, while geometry is represented in the
form of a semi-dense depth map. Depth is estimated by filtering over many
small-baseline, pixel-wise stereo comparisons. This yields significantly
fewer outliers and makes it possible to map and use all image regions with
sufficient gradient, including edges. We show how a simple world model for AR
applications can be derived from semi-dense depth maps, and demonstrate the
practical applicability in the context of an AR application in which
simulated objects can collide with real geometry.
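The per-pixel depth estimation described above can be illustrated with a toy filter: each semi-dense pixel keeps an inverse-depth mean and variance that are fused with many small-baseline stereo observations. The Gaussian-fusion update below is a generic stand-in for the paper's filter; the priors and variances are invented for the example.

    import numpy as np

    class InverseDepthFilter:
        # One filter per semi-dense pixel: inverse-depth mean and variance.
        def __init__(self, prior_idepth=1.0, prior_var=1.0):
            self.mu = prior_idepth   # inverse depth [1/m]
            self.var = prior_var

        def update(self, obs_idepth, obs_var):
            # Fuse one small-baseline stereo observation (scalar Kalman update).
            k = self.var / (self.var + obs_var)
            self.mu += k * (obs_idepth - self.mu)
            self.var *= 1.0 - k

    # Fifty noisy observations of a point 2 m away (inverse depth 0.5).
    f = InverseDepthFilter()
    rng = np.random.default_rng(0)
    for _ in range(50):
        f.update(0.5 + 0.05 * rng.standard_normal(), obs_var=0.05 ** 2)
    print(f.mu, f.var)   # mu converges toward 0.5; var shrinks with each fusion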
Sticky Projections - A New Approach to Interactive Shader Lamp Tracking
Authors: Christoph Resch, Peter Keitler, Gudrun Klinker
Abstract: Shader lamps can augment physical objects with projected virtual replications
using a camera-projector system, provided that the physical and virtual
objects are well registered. In the past, precise registration and tracking
have been cumbersome and intrusive. In this paper, we present a new
method for tracking arbitrarily shaped physical objects interactively. In
contrast to previous approaches, our system is mobile and relies solely on
the projection of the virtual replication to track the physical object and
"stick" the projection to it. Our method consists of two stages: a fast pose
initialization based on structured light patterns and a non-intrusive
frame-by-frame tracking based on features detected in the projection. In the
initialization phase a dense point cloud of the physical object is
reconstructed and precisely matched to the virtual model to perfectly overlay
the projection. During the tracking phase, a radiometrically corrected
virtual camera view based on the current pose prediction is rendered and
compared to the captured image. Matched features are triangulated, providing a
sparse set of surface points that is robustly aligned to the virtual model.
The alignment transformation serves as an input for the new pose prediction.
Quantitative experiments show that our approach can robustly track complex
objects at interactive rates.
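A rough Python sketch of the tracking phase follows: matched features are triangulated from the camera and projector projection matrices, and the resulting sparse surface points are aligned to the model. A plain least-squares Kabsch fit stands in for the paper's robust alignment, and the association with model points is assumed to be given; all names and inputs are hypothetical.

    import cv2
    import numpy as np

    def kabsch(src, dst):
        # Least-squares rigid transform (R, t) such that R @ src[i] + t ≈ dst[i].
        c_src, c_dst = src.mean(axis=0), dst.mean(axis=0)
        H = (src - c_src).T @ (dst - c_dst)
        U, _, Vt = np.linalg.svd(H)
        d = np.sign(np.linalg.det(Vt.T @ U.T))
        R = Vt.T @ np.diag([1.0, 1.0, d]) @ U.T
        return R, c_dst - R @ c_src

    def track_frame(P_cam, P_proj, pts_cam, pts_proj, model_pts):
        # Triangulate matched 2D features (Nx2 arrays) into a sparse set of
        # 3D surface points using the 3x4 camera/projector projection matrices.
        X_h = cv2.triangulatePoints(P_cam, P_proj,
                                    pts_cam.T.astype(np.float64),
                                    pts_proj.T.astype(np.float64))
        X = (X_h[:3] / X_h[3]).T
        # Align the sparse points to pre-associated model points; the resulting
        # transform feeds the pose prediction for the next rendered view.
        return kabsch(X, model_pts)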
Dense Planar SLAM
Authors: Renato Salas-Moreno, Ben Glocker, Paul Kelly, Andrew Davison
Abstract: Using higher-level entities during mapping has the potential to improve
camera localisation performance and give substantial perception capabilities
to real-time 3D SLAM systems. We present an efficient new real-time approach
which densely maps an environment using bounded planes and surfels extracted
from depth images (like those produced by RGB-D sensors or dense multi-view
stereo reconstruction). Our method offers the every-pixel descriptive power
of the latest dense SLAM approaches, but directly exploits the planarity of
many parts of real-world scenes via a data-driven process that regularizes
planar regions and represents their extent accurately and efficiently using
an occupancy approach with on-line compression. Large areas can be mapped
efficiently, with useful semantic planar structure that enables intuitive
AR applications such as using any wall or other
planar surface in a scene to display a user's content.
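As a small illustration of the bounded-plane idea, the sketch below fits a plane to back-projected depth points and records its extent in a coarse occupancy grid in plane coordinates. Cell size, thresholds, and grid resolution are illustrative choices, not values from the paper.

    import numpy as np

    def fit_plane(points):
        # Least-squares plane through Nx3 points: returns (centroid, unit normal).
        c = points.mean(axis=0)
        _, _, Vt = np.linalg.svd(points - c)
        return c, Vt[2]          # normal = direction of least variance

    def plane_occupancy(points, cell=0.05, inlier_dist=0.02, size=64):
        c, n = fit_plane(points)
        # In-plane orthonormal basis (u, v) perpendicular to the normal.
        u = np.cross(n, [0.0, 0.0, 1.0])
        if np.linalg.norm(u) < 1e-6:           # normal (anti)parallel to z
            u = np.cross(n, [0.0, 1.0, 0.0])
        u /= np.linalg.norm(u)
        v = np.cross(n, u)
        d = points - c
        inliers = np.abs(d @ n) < inlier_dist  # points close to the plane
        grid = np.zeros((size, size), dtype=bool)
        iu = (d[inliers] @ u / cell + size // 2).astype(int)
        iv = (d[inliers] @ v / cell + size // 2).astype(int)
        ok = (iu >= 0) & (iu < size) & (iv >= 0) & (iv < size)
        grid[iv[ok], iu[ok]] = True            # occupied cells = plane extent
        return c, n, grid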
Real-time Deformation, Registration and Tracking of Solids Based on Physical Simulation
Authors: Ibai Leizea, Hugo Álvarez, Iker Aguinaga, Diego Borro
Abstract: This paper proposes a novel approach to registering deformations of 3D
non-rigid objects for Augmented Reality applications. Our prototype is able
to handle different types of objects in real-time regardless of their
geometry and appearance (with and without texture) with the support of an
RGB-D camera. During an automatic offline stage, the model is processed in
order to extract the data that serves as input for a physics-based
simulation. Using the simulation's output, the deformations of the model are
estimated by treating the simulated behaviour as a constraint. Furthermore, our
framework incorporates a tracking method based on templates in order to
detect the object in the scene and continuously update the camera pose
without any user intervention. Therefore, it is a complete solution that
extends from tracking to deformation formulation for either textured or
untextured objects, regardless of their geometric shape. Our proposal
focuses on providing visually correct results at a low computational cost.
Experiments with real and synthetic data demonstrate the visual accuracy and
the performance of our approach.
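To illustrate how a physics simulation can act as a constraint on observed deformations, the toy sketch below relaxes noisy observed vertex positions toward spring rest lengths, in the style of position-based dynamics. This is a generic stand-in, not the paper's simulation; the stiffness and iteration counts are invented.

    import numpy as np

    def constrain_to_simulation(observed, edges, rest_len, stiffness=0.5, iters=10):
        # Pull observed vertex positions toward configurations that satisfy
        # spring rest lengths (a position-based-dynamics-style projection).
        x = observed.copy()
        for _ in range(iters):
            for (i, j), r in zip(edges, rest_len):
                delta = x[j] - x[i]
                dist = np.linalg.norm(delta)
                if dist < 1e-9:
                    continue
                corr = stiffness * 0.5 * (dist - r) * delta / dist
                x[i] += corr   # both endpoints move halfway toward rest length
                x[j] -= corr
        return x

    # Example: a stretched edge (1.5 m) relaxes toward its 1.0 m rest length.
    verts = np.array([[0.0, 0.0, 0.0], [1.5, 0.0, 0.0]])
    print(constrain_to_simulation(verts, edges=[(0, 1)], rest_len=[1.0]))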