IEEE ISMAR 2014 Workshop on
Tracking Methods & Applications

Description

The focus of this workshop is on all issues related to tracking for mixed and augmented reality applications.

Unlike the tracking sessions of the main conference, this workshop does not require pure novelty of the proposed methods; rather, it encourages presentations that concentrate on complete systems and integrated approaches engineered to run in real-world scenarios.

This workshop will be held on Monday, September 8, 2014, in conjunction with the 2014 International Symposium on Mixed and Augmented Reality (ISMAR), September 10-12, in Munich, Germany.

Video recordings of the workshop are now available on YouTube.

Workshop Program

08:45-09:00  Welcome message
09:00-10:00  Keynote I

Resource-efficient tracking methods for mobile navigation on embedded systems
Darius Burschka

Visual navigation is an essential component for uniquely identifying the position of a camera in a local environment. While recent active 3D reconstruction methods (Kinect, etc.) have enabled multiple localization approaches directly in three-dimensional space, many applications, especially on mobile systems and in ubiquitous wearable devices, require very fast pose estimation in the 100 Hz range with very limited computational resources. A typical application is the correct capture of, e.g., human gestures. Passive systems that rely on tracking changes in the images have a large advantage over active systems such as the Kinect in that they are applicable to indoor and outdoor applications at arbitrary ranges of operation.

In my talk, I will present our current work on efficient monocular image-based localization and 3D reconstruction, with an emphasis on its applicability to low-power embedded systems. This work led to improvements in existing tracking techniques, such as the Kanade-Lucas tracker, but also to the development of novel keypoint detectors, such as AGAST, by my group. I will present our current work on the analysis of motion fields for robust motion estimation and collision detection for mobile systems. I will show applications of the developed tracking methods in the areas of flying systems, automotive applications and human-computer interaction.
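For context, the detector/tracker combination mentioned in the abstract can be assembled from standard building blocks. The following is a minimal sketch, not the speaker's implementation, pairing an AGAST detector with a pyramidal KLT tracker in OpenCV; the threshold value and the grayscale frame inputs are assumptions:

```python
# Illustrative sketch only (not the speaker's implementation): AGAST corners
# tracked frame-to-frame with a pyramidal Kanade-Lucas (KLT) tracker.
# Assumes OpenCV >= 3; the threshold and frame variables are placeholders.
import cv2
import numpy as np

detector = cv2.AgastFeatureDetector_create(threshold=20)

def track_frame_pair(prev_gray, curr_gray):
    """Return matched point pairs between two consecutive grayscale frames."""
    keypoints = detector.detect(prev_gray)
    if not keypoints:
        return np.empty((0, 2)), np.empty((0, 2))
    pts = np.float32([kp.pt for kp in keypoints]).reshape(-1, 1, 2)
    # Pyramidal Lucas-Kanade optical flow propagates each corner to the next frame.
    next_pts, status, _err = cv2.calcOpticalFlowPyrLK(prev_gray, curr_gray, pts, None)
    good = status.ravel() == 1
    # The surviving pairs can feed, e.g., essential-matrix pose estimation.
    return pts[good].reshape(-1, 2), next_pts[good].reshape(-1, 2)
```

Detecting once and then propagating points by flow alone is what keeps such pipelines cheap enough for the high-rate, low-power operation the talk targets.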

10:00-11:00  Technical papers

Markerless Camera Tracking for Complex Structures such as Plant Facilities
Tsuneya Kurihara and Hirohiko Sagawa

A Depth-based Polar Coordinate System for People Segmentation and Tracking with Multiple RGB-D Sensors
Emilio J. Almazán and Graeme A. Jones

Instruct, Monitor and Reinforce: An Augmented and Virtual Reality System for Training Autistic Children
Lakshmiprabha N. S., Alexandre Santos, Dimitar Mladenov and Olga Beltramello

11:00-11:15  Coffee break
11:15-12:30  Invited talks

Augmenting the world with Intel RealSense
Gila Kamhi, Intel, USA

Metaio Software Suite & Tracking Update
Maximilian Kruschwitz, metaio GmbH, Germany

Volkswagen Tracking Challenge team presentations:

  • Louis Fong, Infocomm Research, Singapore
  • Harald Wuest, Fraunhofer IGD, Germany
  • Steve Bourgeois, CEA-List, France
12:30-13:30  Lunch break
13:30-14:30  Keynote II

Beyond Features: Dense and Direct Methods for Visual SLAM and Geometric Reconstruction
Daniel Cremers

The reconstruction of the 3D world from a moving camera is among the central challenges in computer vision. While traditional approaches have focused on computing correspondence and 3D structure for a sparse set of feature points, more recent approaches aim at directly computing dense geometric surfaces using all available image data. In my talk, I will present some recent developments on convex formulations for dense reconstruction from multiple images or multiple videos. Furthermore, I will present real-time capable direct methods for reconstructing the world from handheld color or RGB-D cameras. Applications include free-viewpoint television and augmented reality.

This is joint work with Kalin Kolev, Martin Oswald, Jakob Engel, Jan Stühmer, Christian Kerl, Frank Steinbrücker and Jürgen Sturm.
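As background for "dense and direct": instead of minimizing reprojection errors over sparse feature matches, direct methods minimize a photometric error over (nearly) all pixels. A generic form of this objective, in notation chosen here purely for illustration, is:

```latex
% Generic direct image-alignment objective (illustrative notation, not
% necessarily the speaker's): I_ref and I are two intensity images, \Omega is
% the reference image domain, and w(x, \xi) warps pixel x into image I under
% camera motion \xi using the current depth estimate at x.
E(\xi) = \sum_{x \in \Omega} \bigl( I\bigl(w(x,\xi)\bigr) - I_{\mathrm{ref}}(x) \bigr)^2
```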

14:30-15:30  Invited talks

Harald Wuest, Fraunhofer IGD, Germany

Computer Vision and Augmented Reality Research at Qualcomm
Gerhard Reitmayr, Qualcomm Research Vienna, Austria

15:30-15:45  Coffee break
15:45-16:15  Break-out session
16:15-16:45  Reports from break-out session
16:45-17:00  Closing words
17:00-18:00  Demo session

Keynote Speakers

Daniel Cremers, TU Munich

Daniel Cremers received Bachelor degrees in Mathematics (1994) and Physics (1994), and a Master's degree in Theoretical Physics (1997) from the University of Heidelberg. In 2002 he obtained a PhD in Computer Science from the University of Mannheim, Germany. Subsequently he spent two years as a postdoctoral researcher at the University of California at Los Angeles (UCLA) and one year as a permanent researcher at Siemens Corporate Research in Princeton, NJ. From 2005 until 2009 he was an associate professor at the University of Bonn, Germany. Since 2009 he has held the Chair for Computer Vision and Pattern Recognition at the Technical University of Munich. His publications have received several awards, including the 'Best Paper of the Year 2003' (Int. Pattern Recognition Society), the 'Olympus Award 2004' (German Soc. for Pattern Recognition) and the '2005 UCLA Chancellor's Award for Postdoctoral Research'. Professor Cremers is an associate editor for several journals, including the International Journal of Computer Vision, the IEEE Transactions on Pattern Analysis and Machine Intelligence and the SIAM Journal on Imaging Sciences. He has served as area chair (associate editor) for ICCV, ECCV, CVPR, ACCV, IROS, etc. He serves as program chair for ACCV 2014. In 2009, he received an ERC Starting Grant. In December 2010 he was listed among “Germany's top 40 researchers below 40” (Capital). Prof. Cremers is Managing Director of the Department of Computer Science at TU Munich.

Darius Burschka, TU Munich

Darius Burschka received his PhD degree in Electrical and Computer Engineering in 1998 from the Technische Universität München in the field of vision-based navigation and map generation with binocular stereo systems. In 1999, he was a Postdoctoral Associate at Yale University in New Haven, Connecticut, where he worked on laser-based map generation and landmark selection from video images for vision-based navigation systems. From 1999 to 2003, he was an Associate Research Scientist at the Johns Hopkins University, Baltimore, Maryland. From 2003 to 2005, he was an Assistant Research Professor in Computer Science at the Johns Hopkins University. Currently, he is an Associate Professor in Computer Science at the Technische Universität München, Germany, where he heads the computer vision and perception group. He was an area coordinator in the DFG Cluster of Excellence "Cognition in Technical Systems". Since 2005 he has headed a virtual institute for "Telerobotics and Sensor Data Fusion" between the German Aerospace Agency (DLR) and the Technische Universität München. He is a co-chair of the IEEE RAS Technical Committee on Computer and Robot Vision.

His areas of research are sensor systems for mobile and medical robots and human-computer interfaces. The focus of his research is on vision-based navigation and three-dimensional reconstruction from sensor data. Dr. Burschka is a Senior Member of the IEEE.

Call for Papers

Download the official call for papers here.

The concept of this workshop is to look at pose tracking from an end-to-end point of view. The research fields covered include self-localization using computer vision or other sensing modalities (such as depth cameras, GPS, inertial sensors, etc.) and tracking system issues (such as system design, calibration, estimation, fusion, etc.). This year’s focus is also expanded to include research on object tracking and semantic scene understanding with relevance to augmented reality. Implementations on mobile devices and under real-time constraints are also part of the workshop focus. These are issues of core importance for practical augmented reality systems.

We invite submissions describing practical tracking solutions for AR which address issues including, but not limited to:

  • Model-based detection and tracking
  • Image-based localization
  • Large scale object detection and recognition
  • SLAM and online reconstruction
  • Sensor integration and fusion
  • Tracking system design
  • Calibration
  • User generated content and tracking data
  • Networking and persistence
  • Flow graphs and plug-in architectures
  • Optimizations and real-time implementations for mobile and embedded hardware
  • Tracking with depth sensors or other alternative modalities
  • Semantic scene parsing and scene understanding for tracking

Authors are invited to submit original, unpublished manuscripts in standard IEEE proceedings format.

Please submit your manuscript in PDF format by email to the organizers. We will accept submissions of either technical papers or position statements, with a maximum length of six pages. Accepted papers must be presented by one of the authors at the workshop.

Organizers and Program Committee

  • Jonathan Ventura, University of Colorado, Colorado Springs, USA
  • Daniel Wagner, Qualcomm, Austria
  • Daniel Kurz, metaio, Germany
  • Harald Wuest, Fraunhofer IGD, Germany
  • Selim Benhimane, Intel, USA