Dr. Andrew J. Davison
The New Scientist article "Augmented-reality machine works in real time" said: "Computer-generated scenery can be realistically added to live video footage, using a machine vision system developed at Oxford University, UK."
Researchers Andrew Davison and Ian Reid say the augmented-reality system could also, in the longer term, enable robots to navigate more effectively, or be used to virtually decorate a real house or to plan engineering work. It allows a computer to build an accurate three-dimensional model of the world using only a video camera feed, while keeping track of the camera's movement within its environment – all in real time.
Previously, it was necessary to calibrate such a system using several markers added to the scene. The Oxford team's machine requires only an object of known size to be placed in its line of sight to perform a complete calibration.
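One way to see why a single known-size object suffices: a map built from one camera alone is recovered only up to an unknown global scale, and one true length fixes that scale for the whole map. Below is a minimal C++ sketch of that idea; the names and numbers are hypothetical illustrations, not the Oxford system's code.

```cpp
#include <cstdio>
#include <vector>

// A 3D map point in the (scale-ambiguous) monocular reconstruction.
struct Point3 { double x, y, z; };

// Monocular reconstruction recovers geometry only up to an unknown global
// scale. Observing one object of known physical size fixes that scale:
// compare the object's true length to its length in the current map.
double resolveScale(double knownLengthMetres, double mappedLength) {
    return knownLengthMetres / mappedLength;
}

// Rescale every map point by the same factor to make the map metric.
void applyScale(std::vector<Point3>& map, double s) {
    for (auto& p : map) { p.x *= s; p.y *= s; p.z *= s; }
}

int main() {
    // Hypothetical example: a calibration target known to be 0.30 m wide
    // appears 1.2 units wide in the up-to-scale map.
    std::vector<Point3> map = {{0.0, 0.0, 4.0}, {1.2, 0.0, 4.0}};
    double s = resolveScale(0.30, 1.2);   // s = 0.25
    applyScale(map, s);
    std::printf("metric x of second point: %.2f m\n", map[1].x); // 0.30 m
    return 0;
}
```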
Dr. Andrew J. Davison is Lecturer and EPSRC Advanced Research Fellow at the Department of Computing, Imperial College London, and a member of the Visual Information Processing (VIP) Research Group.
He developed one of the very first real-time SLAM (Simultaneous Localization and Mapping) systems for robot navigation, and the first using vision as its primary sensor (ECCV 1998).
Andrew authored Active Search for Real-Time Vision; Modelling the World in Real-Time: How Robots Engineer Information; and Mobile Robot Navigation Using Active Vision, and coauthored Real-Time 3D SLAM for a Humanoid Robot considering Pattern Generator Information; Simultaneous Stereoscope Localization and Soft-Tissue Mapping for Minimally Invasive Surgery; and Real-Time Monocular SLAM with Straight Lines.
Read the full list of his publications!
Andrew earned a BA degree in physics at the University of Oxford (First Class Honors) in 1994 and a Ph.D. in Mobile Robot Navigation using Active Vision at the University of Oxford in 1998.
Watch this video (70.5 MB avi) of his system in action. Its ability to rapidly and accurately model its environment is demonstrated using virtual furniture, including a table and shelves, which are superimposed on the live video footage.
Download SceneLib 1.0, his open-source C++ library for SLAM (Simultaneous Localization and Mapping). It takes a modular approach to specifying the details of robot and sensor types, and includes specialized components permitting real-time vision-based SLAM with a single camera (MonoSLAM); the design is optimized towards this type of application.
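To make the single-camera SLAM idea concrete, here is a minimal sketch of the extended Kalman filter loop at the heart of MonoSLAM-style systems, written against the Eigen linear-algebra library. The class and function names (Ekf, predict, update) and the toy numbers in main are assumptions made for this illustration; this is not SceneLib's actual API.

```cpp
#include <Eigen/Dense>
#include <iostream>

// Generic EKF skeleton of the kind used in monocular SLAM: the state x
// stacks the camera pose with the 3D feature positions, and P is the
// joint covariance over all of them. Illustrative only, not SceneLib.
struct Ekf {
    Eigen::VectorXd x;  // stacked state: camera pose + map features
    Eigen::MatrixXd P;  // joint covariance

    // Prediction: propagate the state through the motion model (fx is the
    // predicted state, F its Jacobian), inflating uncertainty by the
    // process noise Q.
    void predict(const Eigen::VectorXd& fx,
                 const Eigen::MatrixXd& F,
                 const Eigen::MatrixXd& Q) {
        x = fx;
        P = F * P * F.transpose() + Q;
    }

    // Update: fuse a measurement z (e.g. an image feature location) with
    // its prediction hx, measurement Jacobian H, and measurement noise R.
    void update(const Eigen::VectorXd& z,
                const Eigen::VectorXd& hx,
                const Eigen::MatrixXd& H,
                const Eigen::MatrixXd& R) {
        Eigen::MatrixXd S = H * P * H.transpose() + R;        // innovation covariance
        Eigen::MatrixXd K = P * H.transpose() * S.inverse();  // Kalman gain
        x += K * (z - hx);
        P -= K * H * P;
    }
};

int main() {
    // Toy usage: 2-state filter (position, velocity), constant-velocity
    // motion, direct position measurement. All numbers are invented.
    Ekf ekf;
    ekf.x = Eigen::Vector2d(0.0, 1.0);
    ekf.P = Eigen::Matrix2d::Identity();

    double dt = 1.0 / 30.0;                       // one video frame
    Eigen::Matrix2d F; F << 1, dt, 0, 1;          // motion Jacobian
    Eigen::Matrix2d Q = 0.01 * Eigen::Matrix2d::Identity();
    ekf.predict(F * ekf.x, F, Q);

    Eigen::MatrixXd H(1, 2); H << 1, 0;           // measure position only
    Eigen::VectorXd z(1); z << 0.05;
    Eigen::MatrixXd R(1, 1); R << 0.02;
    ekf.update(z, H * ekf.x, H, R);

    std::cout << "posterior state: " << ekf.x.transpose() << std::endl;
    return 0;
}
```

In a real single-camera system the state would stack the camera pose with every mapped feature, and H would come from projecting those features through the camera model at each video frame.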