Informatics Colloquium, 26. 4. 2016, 14:00, lecture hall D2
doc. Ing. Martin Čadík, Ph.D., FIT VUT
Visual Localization in Natural Environments for Computational Photography Applications
Abstract: With the advent of smartphones and hand-held devices equipped with integrated cameras, today virtually everyone is a photographer. Every day, we take photographs in larger quantities and often of higher technical quality than ever before. We share photos, edit them, search and archive them, enhance them, capture them for specific purposes, or simply want our shots to look nice. Current digital cameras no longer merely capture light; they in fact compute pictures. Today there is practically no image that is not computationally processed to some extent. Nevertheless, images taken by amateur photographers often lack the qualities of professional photos, and some image enhancement and/or editing is necessary. Computational photography evolved from computer graphics, image processing, computer vision, and optics to solve these issues and, more generally, to extend the capabilities of current photography and display technology.
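One of the applications mentioned above, extending a camera's limited dynamic range, can be illustrated with a minimal exposure-fusion sketch: several differently exposed frames are merged by weighting each pixel according to how well exposed it is. This is a simplified toy version for intuition only, not the method discussed in the talk; the frame data and the Gaussian "well-exposedness" weight are illustrative assumptions.

```python
import numpy as np

def fuse_exposures(stack, sigma=0.2, eps=1e-12):
    """Merge differently exposed grayscale frames (values in [0, 1])
    by weighting each pixel by its distance from mid-gray, a
    simplified form of exposure fusion."""
    stack = np.asarray(stack, dtype=np.float64)          # shape (n, h, w)
    # Gaussian weight centered at 0.5 favors well-exposed pixels.
    weights = np.exp(-((stack - 0.5) ** 2) / (2 * sigma ** 2))
    weights /= weights.sum(axis=0) + eps                 # normalize per pixel
    return (weights * stack).sum(axis=0)

# Synthetic bracketed pair: an underexposed and an overexposed frame.
scene = np.linspace(0.0, 1.0, 64).reshape(8, 8)
under = np.clip(scene * 0.5, 0, 1)   # dark frame keeps highlight detail
over  = np.clip(scene * 1.5, 0, 1)   # bright frame keeps shadow detail
fused = fuse_exposures([under, over])
```

In the fused result, shadow regions are dominated by the bright frame and highlight regions by the dark frame, which is the core idea behind merging an exposure bracket.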
In this talk, I will introduce the field of computational photography, its elementary principles, and its applications (e.g. extending limited dynamic range, widening the field of view, or extending the depth of field). Then, I will focus on our recent work on automatic visual image geo-localization. Our aim is to accurately find the location and orientation of the camera that captured the image. We introduce a system for automatic alignment of the query photo with a geo-referenced 3D terrain model. We propose a new alignment metric to accurately predict the camera orientation following the large-scale visual localization step. Having a sufficiently accurate match between a photograph and a 3D model offers new possibilities for image enhancement. It can be used to transform photographs into a realistic virtual 3D experience, e.g. to automatically highlight elements in the image, such as the travel path taken, names of mountains, or other landmarks. Furthermore, the synthetic depth map, and/or the whole 3D model matched to the query photo, can be used for novel view synthesis, image relighting, dehazing, or refocusing.
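The orientation-alignment idea above can be sketched in a toy form: extract a horizon elevation profile from the photo, render the full 360° horizon from the terrain model at the candidate location, and find the azimuth at which the two profiles correlate best. This is an illustrative stand-in, not the alignment metric from the talk; the single-peak synthetic horizon and the normalized cross-correlation score are assumptions made for the example.

```python
import numpy as np

def estimate_azimuth(photo_horizon, model_horizon):
    """Estimate camera azimuth (in samples) by circularly sliding the
    photo's horizon profile along the terrain model's 360-degree horizon
    profile and keeping the shift with the highest normalized
    correlation. Toy version; real systems match richer features."""
    p = photo_horizon - photo_horizon.mean()
    n, w = len(model_horizon), len(photo_horizon)
    best_shift, best_score = 0, -np.inf
    for s in range(n):
        seg = np.roll(model_horizon, -s)[:w]    # candidate window
        seg = seg - seg.mean()
        score = np.dot(seg, p) / (np.linalg.norm(seg) * np.linalg.norm(p) + 1e-12)
        if score > best_score:
            best_shift, best_score = s, score
    return best_shift

# Toy 360-degree terrain horizon (1 sample per degree) with one peak,
# and a 90-degree-wide "photo" horizon cropped starting at azimuth 120.
az = np.arange(360)
model = np.exp(-((az - 150) ** 2) / (2 * 15.0 ** 2))   # one mountain peak
photo = model[120:210]
print(estimate_azimuth(photo, model))  # → 120
```

Once the best orientation is known, rendering the terrain model from that pose yields the synthetic depth map mentioned above, which can then drive effects such as dehazing or refocusing.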