
Research

Light field imaging is an emerging research field due to the new capabilities it brings, including post-capture refocusing, aperture control, and 3D modeling. Single-shot, single-sensor light field cameras try to balance the fundamental trade-off between spatial and angular resolution. The spatial resolution achieved with such cameras is typically far from satisfactory, limiting the widespread adoption of light field cameras. In this paper, we present a hybrid-sensor light field camera that uses minimal optical components, a regular sensor, and a micro-lens-array-based light field sensor to produce a high-spatial-resolution light field. The use of a single lens and matching image planes prevents complexities, such as occlusions, that multi-lens systems suffer from. In our experiments, we demonstrate that the proposed hybrid-sensor camera leads to improved depth estimation in addition to an increase in spatial resolution.
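
The abstract does not describe how the regular-sensor image and the low-resolution light field are fused. As a rough illustration only, the Python/NumPy/OpenCV sketch below shows one simple way the high-frequency detail of a registered high-resolution image could be injected into upsampled sub-aperture views; the function name, array layout, and the assumption of a pre-registered high-resolution image are illustrative and not taken from the paper.

import numpy as np
import cv2

def enhance_subaperture_views(lf_views, highres_img, blur_sigma=2.0):
    # lf_views: (U, V, h, w) grayscale sub-aperture images from the light field sensor.
    # highres_img: (H, W) grayscale image from the regular sensor, assumed to be
    # photometrically and geometrically registered to the central view (hypothetical step).
    H, W = highres_img.shape
    hr = highres_img.astype(np.float32)
    # High-frequency detail layer of the regular-sensor image.
    detail = hr - cv2.GaussianBlur(hr, (0, 0), blur_sigma)
    U, V = lf_views.shape[:2]
    enhanced = np.empty((U, V, H, W), dtype=np.float32)
    for u in range(U):
        for v in range(V):
            # Upsample the low-resolution view to the regular-sensor grid.
            up = cv2.resize(lf_views[u, v].astype(np.float32), (W, H),
                            interpolation=cv2.INTER_CUBIC)
            # Inject the shared detail layer; parallax between views is ignored
            # in this sketch, so the result is only exact near the focal plane.
            enhanced[u, v] = np.clip(up + detail, 0.0, 255.0)
    return enhanced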

Light field (or plenoptic) imaging has become an attractive research field due to its post-capture capabilities, including refocusing, perspective change, and depth estimation. Recently emerged micro-lens-array-based cameras have made light field acquisition practical. In this paper, we propose to convert such a plenoptic camera into a high-dynamic-range camera through a minor optical modification. The modification is an optical mask placed in front of the main lens that increases the vignetting effect, that is, the darkening towards the borders of the image plane due to loss of light. As a result, different parts of the dynamic range are captured with different sub-aperture images of the light field. These sub-aperture images are then fused through photometric registration and optical flow vectors to produce a high-dynamic-range image.
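
The fusion step is only summarized in the abstract. The sketch below illustrates, under assumptions not stated in the paper, how sub-aperture images with different effective exposures might be merged into a single radiance map once they have been aligned with optical-flow vectors and their relative gains recovered by photometric registration; the function name, the hat-shaped weighting, and the gain normalization are illustrative choices, not the authors' method.

import numpy as np

def merge_hdr_subapertures(aligned_views, gains, eps=1e-6):
    # aligned_views: list of float32 images in [0, 1], already warped onto a
    # common view with optical-flow vectors (assumed pre-processing).
    # gains: per-view relative exposure factors from photometric registration
    # (brighter, less-vignetted views have larger gains).
    num = np.zeros_like(aligned_views[0], dtype=np.float32)
    den = np.zeros_like(aligned_views[0], dtype=np.float32)
    for img, gain in zip(aligned_views, gains):
        # Hat-shaped weight: trust well-exposed pixels, down-weight
        # under- and over-exposed ones.
        w = 1.0 - 2.0 * np.abs(img - 0.5)
        num += w * (img / gain)   # bring each view to a common radiance scale
        den += w
    return num / (den + eps)      # fused HDR radiance estimate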

By capturing the spatial and angular radiance distribution, light field cameras introduce new capabilities that are not possible with conventional cameras. So far in the light field imaging literature, the focus has been on the theory and applications of single light field capture. By combining multiple light fields, it is possible to obtain new capabilities and enhancements, and even to exceed physical limitations of the imaging device, such as spatial resolution and aperture size. In this paper, we present an algorithm to register and stitch multiple light fields. We utilize the regularity of the spatial and angular sampling in light field data and extend techniques developed for stereo vision systems to light fields. Such an extension is not straightforward for a micro-lens array (MLA) based light field camera due to its extremely small baseline and low spatial resolution. By merging multiple light fields captured by an MLA-based camera, we obtain a larger synthetic aperture, which results in improvements in light field capabilities, such as increased depth estimation range/accuracy and a wider perspective shift range.
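
As an illustration of the registration idea only (not the authors' algorithm), the sketch below estimates a purely translational shift between the central sub-aperture images of two light fields with OpenCV's phase correlation and blends the overlap; an actual light field registration would operate on the full 4D data and handle parallax, so this is a deliberately simplified stand-in.

import numpy as np
import cv2

def register_and_stitch_central_views(view_a, view_b):
    # view_a, view_b: grayscale central sub-aperture images of two light
    # fields with overlapping fields of view.
    a = np.float32(view_a)
    b = np.float32(view_b)
    # Estimate the translation that maps view_b onto view_a.
    (dx, dy), _response = cv2.phaseCorrelate(b, a)
    h, w = a.shape
    M = np.float32([[1, 0, dx], [0, 1, dy]])
    warped = cv2.warpAffine(b, M, (w, h))
    mask = cv2.warpAffine(np.ones_like(b), M, (w, h)) > 0.5
    # Average the overlap region, keep view_a elsewhere.
    stitched = a.copy()
    stitched[mask] = 0.5 * (stitched[mask] + warped[mask])
    return stitched, (dx, dy)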

Light field imaging involves capturing both the angular and spatial distribution of light; it enables new capabilities, such as post-capture digital refocusing, camera aperture adjustment, perspective shift, and depth estimation. Micro-lens array (MLA) based light field cameras provide a cost-effective approach to light field imaging. There are two main limitations of MLA-based light field cameras: low spatial resolution and narrow baseline. While low spatial resolution limits the general-purpose use and applicability of light field cameras, narrow baseline limits the depth estimation range and accuracy. In this paper, we present a hybrid stereo imaging system that includes a light field camera and a regular camera. The hybrid system addresses both the spatial resolution and narrow baseline issues of MLA-based light field cameras while preserving light field imaging capabilities.
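
For a sense of how the wide-baseline pair could be used, the following sketch computes a disparity map between the rectified central sub-aperture view of the light field camera and the rectified regular-camera image using OpenCV's semi-global block matching; the parameters and the rectification assumption are illustrative and are not taken from the paper.

import numpy as np
import cv2

def hybrid_stereo_disparity(lf_center_view, regular_view,
                            num_disparities=128, block_size=7):
    # lf_center_view, regular_view: rectified 8-bit grayscale images forming
    # a wide-baseline stereo pair (the rectification step is omitted here).
    matcher = cv2.StereoSGBM_create(
        minDisparity=0,
        numDisparities=num_disparities,   # must be divisible by 16
        blockSize=block_size,
        P1=8 * block_size * block_size,   # smoothness penalties
        P2=32 * block_size * block_size,
    )
    # OpenCV returns fixed-point disparities scaled by 16.
    disp = matcher.compute(lf_center_view, regular_view).astype(np.float32) / 16.0
    return disp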