| Name | Description | Size | Format |
|---|---|---|---|
| | | 20.95 MB | Adobe PDF |
Authors
Abstract(s)
This dissertation presents research on rendering images from light fields captured
with a focused plenoptic camera with extended depth of field.
A basic overview of the 7-dimensional plenoptic function is first given, followed by a
description of the Two-Plane Parametrisation. Some of the methods used for sampling
the plenoptic function are then described, namely those equivalent to the acquisition
schemes implemented by the camera gantry, the unfocused plenoptic camera and
the focused plenoptic camera. State-of-the-art image rendering algorithms are also
reviewed for both focused and unfocused plenoptic cameras.
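For reference, the plenoptic function and its two-plane reduction can be written as below; the notation follows common usage in the light-field literature and may differ from the symbols used in the dissertation itself.

```latex
% 7D plenoptic function: radiance observed at viewpoint (x, y, z),
% in direction (\theta, \phi), at wavelength \lambda and time t.
P = P(x, y, z, \theta, \phi, \lambda, t)

% Two-plane parametrisation: each ray is indexed by its intersections
% (u, v) and (s, t) with two parallel reference planes, reducing the
% static, monochromatic case to a 4D light field.
L = L(u, v, s, t)
```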
A comprehensive study of the behaviour of focus metrics when applied to images rendered
from a focused plenoptic camera is presented, covering 34 of the most widely used metrics
in the literature. It was found that, due to high-frequency artefacts caused by the rendering
process, the currently available focus metrics yield inflated values for this kind of image,
leading to misleading indications in which worse-focused images obtain better focus measures.
Subjective tests were carried out to corroborate these results.
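As a minimal illustration of the failure mode described above, the sketch below implements one classic focus metric, the variance of the Laplacian; it is not the dissertation's own implementation, and the discrete Laplacian used here is an assumed, illustrative choice. Because the measure responds to any high-frequency content, sharp seams introduced by patch-based rendering inflate it even when the underlying scene is poorly focused.

```python
import numpy as np

def laplacian_variance(image: np.ndarray) -> float:
    """Variance-of-Laplacian focus measure (one widely used metric).

    Assumes a single-channel image. High-frequency content raises the
    score, so seam artefacts from patch-based rendering can make a
    worse-focused rendering score higher than a better-focused one.
    """
    img = image.astype(np.float64)
    # 3x3 Laplacian computed from shifted copies (interior pixels only).
    lap = (img[:-2, 1:-1] + img[2:, 1:-1] +
           img[1:-1, :-2] + img[1:-1, 2:] -
           4.0 * img[1:-1, 1:-1])
    return float(lap.var())
```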
Methods for minimizing the rendering artefacts are then proposed. An algorithm for
choosing the maximum patch size for each micro-image was designed, in order to minimize
the distortions caused by the vignetting effect of the micro-lenses. An inpainting
algorithm, based on anisotropic diffusion inpainting, is then used to minimize the remaining
artefacts present at the borders between adjacent micro-images.
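The following is a toy sketch of diffusion-based inpainting restricted to the seam pixels between micro-images, in the spirit of the anisotropic diffusion approach mentioned above; the function name, the Perona-Malik-style conductance and all parameter values are illustrative assumptions rather than the algorithm actually used in the dissertation.

```python
import numpy as np

def diffuse_inpaint(image: np.ndarray, mask: np.ndarray,
                    iterations: int = 200, kappa: float = 20.0,
                    step: float = 0.2) -> np.ndarray:
    """Toy anisotropic-diffusion inpainting (Perona-Malik-style weights).

    Pixels where ``mask`` is True (the seams between adjacent micro-images)
    are iteratively updated from their four neighbours; the edge-stopping
    conductance keeps strong image edges from being smeared across seams.
    Assumes a single-channel image; boundaries wrap (np.roll) for simplicity.
    """
    img = image.astype(np.float64).copy()
    for _ in range(iterations):
        # Finite differences towards the four neighbours.
        n = np.roll(img, 1, axis=0) - img
        s = np.roll(img, -1, axis=0) - img
        w = np.roll(img, 1, axis=1) - img
        e = np.roll(img, -1, axis=1) - img
        # Edge-stopping conductance: small across strong edges.
        g = lambda d: np.exp(-(d / kappa) ** 2)
        update = step * (g(n) * n + g(s) * s + g(w) * w + g(e) * e)
        # Only the masked (seam) pixels are modified; the rest stay fixed.
        img[mask] += update[mask]
    return img
```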
Finally, a method for dealing with the redundant information generated by a plenoptic camera
with extended depth of field is presented. Three different views of the same scene are
rendered, one with each of the three different types of lenses. It is then proven that any
linear combination of the images always results in worse focus than selecting the better-focused
one. Thus, a multi-focus image fusion algorithm is proposed to merge the three
images captured by an extended depth-of-field camera into a single one, which presents a
higher focus level than any of the three individual images.
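A minimal sketch of selection-based multi-focus fusion is given below: for each block, the output copies the block from whichever view is locally sharpest, rather than averaging the views, consistent with the argument that linear combinations degrade focus. The block size, the gradient-energy sharpness measure and the function name are assumed for illustration and are not the dissertation's exact fusion algorithm.

```python
import numpy as np

def fuse_by_selection(views: list[np.ndarray], block: int = 32) -> np.ndarray:
    """Block-wise multi-focus fusion by selection (illustrative sketch).

    Assumes the views are single-channel arrays of equal shape. Each block
    of the fused image is taken from the view with the highest local
    sharpness, instead of blending the three views together.
    """
    def sharpness(patch: np.ndarray) -> float:
        # Gradient energy as a simple local focus measure.
        gy, gx = np.gradient(patch.astype(np.float64))
        return float(np.mean(gx ** 2 + gy ** 2))

    h, w = views[0].shape[:2]
    fused = np.zeros_like(views[0])
    for y in range(0, h, block):
        for x in range(0, w, block):
            patches = [v[y:y + block, x:x + block] for v in views]
            best = max(range(len(views)), key=lambda i: sharpness(patches[i]))
            fused[y:y + block, x:x + block] = patches[best]
    return fused
```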
Description
Keywords
Light field; Plenoptic; Rendering; All-in-focus; Focus; Focus metrics; Subjective quality assessment; Image inpainting; PDEs; Image fusion; Image registration