Larry,
I just wanted to comment on your statement that
"it would in theory be possible to extract depth/distance information from
the image..."
I haven't thought about it too deeply, but my intuition tells me that
you would be missing too much information to be able to
reconstitute the original image. Let me try to explain what I mean.
While it is not a complete analogy, you can think of the image that you
get on the sensor as a Fourier spectrum of the original object.
You can reconstitute the original object only if you have ALL the Fourier
harmonics. For that you'd need to collect the light that was scattered
from the object in all directions. But because of the finite size of the
sensor, you are catching only a very limited subset of those
"F. harmonics".
So, you can "reconstitute" something, but how well it will reproduce the
original, that "will depend".
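Igor's point can be illustrated numerically. The sketch below (my own toy example, not anything from the thread) builds a sharp 1-D "object", keeps only a subset of its Fourier harmonics, and shows that the reconstruction error grows as fewer harmonics are kept:

```python
import numpy as np

# A "sharp" 1-D object: a square pulse, rich in high harmonics.
n = 256
x = np.zeros(n)
x[96:160] = 1.0
spectrum = np.fft.fft(x)

def reconstruct(keep):
    """Zero out all but the lowest-frequency harmonics, then invert."""
    s = spectrum.copy()
    s[keep:n - keep] = 0.0
    return np.fft.ifft(s).real

err_many = np.mean((reconstruct(64) - x) ** 2)
err_few = np.mean((reconstruct(8) - x) ** 2)
print(err_few, err_many)   # fewer harmonics -> larger reconstruction error
```

By Parseval's theorem the mean-squared error is exactly the energy in the discarded harmonics, so throwing away more of the spectrum can only make the reconstruction worse.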
I hope this helps,
Igor
Larry Colen Mon, 29 Feb 2016 12:28:23 -0800 wrote:
If you look at the depth of field equations, they are based on whether the
circle of confusion is smaller than the size of a pixel. When you look
at an image on the web (generally about 2 MP or below) the effective depth of
field is a lot greater than if you pixel-peep the raw file (16-24 MP or more),
because the smaller file can tolerate a much larger circle of confusion than
the raw image.
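The resolution dependence Larry describes falls straight out of the standard thin-lens depth-of-field formulas. A quick sketch (the lens, aperture, and distances below are illustrative numbers, not anything specific from the thread):

```python
# Standard thin-lens depth-of-field formulas; all numbers illustrative.
def dof(f_mm, N, c_mm, s_mm):
    """Total depth of field for focal length f, f-number N,
    acceptable circle of confusion c, subject distance s (all mm).
    Valid for subject distances closer than the hyperfocal distance."""
    H = f_mm * f_mm / (N * c_mm) + f_mm            # hyperfocal distance
    near = s_mm * (H - f_mm) / (H + s_mm - 2 * f_mm)
    far = s_mm * (H - f_mm) / (H - s_mm)
    return far - near

# 50 mm lens at f/4, subject at 3 m.
# A 24 MP full-frame sensor has a pixel pitch of about 0.006 mm,
# versus the classic "viewing-size" CoC of 0.03 mm (roughly what a
# 2 MP web-sized image can tolerate).
print(dof(50, 4, 0.006, 3000))   # pixel-peeping: shallow DoF
print(dof(50, 4, 0.030, 3000))   # web size: several times deeper
```

With these numbers the web-sized circle of confusion yields several times the depth of field of the pixel-level one, which matches the everyday observation that downsized images look sharp over a much deeper zone.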
It seems to me that, with sensors of such high resolution that they are
diffraction limited at f/5.6 or so, it would be possible for image
processing software to detect the increasing circle of confusion (at 2 or
3 pixels) with a lot more accuracy than our eyes can, and it could therefore
enhance the effects of depth of field more accurately than just applying a
blur to the background of an image.
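One common way software detects local defocus is a per-tile focus measure such as the variance of the Laplacian: tiles inside the growing circle of confusion have little high-frequency energy and score low. A minimal sketch, assuming a single-channel float image (the function name and window size are my own, purely illustrative):

```python
import numpy as np

def local_sharpness(img, win=8):
    """Crude per-tile sharpness map: variance of a discrete Laplacian.
    Tiles with low Laplacian variance are likely defocused, i.e. their
    detail has been smeared over a larger circle of confusion."""
    lap = (-4 * img
           + np.roll(img, 1, 0) + np.roll(img, -1, 0)
           + np.roll(img, 1, 1) + np.roll(img, -1, 1))
    h, w = img.shape
    tiles = lap[:h - h % win, :w - w % win].reshape(h // win, win,
                                                    w // win, win)
    return tiles.var(axis=(1, 3))

# Synthetic test: left half is detailed noise ("in focus"),
# right half is a smooth ramp ("defocused").
rng = np.random.default_rng(0)
img = np.zeros((64, 64))
img[:, :32] = rng.random((64, 32))
img[:, 32:] = np.linspace(0.0, 1.0, 32)
s = local_sharpness(img)
print(s[:, :2].mean() > s[:, -2:].mean())
```

A real implementation would of course need to separate "defocused" from "genuinely featureless" regions (a blank sky scores low on any focus measure), which is part of why this is a hard problem.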
Likewise, given a good model of the lens (bokeh) it might also be possible
to mathematically increase the depth of field.
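"Mathematically increasing the depth of field" given a lens model is essentially deconvolution. One textbook approach is a Wiener filter: invert the known point-spread function in the frequency domain, regularized by a noise-to-signal term. A toy sketch, assuming a Gaussian PSF (the PSF, sizes, and regularization constant are illustrative assumptions, not a real lens model):

```python
import numpy as np

def wiener_deconvolve(blurred, psf, nsr=1e-3):
    """Wiener deconvolution: invert a known blur (PSF) in the
    frequency domain, regularized by a noise-to-signal ratio `nsr`."""
    H = np.fft.fft2(np.fft.ifftshift(psf), s=blurred.shape)
    G = np.fft.fft2(blurred)
    F = np.conj(H) / (np.abs(H) ** 2 + nsr) * G
    return np.fft.ifft2(F).real

# Toy example: blur a point source with a Gaussian PSF, then restore it.
n = 64
y, x = np.mgrid[-n // 2:n // 2, -n // 2:n // 2]
psf = np.exp(-(x ** 2 + y ** 2) / (2 * 2.0 ** 2))
psf /= psf.sum()

sharp = np.zeros((n, n))
sharp[32, 32] = 1.0
blurred = np.fft.ifft2(np.fft.fft2(sharp)
                       * np.fft.fft2(np.fft.ifftshift(psf))).real
restored = wiener_deconvolve(blurred, psf)
# The restored point is far more concentrated than the blurred one.
print(blurred.max(), restored.max())
```

The catch for real photographs is that the PSF varies with distance across the frame, so this only "extends" depth of field where the blur model actually matches, and noise is amplified wherever the PSF has killed the signal.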
A corollary to this is that it would in theory be possible to extract
depth/distance information from the image (though it might be hard to tell
the difference between something being 2 times the focal distance and 1/2
the focal distance).
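The near/far ambiguity Larry parenthesizes is easy to see in the thin-lens blur-circle geometry: for a lens focused at s0, two distances, one in front of and one behind the focus plane, can produce exactly the same blur diameter. A sketch with illustrative numbers:

```python
# Thin-lens blur-circle model (standard geometry; numbers illustrative).
def blur_diameter(s, s0, f, A):
    """Blur-circle diameter on the sensor for an object at distance s,
    lens of focal length f and aperture diameter A, focused at s0.
    All distances in mm."""
    return A * f * abs(s - s0) / (s * (s0 - f))

f, A, s0 = 50.0, 12.5, 2000.0              # 50 mm f/4, focused at 2 m
b_near = blur_diameter(1500.0, s0, f, A)   # object 0.5 m in front of focus
# Solve blur_diameter(s_far) == b_near for a distance behind the focus:
K = A * f / (s0 - f)
s_far = K * s0 / (K - b_near)
b_far = blur_diameter(s_far, s0, f, A)
# Two different distances, one nearer and one farther than the focus
# plane, yield the same blur -- blur magnitude alone cannot separate them.
print(s_far, b_near, b_far)
```

With these numbers an object at 1.5 m and one at 3 m blur identically, so a depth-from-defocus scheme needs extra cues (two exposures at different focus or aperture settings is the classic fix) to break the tie.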
Are there any hard-core signal/image processing nerds on the list who know
anything about work being done on this? It wasn't too long ago that it
would take the sort of processing that only Los Alamos or the NSA had to
do this, but desktop computers are probably running something like 2005
"Craymarks", particularly with GPUs.
--
PDML Pentax-Discuss Mail List
[email protected]
http://pdml.net/mailman/listinfo/pdml_pdml.net
to UNSUBSCRIBE from the PDML, please visit the link directly above and follow
the directions.