The most important use case for this idea would also depend on support for 
low priority control points, which IIUC is in a fork of Hugin that I 
haven't had time to look at yet.

Assume that control points are very accurately placed, but the optimization 
still leaves significant error, so the remapped images don't align well.  In 
high contrast areas, anything worse than a single pixel of misalignment 
looks very poor.  The usual Hugin answers to that seem to be based on 
avoiding blending across high contrast areas.  Sometimes that is a good 
solution.  Sometimes it isn't.

One cause of major misalignment is the combination of translation (moving 
the point of view between images, rather than using a tripod with a 
perfectly adjusted nodal slide) with subjects of the photo being at 
significantly varying distance.

The translation optimization necessarily depends on all connections between 
two images (including indirect connections through other images) being at 
the same distance from the camera.  For multiple subject distances, there 
is no correct remapping for translation.

Assume either user action (or maybe some automation I haven't thought of) 
lowers the priority of all control points that connect two images at other 
than the most important subject distance.  Then the alignment for that 
distance is great, but other distances are a problem.

If high contrast important features at other than the preferred distance 
are continuous beyond the width of an image, there is no decent solution:  
warping would distort the shape, while any other approach would blur the 
seams.

But more often, the high contrast important features outside the preferred 
distance are isolated.  If those features are tagged with low priority 
control points, and the less important areas between them don't contain 
many control points, then the necessary shape distortion of fine tuning by 
warp would land in the less important areas.

Some reshaping is fundamentally necessary for that combination of viewpoint 
shift plus multi-distance.  Blurring across a seam can concentrate the 
reshaping where it matters least.  But in many cases I think warping would 
do a better job with less user effort.

In case it wasn't obvious, my idea is to compute all the remapping implied 
by the optimization results (with prioritized control points), then apply 
additional remapping by warping to move each control point whatever extra is 
needed to get from close to perfect.  Then compute the resulting pixels 
based on the complete movement and blending of the original pixels.
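One plausible shape for that second warp stage (just a sketch of my own, 
with inverse-distance weighting standing in for whatever interpolation a 
real implementation would use; `residual_warp` and all the numbers are 
hypothetical) is: take each control point's residual offset after the 
optimizer's remap, and spread those offsets across the image so each tagged 
feature lands exactly on target while the distortion falls off into the 
less important areas between features:

```python
import numpy as np

def residual_warp(points, residuals, xy, power=2.0, eps=1e-9):
    """points: (N,2) control point positions after the optimizer's remap.
    residuals: (N,2) extra shift still needed at each point.
    xy: (M,2) pixel coordinates to warp.
    Returns (M,2) per-pixel extra shifts via inverse-distance weighting."""
    d2 = ((xy[:, None, :] - points[None, :, :]) ** 2).sum(-1)  # (M, N)
    w = 1.0 / (d2 ** (power / 2) + eps)
    w /= w.sum(axis=1, keepdims=True)
    return w @ residuals

points = np.array([[100.0, 100.0], [400.0, 120.0]])
residuals = np.array([[3.0, 0.0], [-2.0, 1.0]])

# At a control point the warp reproduces that point's residual almost
# exactly; midway between the points the shift blends toward zero net
# distortion, which is where the reshaping should land.
print(residual_warp(points, residuals, points))
```

A real implementation would probably want something smoother (thin-plate 
splines, say), but the idea is the same: exact at the tagged features, 
gradual in between.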

I probably won't take on a chunk of work this big.  But I'm thinking about 
it (for after the easier enhancements I want to make to Hugin), and I would 
appreciate feedback as if I were going to do it.

Without having tried it, my intuition as a user of Hugin is that it would 
be enough better that I'm surprised it wasn't designed this way.  Feel free 
to tell me what I might be misunderstanding.

