Some time ago I tested whether analytical calculation of the partial 
derivatives is better (faster convergence). I wrote code for analytical 
calculation (via automatic differentiation), but it showed no advantage: 
it took longer, with little or no reduction in the number of iterations.

But maybe your idea yields better results ...

Florian Königstein wrote on Tuesday, August 9, 2022 at 18:12:09 UTC+2:

> The image DSC00293.JPG was missing. I took the image DSC00289.JPG as a 
> (false) replacement for it.
> First, there are about 20 CPs with distances greater than about 9000. They 
> are totally wrong correspondences, so I deleted them.
>
> I tested the optimization only in Hugin++. It took about 10 seconds on my 
> computer (starting from the yaw, pitch and roll values that were already 
> calculated). I also optimized the translation. It seems that in this case 
> there is no problem optimizing both rotation and translation. The average 
> error was about 29.
> Then I reset all positions to zero and restarted optimizing rotations and 
> translations. That takes longer and gives terrible results, and the errors 
> vary if you do it again. Normally this should not happen, because it is a 
> deterministic algorithm. But maybe SuiteSparse uses some algorithms that 
> are "slightly" non-deterministic.
>
> And in ill-conditioned mathematical problems like this one, small 
> differences in roundoff errors can lead to totally different results.
>
> I see no problem with different versions of Hugin / Hugin++ giving 
> different results: it's simply because the optimization problem is 
> ill-conditioned.
>
>
> [email protected] wrote on Tuesday, August 9, 2022 at 14:43:30 UTC+2:
>  
>
>> I half remember seeing some user option somewhere for how hard the 
>> optimizer tries before stopping. 
>
>
> E.g., in the file optimize.c of libpano / fastPTOptimizer, the function 
> RunLMOptimizer() contains the following lines:
>
>         LM.ftol     =     1.0e-6; // used to be DBL_EPSILON; //1.0e-14;
>         if (istrat == 1) {
>             // for distance-only strategy, bail out when convergence slows
>             LM.ftol = 0.05;
>         }
>
> You could, e.g., change LM.ftol = 1.0e-6 to LM.ftol = 1.0e-14 if you want 
> the optimizer to try "harder" and longer to find an optimum (in strategy 2).
>
>> To the very limited extent to which I understand lev-mar stopping rules, 
>> it might stop because it hit
>> A: A limit in the number of iterations
>> B: A limit in the number of function evaluations
>> C: A limit in the rate of change of the parameter values
>> D: A limit in the rate of change of the total error
>> ?: There seem to be several other possibilities.
>>
>> The code seems to report which of those it was to the caller.  I haven't 
>> yet figured out how to get any of that reported to the user.
>>
>>
> Yes, the function lmdif_sparse() returns a code that indicates which 
> stopping criterion was met. The meanings of the codes are described in the 
> file levmar.c.
>
> Without having checked, I don't believe that B is met. I believe that ftol 
> is reached (return code 1).
>
>  
>
>> Another BIG sidetrack I'll likely take: During high school in the 70's, 
>> I invented a way to compute partial derivatives other than either finite 
>> difference or analytical. It gives more accurate partial derivatives than 
>> finite difference but doesn't have the potential complexity explosion of 
>> analytical. After using it in a couple of work situations years later, I 
>> took a job on a team that was already using the exact same method, and 
>> later interacted with other teams (within a big employer) that had also 
>> independently come up with it. (Despite that, I've never found a 
>> description of it online and don't know what it is called.)
>>
>> On a basic timing level, for N partial derivatives, you do N+1 times as 
>> much work during one evaluation instead of N+1 times as many function 
>> evaluations for finite difference. Depending on other factors, the total 
>> time might range from twice as long as finite difference down to a small 
>> fraction as long. Usually it is done for accuracy, not time. I think 
>> pano13 doesn't need that improvement. But taking advantage of several 
>> images per lens in hugin would cause my method to take significantly less 
>> time than finite difference. If I do that, I should remember to kludge the 
>> counter of function evaluations to pretend it is doing N+1 times as many 
>> as it actually is, both to keep the stopping condition reasonable and to 
>> keep result accuracy comparable.
>>
>> I first wrote it in APL and later in C. But it is really ugly code in C, 
>> and I won't do that again now that there is a choice. The only decent 
>> language for it is C++. It is really annoying that libpano13 is coded in 
>> C. (I don't still have any of the code, and only the APL version ever 
>> belonged to me.)
>>
>>
> I don't know exactly what your idea for computing the partial derivatives 
> is, but I think fastPTOptimizer does it quite well.
> Previously I used the function splm_intern_fdif_jac() (see its description 
> in levmar.c), but then I switched to another method inside adjust.c that is 
> at least as fast as splm_intern_fdif_jac().
>
>

-- 
A list of frequently asked questions is available at: 
http://wiki.panotools.org/Hugin_FAQ
--- 
You received this message because you are subscribed to the Google Groups 
"hugin and other free panoramic software" group.
To unsubscribe from this group and stop receiving emails from it, send an email 
to [email protected].
To view this discussion on the web visit 
https://groups.google.com/d/msgid/hugin-ptx/bea68838-9062-40c7-94ae-080723b670d6n%40googlegroups.com.
