A few more thoughts:
---
multiblend is exceptionally fast compared to enblend (maybe 10x to 20x 
faster, as I recall). In most situations I found it produces output just 
as visually appealing as enblend's. I can't remember the details now, but a 
couple of years ago I noticed seam issues in some cases. It is worth 
a look.
https://horman.net/multiblend/
---
I've been doing some OpenCL coding recently and have a general idea of how 
GPU processing works:
There was a question upstream about how GPUs are utilized. Basically the 
same way as a multi-core processor with multiple threads: the application 
asks to use the GPU, and a mid-level management layer (the runtime and 
driver) coordinates all the GPU requests. What can slow things down the most 
in GPU-land (I suspect) is when multiple apps are all heavily using the GPU, 
and a lot of data is being moved back and forth between general CPU 
memory and GPU memory.
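To put rough numbers on why those transfers hurt, here is a back-of-the-envelope sketch. The bandwidth figures are ballpark assumptions (roughly PCIe 3.0 x16 for host-to-GPU transfer, and a typical GPU's memory bandwidth), not measurements of any particular machine:

```python
# Back-of-the-envelope model of why host<->GPU transfers can dominate
# simple per-pixel operations. Bandwidth figures are rough assumptions:
# PCIe 3.0 x16 ~ 16 GB/s, GPU global memory ~ 200 GB/s.

PCIE_BPS = 16e9      # host <-> GPU transfer, bytes/second (assumed)
GPU_MEM_BPS = 200e9  # on-GPU memory bandwidth, bytes/second (assumed)

def transfer_vs_compute(megapixels, bytes_per_pixel=16):
    """Seconds spent moving one image over PCIe (upload + download)
    vs. streaming it once through GPU memory (read + write)."""
    nbytes = megapixels * 1e6 * bytes_per_pixel
    transfer = 2 * nbytes / PCIE_BPS
    compute = 2 * nbytes / GPU_MEM_BPS
    return transfer, compute

t, c = transfer_vs_compute(100)  # a 100-megapixel stitched image
print(f"transfer {t*1000:.1f} ms vs on-GPU pass {c*1000:.1f} ms")
```

Even with these crude numbers the transfer takes an order of magnitude longer than the on-GPU pass, which is why a GPU only pays off when the work per uploaded byte is substantial, or when several operations run on the GPU between transfers.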

If you are also running something like darktable (or Adobe ...) you 
ABSOLUTELY should be looking at the graphics chip. For example, my Lenovo 
laptop cost about 30% more and got me the NVIDIA GeForce GTX 1050 Ti, 
which is (was!) on the low end of the scale, but it still has 768 CUDA 
cores and 4 GB of GPU memory. darktable operations that are coded 
specifically for the GPU can literally run 100x faster.
---
As GnomeNomad said: SSD - yes. NVMe SSD -- double yes.
---
(If you aren't already doing this:) If finding control point matches is 
really slow, perhaps you could narrow that process down to only look for 
matches between images you *know* overlap, creating a lot of intermediate 
.pto files, and then use pto_merge as a final step.
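For a regular grid of microscope tiles, the overlapping pairs are just grid neighbours, so the pairing can be computed up front. A minimal sketch of that idea (the row-major tile naming and the exact pto_gen/cpfind/pto_merge invocations are assumptions about your setup; adjust to taste):

```python
# Emit one small .pto per known-overlapping tile pair, so cpfind only
# searches pairs that actually overlap, then merge everything at the end.
# Assumes tiles are named row-major as tile_r{row}_c{col}.tif.

def neighbour_pairs(rows, cols):
    """Index pairs of grid neighbours (right and down) in row-major order."""
    pairs = []
    for r in range(rows):
        for c in range(cols):
            i = r * cols + c
            if c + 1 < cols:
                pairs.append((i, i + 1))     # right neighbour
            if r + 1 < rows:
                pairs.append((i, i + cols))  # neighbour below
    return pairs

def commands(rows, cols):
    names = [f"tile_r{r}_c{c}.tif" for r in range(rows) for c in range(cols)]
    pairs = neighbour_pairs(rows, cols)
    cmds = []
    for n, (a, b) in enumerate(pairs):
        cmds.append(f"pto_gen -o pair{n}.pto {names[a]} {names[b]}")
        cmds.append(f"cpfind -o pair{n}.pto pair{n}.pto")
    cmds.append("pto_merge -o merged.pto " +
                " ".join(f"pair{n}.pto" for n in range(len(pairs))))
    return cmds

for cmd in commands(2, 3):   # a small 2x3 grid as a demo
    print(cmd)
```

For a 20x25 grid of 500 tiles this is roughly 2x500 = ~1000 pairwise searches instead of the ~125,000 pairs an all-against-all search would consider.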
---
This info is a little out of date, but many years back, when running highly 
CPU-intensive processes on Linux, we found that turning off the virtual 
CPUs (hyper-threading/SMT) in the BIOS was a performance gain:

  - The virtual CPUs are handy when you have something like a web server, 
where there is plenty of idle time between processing requests, and 
thread creation/deletion uses extra time.

  - But if everything is being performed on the same system's CPUs, with 
data sitting in memory (or on SSD/NVMe), using only the physical cores 
gives a noticeable gain in processing time compared to using the virtual 
cores. In the virtual-core case, threads are constantly being swapped in 
and out of the physical cores, and that context-switching overhead eats 
into the processing time with no actual gain in the amount of data 
processed. I don't remember the exact gain, but I think it was in the 
range of 20%.
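If rebooting into the BIOS is a hassle, a softer option is to pin the process to one logical CPU per physical core so the sibling hyper-threads sit idle. A Linux-only sketch (it reads the kernel's sysfs topology files; the actual pinning call is left commented out so you can inspect the list first):

```python
# Pick one logical CPU per physical core, so a pinned process never
# shares a core with its hyper-thread sibling. Linux-only: uses
# /sys/devices/system/cpu/*/topology/thread_siblings_list.
import glob
import os

def first_siblings(sibling_lists):
    """From each core's thread-sibling list (e.g. '0,4' or '0-1'),
    keep only the lowest-numbered logical CPU."""
    chosen = set()
    for text in sibling_lists:
        first = text.replace('-', ',').split(',')[0]
        chosen.add(int(first))
    return sorted(chosen)

def physical_cpus():
    paths = glob.glob(
        '/sys/devices/system/cpu/cpu[0-9]*/topology/thread_siblings_list')
    return first_siblings(open(p).read().strip() for p in paths)

# On Linux, uncomment to pin the current process to physical cores only:
# os.sched_setaffinity(0, physical_cpus())

# Demo on a made-up 4-core / 8-thread topology:
print(first_siblings(['0,4', '1,5', '2,6', '3,7', '0,4']))
```

The same effect can be had externally with `taskset -c <list> hugin_executor ...`, or system-wide (without a reboot) by writing `off` to /sys/devices/system/cpu/smt/control on recent kernels.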
---
Hope that helps
On Thursday, January 11, 2024 at 3:19:44 AM UTC-8 E Kow wrote:

> Hi,
>
> As mentioned earlier I am often stitching 500 or more microscope images. 
> I am thinking to get a new dedicated computer for this. 
> How much computing power can Hugin utilize (RAM, GPU etc)?
> Does it make sense to buy a really high spec desktop computer with high 
> end graphics card?
>

-- 
A list of frequently asked questions is available at: 
http://wiki.panotools.org/Hugin_FAQ
--- 
You received this message because you are subscribed to the Google Groups 
"hugin and other free panoramic software" group.
To view this discussion on the web visit 
https://groups.google.com/d/msgid/hugin-ptx/5a5235d8-567e-4cdf-8c1c-bf6a7fb0ec1dn%40googlegroups.com.