Hi Sam,

On 15.10.2025 at 18:58, Samuel Zaslavsky wrote:
Hello everyone,

Some tricky problem here — let me try to explain as briefly as possible.

I'll skip the details you gave -- the process looks good to me, and I don't see why you would get the results you do.

However, to help us get to the point where we can have a developer give some hints: first, we would need the exact versions of the DIR, SD, and FD involved, and probably where you got them from. If you installed from source or built your own packages, the configure output would be important.
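
A quick way to gather those, assuming you can reach all three daemons from bconsole (the names in angle brackets are placeholders for your resource names):

version
status director
status client=<whatever-fd>
status storage=<whatever-sd>

The header of each status output includes the version the daemon reports about itself.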

Can you provide the complete job logs of the two backups in question, and can you demonstrate that other files were *not* backed up twice?
list files jobid=2070
list files jobid=30768
I think editing them to some reasonably concise state should be ok.
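
For the complete job logs, bconsole can pull them straight from the catalog (list joblog has been available for a long time; if your version lacks it, the job report emails work just as well):

list joblog jobid=2070
list joblog jobid=30768
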
Then, the same selects, but for sample files from the full job that were *not* backed up twice. Finally, I would suggest running stat on the files in question so we can verify that the information in the catalog exactly matches the file system.
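
For the stat part, something like this on the client would cover the interesting fields (GNU stat shown; BSD stat uses different options, and /path/to/file stands in for one of the affected files):

stat -c '%n size=%s inode=%i mtime=%y ctime=%z' /path/to/file

Size, inode, mtime and ctime are the fields most likely to explain a re-backup; the catalog keeps them in the encoded LStat column of the File table.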

All of the above is just to be extra sure the situation really is what you describe -- not because I distrust you, but because I know how easy it is to miss some details :-)


Then we come to the interesting parts: Can you reproduce the behaviour?

If you run the job the same way 30768 was run, and no files were changed, does it back up the same files again, and how are they recorded in the catalog?
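
If you want to see the raw catalog records for one of those files, here's a sketch using bconsole's sqlquery mode, assuming a Bacula 9+ catalog schema (where the old Filename table is merged into File) and with 'somefile.dat' as a stand-in name:

sqlquery
SELECT File.JobId, File.FileIndex, Path.Path, File.Filename, File.LStat
  FROM File JOIN Path ON Path.PathId = File.PathId
 WHERE File.JobId IN (2070, 30768)
   AND File.Filename = 'somefile.dat';

Two rows with identical LStat but different JobIds would show the duplicate backup quite clearly.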

If you can reproduce this, that's great. If things now work as expected, reproducibly, that's also kind of great, but would probably cause some anxiety. If the results appear somewhat randomly, I guess there's a lot more work ahead.

If you can reproduce anything incorrect, I would propose running the FD with debug level 500 and tracing to a file:
setdebug level=500 trace=1 client=<whatever-fd>
run job=CHMPROD level=incremental yes
wait
setdebug level=0 trace=0 client=<whatever-fd>

and see if the trace file contains relevant information. It will be somewhat big, but you'll see how the files are searched, found, and processed, and the developers might be able to spot where things go wrong.
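
The trace file usually ends up in the FD's working directory, named after the daemon with a .trace suffix (for example /opt/bacula/working/<whatever-fd>.trace -- the exact location depends on your WorkingDirectory setting). Grepping for one of the affected file names is the quickest way in, 'somefile.dat' again being a stand-in:

grep -n 'somefile.dat' /opt/bacula/working/*.trace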

<snip>


In fact, I moved several folders (each corresponding to a job like CHMPROD) inside the NAS, and I got very inconsistent results — but most of the time, most files are backed up again. I intend to move hundreds of terabytes, so I need to understand how to do it without re-backing up such a huge amount of data.

Inconsistent results are definitely the worst thing in such a situation, and I think Eric and co. see it the same way, so I'm sure we'll try digging into this with you!

Cheers,

Arno

Thanks a lot for your help!!

Sam

--
Arno Lehmann

IT-Service Lehmann
Sandstr. 6, 49080 Osnabrück



_______________________________________________
Bacula-users mailing list
[email protected]
https://lists.sourceforge.net/lists/listinfo/bacula-users
