On 2024-07-15 17:26, Marco Gaiarin wrote:
We have found that a directory (containing mostly home directories) with roughly one and a half million files took too much time to be backed up; it is not a problem of the backup media, as even with spooling it took hours to prepare the
spool.

Is there some strategy I can use to reduce backup time (on the Bacula
side; clearly we also have to work on the filesystem side...)?

For example, currently I have:

    Options {
      Signature = MD5
      accurate = sm
    }

If I remove the signature and check only the size, can I gain some performance?


Hello Marco,

Most probably yes.
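As a rough sketch (adapt to your own FileSet and verify with a test job), dropping the Signature directive while keeping size and mtime comparison for Accurate mode would look like this:

    Options {
      # No Signature directive: skip per-file MD5 computation on the client
      accurate = sm    # compare size (s) and modification time (m) against the catalog
    }

Without a signature, the File Daemon no longer has to hash the content of every file it backs up, which saves some CPU per file.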

If your file system supports it, you could mount the file system with the
noatime and nodiratime options.
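On Linux, for example, a corresponding /etc/fstab entry could look roughly like this (device, mount point and file system type are placeholders for your setup; on most current kernels noatime already implies nodiratime):

    /dev/sdb1   /home   ext4   defaults,noatime,nodiratime   0  2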

Alternatively, Bacula has a noatime option which is not set by default.
That option prevents Bacula from updating the inode atime, which would
most probably result in some performance gain (although not a dramatic one).
Also, check the keepatime option, which could negatively affect performance
if enabled (it is disabled by default).
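In the FileSet Options resource, that would be something along these lines (sketch only; check the FileSet documentation for your Bacula version):

    Options {
      noatime = yes      # open files with O_NOATIME where the OS supports it
      # keepatime is disabled by default; enabling it makes Bacula restore the
      # original atime after reading each file, at the cost of an extra system call
      # keepatime = yes
    }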

A long time ago, I had a FreeBSD system with a file system about 30 GB in size, with a small usage percentage but a huge number of indexed directories.
Archiving the whole directory structure with tar used to take more
than 26 hours.
I solved it by using the dump tool, which performs the backup at the block level
and therefore doesn't suffer from the large-directory-tree issue.
This approach bears the risk of inconsistent data in the backup if the file system is mounted while the dump is performed. That can be addressed with snapshots or some kind of file system locking/freeze (depending on
the OS and the file system).
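On FreeBSD with UFS, for instance, dump's -L flag takes a file system snapshot before dumping, so a mounted file system can still be backed up consistently (the level, paths and options below are only an illustration):

    # Level-0 dump of /home to a file, via a live-filesystem snapshot (-L)
    dump -0 -L -a -f /backup/home.dump /home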


Regards

--
Josip Deanovic

