On 1/24/25 13:22, Bill Arlofski via Bacula-users wrote:
> I will let a developer comment on this as it is "Above my pay grade"™, but I can say that I have seen horrific backup performance with attribute spooling disabled. It has been very many years since I last even tried, but apparently it is not (was not?) very efficient to write the attributes of each file to the catalog in real time as the backup is running.

The problem is that as it works now, those 'spooled' attributes are
'spooled' by writing them to a temporary table ANYWAY. And if that
table contains even a single BLOB/TEXT column or grows larger than
tmp_table_size, which defaults to 16MB, it will be forced to disk.
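
If anyone wants to see how often that is actually happening on their
catalog server, here is a rough Python sketch. It assumes the
mysql-connector-python package and placeholder credentials (adjust for
your own setup); it just reads the two variables that cap in-memory temp
tables and the counters that show how many implicit temp tables went to
disk:

import mysql.connector

# Rough diagnostic sketch: report the settings and counters that govern
# whether implicit temporary tables stay in memory or spill to disk.
# Connection details are placeholders -- adjust for your catalog server.
conn = mysql.connector.connect(host="localhost", user="bacula",
                               password="secret", database="bacula")
cur = conn.cursor()

# In-memory temp tables are capped by the smaller of these two variables.
cur.execute("SHOW VARIABLES WHERE Variable_name IN "
            "('tmp_table_size', 'max_heap_table_size')")
for name, value in cur.fetchall():
    print(f"{name} = {int(value) // (1024 * 1024)} MB")

# Total implicit temp tables created, and how many were forced to disk.
cur.execute("SHOW GLOBAL STATUS WHERE Variable_name IN "
            "('Created_tmp_tables', 'Created_tmp_disk_tables')")
status = dict(cur.fetchall())
print(f"temp tables: {status.get('Created_tmp_tables')}, "
      f"spilled to disk: {status.get('Created_tmp_disk_tables')}")

cur.close()
conn.close()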
It would be better to 'spool' them either to a temporary file or to an
in-memory store, then burst-write that cache to the catalog.
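
Just to illustrate the idea -- and to be clear, this is *not* how Bacula
is implemented; the table layout, column names, and batch size below are
entirely made up -- a minimal Python sketch of the "cache in memory,
burst-write" pattern looks something like this:

import mysql.connector

# Illustrative sketch only: buffer attribute rows in memory and flush
# them to the catalog in large batches instead of one row at a time.
# The file_attrs table and its columns are hypothetical.
BATCH_SIZE = 10_000

conn = mysql.connector.connect(host="localhost", user="bacula",
                               password="secret", database="bacula")
cur = conn.cursor()
spool = []   # in-memory spool of (jobid, path, name, lstat) tuples

def flush():
    """Burst-write everything in the spool in a single transaction."""
    if not spool:
        return
    cur.executemany(
        "INSERT INTO file_attrs (jobid, path, name, lstat) "
        "VALUES (%s, %s, %s, %s)", spool)
    conn.commit()
    spool.clear()

def spool_attribute(jobid, path, name, lstat):
    """Called once per backed-up file; no catalog round-trip until flush()."""
    spool.append((jobid, path, name, lstat))
    if len(spool) >= BATCH_SIZE:
        flush()

# ...spool_attribute() gets called for every file during the job...
flush()   # final flush at end of job
cur.close()
conn.close()

The point is simply that the catalog would only ever see a handful of
big INSERTs per job instead of millions of tiny ones.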
I once tried looking into rewriting it myself, but I don't understand
enough C++ or the architecture of the Bacula MySQL driver.

>> this huge atomic write *also* makes it incompatible with Galera 3 clusters.)
>
> This is interesting and useful news to me because we have prospects recently coming to us asking about Galera clusters. Thank you. :)

Yeah, the limitation is that Galera 3 has a hard limit of 128K records
(and, if I remember correctly, 4GB total volume) on replicated writes.
This limit is eliminated in Galera 4, which can do arbitrary-size
streaming writes, but MySQL/MariaDB clustering does not yet use Galera 4.
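
If you want to see where a given node stands, the relevant knobs (as far
as I know) are wsrep_max_ws_rows and wsrep_max_ws_size, and the provider
version string tells you which Galera generation you are running.
Another rough Python sketch, again with placeholder connection details:

import mysql.connector

# Rough sketch: read a Galera node's writeset limits and library version.
# Host and credentials are placeholders.
conn = mysql.connector.connect(host="galera-node1", user="bacula",
                               password="secret")
cur = conn.cursor()

cur.execute("SHOW GLOBAL VARIABLES WHERE Variable_name IN "
            "('wsrep_max_ws_rows', 'wsrep_max_ws_size')")
for name, value in cur.fetchall():
    print(f"{name} = {value}")

# The provider version string indicates the Galera library generation.
cur.execute("SHOW GLOBAL STATUS LIKE 'wsrep_provider_version'")
row = cur.fetchone()
if row:
    print(f"{row[0]} = {row[1]}")

cur.close()
conn.close()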
I can tell you that with attribute spooling *disabled*, it works very
well on a Galera back-end.
--
Phil Stracchino
Fenian House Publishing
ph...@caerllewys.net
p...@co.ordinate.org
Landline: +1.603.293.8485
Mobile: +1.603.998.69
_______________________________________________
Bacula-users mailing list
Bacula-users@lists.sourceforge.net
https://lists.sourceforge.net/lists/listinfo/bacula-users