Scar,

I can't speak to SQLite specifically because I don't know much about it.
However, I agree that simply copying the database files while they are in
use could be risky for the reasons you stated.

Performing incremental backups of the databases inside Firefox and
Thunderbird could be a significant challenge. Even if you get it working,
the locations and implementations of these databases may change with
updates, silently breaking your backups and leaving you uncertain whether
they are complete. Overall it seems like a fragile arrangement, because you
have no control over upstream changes.

A better practice might be to either
1. ensure that the databases won't be written to during the backup:
  1a. use a 'run before' script (Bacula's ClientRunBeforeJob) to terminate
the processes in question (kind of invasive on a user device), or
  1b. schedule backups for a period outside of normal use, and train users
to log off at the end of the day, so the databases are at rest when backed
up. (See the sketch after this list for how 1a and 1b might look.)
OR
2. use tools dedicated to this task to export backups of the relevant
Thunderbird/Firefox profiles:
  2a. for Thunderbird, I am aware of one tool that can back up a profile,
ImportExportTools NG:
https://addons.thunderbird.net/en-US/thunderbird/addon/importexporttools-ng/
  2b. I am not aware of an equivalent tool for Firefox, but I haven't
looked.
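
For what it's worth, here's a rough sketch of how 1a and 1b might look in
bacula-dir.conf. The resource names, client, and paths are placeholders,
and the pkill approach is only a guess at what's acceptable on your
desktops:

  Job {
    Name = "DesktopBackup"
    Type = Backup
    Client = desktop1-fd            # placeholder client
    FileSet = "HomeDirs"
    Schedule = "AfterHours"
    Storage = File1
    Pool = Default
    Messages = Standard
    # 1a: stop Thunderbird on the client before files are read. Bacula
    # treats a non-zero exit status as a job error, and pkill exits 1
    # when nothing matched, hence the '|| true'.
    ClientRunBeforeJob = "/bin/sh -c 'pkill thunderbird || true'"
  }

  Schedule {
    Name = "AfterHours"
    # 1b: run well outside normal hours, after users have logged off
    Run = Level=Full 1st sun at 01:05
    Run = Level=Incremental mon-sat at 01:05
  }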

None of this necessarily helps you with incremental backups, however.

An additional option might be to reduce the impact of repeatedly backing up
the same large files. Perhaps use the Bacula aligned volumes driver in
concert with a filesystem that supports deduplication? With that setup,
repeated backups of mostly-unchanged data would consume much less space on
disk, which is functionally quite a bit like having an incremental backup
of the changes made to a given database file. This could be a big change,
so I understand it might not be your first choice.
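
As a sketch only (I haven't run this myself, and the aligned driver's
availability can depend on your Bacula version and packaging, so check the
aligned volumes documentation first), the Storage Daemon side might look
something like:

  # bacula-sd.conf
  Device {
    Name = AlignedFileStorage
    Device Type = Aligned                 # write dedup-friendly aligned volumes
    Media Type = AlignedFile
    Archive Device = /mnt/dedupfs/bacula  # placeholder: a ZFS or other dedup-capable mount
    Label Media = yes
    Random Access = yes
    Automatic Mount = yes
    Removable Media = no
    Always Open = no
  }

The idea behind the aligned format is that identical data blocks land at
consistent alignments inside the volume files, so the filesystem's
block-level deduplication can actually match them.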

Are you using compression in your backups? Compression could substantially
reduce the size of database files, and it is done on the FD end, so it
won't add significant load on the server. In a test I did today, a 678 MiB
Bacula database dump compressed down to 278.3 MiB, a space savings of
roughly 59%. In my tests, the FileSet option 'compression = ZSTD' was the
best balance of speed and compression, but you can test for yourself.
Bacula also supports GZIP (with configurable compression levels) and LZO.
Check the Bacula manual for your version.
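
For reference, that option lives in the FileSet's Options block; a minimal
example (the FileSet name and path are placeholders):

  FileSet {
    Name = "HomeDirs"
    Include {
      Options {
        signature = MD5
        compression = ZSTD   # alternatives: GZIP, GZIP1..GZIP9, LZO
      }
      File = /home
    }
  }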

I hope this helps. Happy New Year!

Regards,
Robert Gerber
402-237-8692
r...@craeon.net


On Wed, Jan 1, 2025 at 4:51 PM Scar <s...@riseup.net> wrote:

> Hi all,
>
>     Is there a best practice for incrementally backing up /SQLite/
> databases, like there is for /MySQL/ databases (by way of the example
> /MySQL/ backup scripts that ship with /Bacula/)?  I've been trying to
> find a solution for some months now but keep getting side-tracked.
>
>     Specifically, daily incremental backups of desktops, which include
> Thunderbird data, are excessively bloated due to these large /SQLite/
> databases always having daily modifications (such as
> global-messages-db.sqlite or calendar cache.sqlite being several
> gigabytes on my desktop).  Thus multiple gigabytes are backed up
> every day that don't need to be.  Also, afaik, it's improper to directly
> copy the .sqlite database file anyway (and probably the other related
> .sqlite* files) for the same reasons /MySQL/ database files are not
> directly copied.  Namely, it could lead to corruption if there are write
> operations going on while the data is copied.
>
>     Now that I think about it, Firefox also uses /SQLite/
> databases for history storage, and more than I ever imagined now that
> I've scanned the profile directory: over 5,000 /SQLite/ databases in my
> system!  It seems they are used for everything from cache storage for
> individual visited websites to site preferences, 3rd-party cookies, etc.
>
>     I don't know if these Mozilla databases need any special
> consideration over other SQLite databases, but thought I should clarify
> where I'm coming from, at least.  I've read about numerous backup
> methods that /SQLite/ has available, yet none seem to accomplish the
> same thing we can do with /MySQL/ for incremental backups, namely: for
> full backups get a full dump of the database (in a human-readable, plain
> text, SQL format), and then for incrementals dump the changes since the
> last dump and back up the output (with binary logging, I think?).  For
> example, I've read through this:
>
> SQLite Backup API: https://sqlite.org/backup.html
>
> and this:
>
> Backup strategies for SQLite in production:
> https://oldmoe.blog/2024/04/30/backup-strategies-for-sqlite-in-production/
>
> and it seems like the only method that would work is to use Litestream,
> but it only sends the backup data to Amazon S3.
>
>     That said, after checking Litestream's GitHub page, it does mention
> the backup can be sent to another file also, so maybe that is the most
> viable option.  Any other thoughts?
>
> Thanks, Best Regards, Happy New Year, etc.
>
>
>
_______________________________________________
Bacula-users mailing list
Bacula-users@lists.sourceforge.net
https://lists.sourceforge.net/lists/listinfo/bacula-users
