Unfortunately this is not an option for us, as we don't have enough storage to avoid truncating the cache. At the moment it is at 2TB and, as I just witnessed, this is not enough. I have yet to fully understand what is happening, i.e. whether nothing is truncated and volumes just get purged and reused, or whether some but not all volumes are truncated.

Performance is not an issue, as our rgw/s3 is still pretty fast, so I'd rather have everything fetched from there than run out of disk space on the SD cache. I also get occasional upload failures, but so far Bacula has always re-uploaded those parts automatically and the backups were good.

For the record, I seem to have "solved" my "Volume XXX does not exist. ERR=No such file or directory" errors after clearing the cloud cache with rm -rf. It seems to be enough to create a directory with the respective volume name in the cloudcache directory; the restore then starts and fetches everything it needs from the cloud storage.
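
In case it helps anyone else, the workaround was essentially the following; the cache path and volume name are just placeholders, assuming the SD runs as the bacula user:

    # recreate an empty directory named after the missing volume, so the SD
    # fetches the part files from the rgw/s3 bucket again during the restore
    mkdir /var/lib/bacula/cloudcache/Vol-0001
    chown bacula:bacula /var/lib/bacula/cloudcache/Vol-0001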

Martin



On 16.01.24 22:38, Chris Wilkinson wrote:
I tend to set the cache to not truncate. This is for two reasons: 1) restores would have to retrieve data from the cloud, making them slower than from local storage, and 2) I get occasional failures to upload, and a truncated cache would no longer be available to correct this.

After each cloud backup I run a RunAfter job to upload the cache again. This isn't always needed, of course, but serves as belt and braces. It doesn't seem to consume many resources most of the time, since the S3 driver seems to check the sync state first and only upload the bad or missing data.
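
Roughly what that looks like in the Job resource is shown below; the storage name and the exact "cloud upload" options are examples, not necessarily what your version expects (check "help cloud" in bconsole):

    # in bacula-dir.conf, inside the cloud backup Job resource
    RunScript {
      RunsWhen = After
      RunsOnClient = No
      # re-uploads any parts that are missing or out of sync in the bucket
      Console = "cloud upload storage=S3-Cloud allpools"
    }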


-Chris Wilkinson

On Tue, 16 Jan 2024, 21:19 Martin Reissner <mreiss...@wavecon.de> wrote:

    Hello,

    we've recently switched some of our backups from disk/file storage to
    cloud storage in our own Ceph rgw/s3, and this has been working great
    so far, but today I ran into two cache-related issues which are
    giving me a headache.

    Firstly, somehow the server running the SD which does the uploads to
    the rgw/s3 ran out of disk space because of the cloudcache directory.
    Unfortunately I didn't notice it filling up because of a
    misconfiguration in our monitoring, but as I had configured "Truncate
    Cache = AfterUpload" for all Cloud resources I expected the cache to
    be self-regulating. Either I don't understand the "Truncate Cache"
    option correctly or it doesn't work properly in our setup.
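
    For reference, a Cloud resource in bacula-sd.conf with that option
    would look roughly like the following; the names, endpoint and
    credentials are placeholders, and only the "Truncate Cache" line is
    the setting in question:

        Cloud {
          Name = CephRGW                   # placeholder name
          Driver = "S3"
          HostName = "rgw.example.com"     # placeholder rgw endpoint
          BucketName = "bacula-backups"    # placeholder bucket
          AccessKey = "xxxxx"
          SecretKey = "xxxxx"
          Protocol = HTTPS
          UriStyle = Path
          Upload = EachPart                # assumed upload policy
          Truncate Cache = AfterUpload     # the option discussed here
        }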

    Secondly, being a bit in a panic, I deleted everything in the
    cloudcache directory, thinking that it is just a cache, but now I run
    into errors when restoring data from volumes that are supposed to be
    in the cache. Is there a way to reset the cache so everything is
    fetched from the cloud storage? "cloud truncate" unfortunately does
    not work, as it complains with "Volume XXX does not exist. ERR=No
    such file or directory".

    We're running version 13.0.2 on Debian 11, and I'd be really grateful
    for some help, as this is preventing all but manual restores for all
    the volumes that were cached in the cloudcache.

    Regards,

    Martin




--
Wavecon GmbH

Address:        Thomas-Mann-Straße 16-20, 90471 Nürnberg
Website:        www.wavecon.de
Support:        supp...@wavecon.de

Phone:          +49 (0)911-1206581 (weekdays from 9 am to 5 pm)
Hotline 24/7:   0800-WAVECON
Fax:            +49 (0)911-2129233

Register no.:   HBR Nürnberg 41590
Managing Director: Cemil Degirmenci
VAT ID:         DE251398082



_______________________________________________
Bacula-users mailing list
Bacula-users@lists.sourceforge.net
https://lists.sourceforge.net/lists/listinfo/bacula-users
