I am not sure, but having paged through the dump ('cat bacula.sql | more'),
I believe the Bacula catalog backups do include table recreation commands.
If so, running the create tables script first should be unnecessary, but
you are probably on the right track if you ran into trouble of that sort.
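A quick way to confirm is to grep the dump for DROP/CREATE statements rather than paging through it. A rough sketch (the sample file here is made up for illustration; run the grep against your real bacula.sql):

```shell
# Made-up miniature of a catalog dump, just to illustrate the check;
# point the grep at your actual restored bacula.sql instead.
cat > /tmp/bacula-sample.sql <<'EOF'
DROP TABLE IF EXISTS Job;
CREATE TABLE Job (JobId serial PRIMARY KEY);
COPY Job (JobId) FROM stdin;
EOF

# Count the table recreation commands in the dump.
grep -c -i -E '^(DROP|CREATE) TABLE' /tmp/bacula-sample.sql   # prints 2
```

If the count is non-zero, the dump drops and recreates the tables on its own.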

Robert Gerber
402-237-8692
r...@craeon.net

On Wed, Aug 7, 2024, 10:27 AM Chris Wilkinson <winstonia...@gmail.com>
wrote:

> Hello Rob
>
> This all went fairly easily as you suggested it would. All the original
> tables were deleted, recreated and populated with the prior contents.
> That's very nice but it hasn't fixed the problem. Backups are still running
> at <2MB/s as before so I'm no further forward, but at least I didn't FUBAR
> :).
>
> I didn't reinstall a virgin postgres prior to this, so my next step will
> be to do that and repeat the above.
>
> Would the restore fail if no tables exist? If so, I could run the bacula
> create tables script first.
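> Something like this, I suppose (the script names and path below are my
> guess from a community Bacula install, so verify them on your system
> first):

```shell
# Recreate an empty catalog schema before restoring the dump.
# /opt/bacula/scripts is an assumption; distro packages may ship these
# scripts elsewhere (e.g. under /usr/share).
sudo -u postgres /opt/bacula/scripts/create_postgresql_database
sudo -u postgres /opt/bacula/scripts/make_postgresql_tables
sudo -u postgres /opt/bacula/scripts/grant_postgresql_privileges
```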
>
> -Chris
>
> On Wed, 7 Aug 2024, 13:01 Rob Gerber, <r...@craeon.net> wrote:
>
>> In my experience running bacula 13.0.x with postgres, catalog restoration
>> has been generally fairly easy. I am not a database expert at all.
>>
>> From memory, on my phone, while on vacation, so double check stuff
>> yourself:
>>
>> Take a new catalog backup unless you are absolutely certain that your
>> most recent catalog backup contains the most up-to-date information.
>> Restore the most recent catalog backup. You will get a bacula.sql file.
>> This is a text file: a dump of your catalog with all the commands needed
>> to drop the old bacula db tables, create correct new tables, and
>> repopulate them with your database entries. Copy this file somewhere
>> safe. /tmp is not sufficiently safe IMO.
>> This file will need to be readable by the postgres user, so chown or
>> chmod it appropriately.
>> The postgres account may have no login shell set; rather than changing
>> its default shell, you can pass one to su directly.
>> Su into the postgres user:
>> su -s /bin/bash - postgres
>> As postgres:
>> pg_dump bacula > bacula-manual-dump.sql # dump the bacula db manually,
>> just in case. Note a plain pg_dump doesn't include the DROP TABLE
>> commands unless you pass --clean.
>> psql bacula < your-bacula-backup-as-restored-from-bacula.sql # restore
>> your bacula catalog backup.
>>
>> The process should run, dropping the bacula db tables, then recreating
>> and repopulating them. Adapt the process to your situation, and maybe
>> practice in a test environment before running it in production,
>> depending on your comfort level.
>>
>> Overall, though, my experience has been that restoring a postgres db for
>> bacula is fairly easy.
>>
>>
>> Robert Gerber
>> 402-237-8692
>> r...@craeon.net
>>
>> On Wed, Aug 7, 2024, 5:37 AM Chris Wilkinson <winstonia...@gmail.com>
>> wrote:
>>
>>> Hello Bill.
>>>
>>> 'SpoolAttributes' is either unspecified or '= yes', and 'SpoolData' is
>>> not used anywhere. The 'big' jobs where this hurts have
>>> 'SpoolAttributes = yes' explicitly, with 'SpoolData' not specified. That
>>> seems fine, as the default for 'SpoolAttributes' is yes.
>>>
>>> I do have 'Accurate = yes' in all jobs but it's been that way for a long
>>> time. This problem has surfaced in the last month but looking back over
>>> earlier job logs some signs of slowdown were apparent earlier.
>>>
>>> The 'nuclear' option might be to reinstall postgres from scratch and
>>> restore the catalog. I've never had to restore a catalog before and it
>>> seems a bit difficult 😕. I'm not even sure that would be advisable or
>>> necessary; why not just start all the backups again from zero? I'm not
>>> worried about losing earlier file versions. And while I'm at it, upgrade
>>> to v15 as well 😀.
>>>
>>> An example job that runs slow is:
>>>
>>> Job {
>>>   Name = "media"
>>>   Description = "media not pictures or music"
>>>   Type = "Backup"
>>>   Level = "Full"
>>>   Messages = "Standard"
>>>   Storage = "dns-325-sd"
>>>   Pool = "media-full"
>>>   FullBackupPool = "media-full"
>>>   IncrementalBackupPool = "media-incremental"
>>>   DifferentialBackupPool = "media-differential"
>>>   Client = "usb16tb-fd"
>>>   Fileset = "media"
>>>   Schedule = "media"
>>>   Where = "/"
>>>   WriteBootstrap = "/var/lib/bacula/%n.bsr"
>>>   Replace = "Never"
>>>   MaxFullInterval = 5184000
>>>   MaxDiffInterval = 2678400
>>>   PruneJobs = yes
>>>   PruneFiles = yes
>>>   PruneVolumes = yes
>>>   Enabled = yes
>>>   SpoolAttributes = yes      <<<<<<
>>>   Runscript {
>>>     RunsWhen = "After"
>>>     FailJobOnError = no
>>>     RunsOnClient = no
>>>     Command = "/home/pi/run-copy-job.sh %n-copy %l %n-%l %n-copy-%l"
>>>   }
>>>   MaximumConcurrentJobs = 5
>>>   RescheduleIncompleteJobs = no
>>>   Priority = 10
>>>   AllowIncompleteJobs = no
>>>   Accurate = yes
>>>   AllowDuplicateJobs = no
>>> }
>>>
>>>
>>> -Chris
>>>
>>> On Tue, 6 Aug 2024, 23:49 Bill Arlofski via Bacula-users, <
>>> bacula-users@lists.sourceforge.net> wrote:
>>>
>>>> On 8/6/24 9:01 AM, Chris Wilkinson wrote:
>>>> > I've had v11/postgresql13 running well for a long time but just
>>>> > recently it has started to run very slow. The Dir/Fd is on a
>>>> > Raspberry PiB with 8GB memory, Sd on a NAS mounted via CIFS over a
>>>> > Gbe network. I was getting a rate of ~30MB/s on the backup
>>>> > but this has dropped to ~1-2MB/s. I can see similar values on the
>>>> > network throughput page of Webmin. Backups that used to
>>>> > take 10h are now stretching out 10x and running into next scheduled
>>>> > backups. Jobs do eventually complete OK but are much too
>>>> > slow.
>>>> >
>>>> > It remains the same after a couple of reboots of both the Pi and NAS.
>>>> >
>>>> > I've tried my usual suite of tools e.g. htop, iotop, glances, iostat,
>>>> > iperf3 but none of these are raising any flags. Iowait
>>>> > is < 2%, cpu < 10%, swap is 0 used, free mem is > 80%. Iperf3 network
>>>> > speed testing Dir<=>Fd is close to 1Gb/s, rsync
>>>> > transfers Pi>NAS @ 22MB/s, so I don't suspect a network issue.
>>>> >
>>>> > On the NAS, I have more limited tools but ifstat shows a similarly
>>>> > low incoming network rate. No apparent issues on cpu load,
>>>> > swap, memory, disk either. fsck ran with no errors.
>>>> >
>>>> > I thought maybe there was a database problem so I've also had a try
>>>> > at adjusting PostgreSQL conf per the suggestions from
>>>> > Pgtune but to no effect. Postgresqltuner doesn't reveal any problems
>>>> > with the database performance. Postgres restarted of course.
>>>> >
>>>> > Backup to S3 cloud is also slow by about 3x. It runs 25MB/s (22Mb/s
>>>> > previously) into local disk cache and then 2MB/s to cloud
>>>> > storage v. 6MB/s previously. My fibre upload limits at 50Mbs. I would
>>>> > have expected that a database issue would impact the
>>>> > caching equally but that doesn't seem to be the case.
>>>> >
>>>> > So the conclusions so far are that it's not network and not database 🤔.
>>>> >
>>>> > I'm running out of ideas now and am hoping you might have some.
>>>> >
>>>> > -Chris Wilkinson
>>>>
>>>> Hello Chris,
>>>>
>>>> This is a long shot, but is there *any* chance you have disabled
>>>> attribute spooling in your jobs? (SpoolAttributes = no)
>>>>
>>>> If this is disabled, then the SD and the Director are in constant
>>>> communication and for each file backed up the SD sends the
>>>> attributes to the Director and the Director has to insert the record
>>>> into the DB as each file is backed up.
>>>>
>>>> With attribute spooling enabled (the default), the SD spools them
>>>> locally to a file, then sends this one file at the end of
>>>> the job, and the Director batch inserts all of the attributes at once
>>>> (well, in one batch operation).
>>>>
>>>> Crossing my fingers on this one.🤞 :)
>>>>
>>>>
>>>> Best regards,
>>>> Bill
>>>>
>>>> --
>>>> Bill Arlofski
>>>> w...@protonmail.com
>>>>
_______________________________________________
Bacula-users mailing list
Bacula-users@lists.sourceforge.net
https://lists.sourceforge.net/lists/listinfo/bacula-users
