Hello Bill.

'SpoolAttributes' is either unspecified or '= yes', and 'SpoolData' is not
used anywhere. The 'big' jobs where this hurts all have
'SpoolAttributes = yes' set explicitly, with 'SpoolData' not specified. That
seems fine, as the default for 'SpoolAttributes' is yes.

I do have 'Accurate = yes' in all jobs, but it's been that way for a long
time. This problem has surfaced in the last month, though looking back over
earlier job logs, some signs of slowdown were apparent before that.

The 'nuclear' option might be to reinstall PostgreSQL from scratch and
restore the catalog. I've never had to restore a catalog before and it
seems a bit difficult 😕. I'm not even sure that would be advisable or
necessary; why not just start all the backups again from zero? I'm not
worried about losing earlier file versions. And while I'm at it, upgrade to
v15 as well 😀.
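If I do go that route, my understanding is that a plain dump/restore of the
catalog database is the usual approach (just a sketch, assuming the catalog
database and role are both named 'bacula' and the commands are run as the
postgres superuser; names may differ on a given install):

```shell
# Dump the existing catalog before touching anything (custom format
# so pg_restore can be used later)
pg_dump --format=custom --file=bacula-catalog.dump bacula

# After reinstalling PostgreSQL (e.g. v15): recreate the role and an
# empty database, then restore the dump into it
createuser bacula
createdb --owner=bacula bacula
pg_restore --dbname=bacula bacula-catalog.dump
```

The Director should be stopped while the restore runs, and starting fresh
(an empty catalog plus new Full backups) would of course skip the restore
step entirely.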

An example job that runs slow is:

Job {
  Name = "media"
  Description = "media not pictures or music"
  Type = "Backup"
  Level = "Full"
  Messages = "Standard"
  Storage = "dns-325-sd"
  Pool = "media-full"
  FullBackupPool = "media-full"
  IncrementalBackupPool = "media-incremental"
  DifferentialBackupPool = "media-differential"
  Client = "usb16tb-fd"
  Fileset = "media"
  Schedule = "media"
  Where = "/"
  WriteBootstrap = "/var/lib/bacula/%n.bsr"
  Replace = "Never"
  MaxFullInterval = 5184000
  MaxDiffInterval = 2678400
  PruneJobs = yes
  PruneFiles = yes
  PruneVolumes = yes
  Enabled = yes
  SpoolAttributes = yes      <<<<<<
  Runscript {
    RunsWhen = "After"
    FailJobOnError = no
    RunsOnClient = no
    Command = "/home/pi/run-copy-job.sh %n-copy %l %n-%l %n-copy-%l"
  }
  MaximumConcurrentJobs = 5
  RescheduleIncompleteJobs = no
  Priority = 10
  AllowIncompleteJobs = no
  Accurate = yes
  AllowDuplicateJobs = no
}


-Chris

On Tue, 6 Aug 2024, 23:49 Bill Arlofski via Bacula-users, <
bacula-users@lists.sourceforge.net> wrote:

> On 8/6/24 9:01 AM, Chris Wilkinson wrote:
> > I've had v11/postgresql13 running well for a long time but just recently
> > it has started to run very slow. The Dir/Fd is on a Raspberry PiB with
> > 8GB memory, Sd on a NAS mounted via CIFS over a Gbe network. I was
> > getting a rate of ~30MB/s on the backup but this has dropped to
> > ~1-2MB/s. I can see similar values on the network throughput page of
> > Webmin. Backups that used to take 10h are now stretching out 10x and
> > running into the next scheduled backups. Jobs do eventually complete OK
> > but are much too slow.
> >
> > It remains the same after a couple of reboots of both the Pi and NAS.
> >
> > I've tried my usual suite of tools e.g. htop, iotop, glances, iostat,
> > iperf3, but none of these are raising any flags. Iowait is < 2%, cpu is
> > < 10%, swap is 0 used, free mem is > 80%. Iperf3 network speed testing
> > Dir<=>Fd is close to 1Gb/s, rsync transfers Pi>NAS @ 22MB/s, so I don't
> > suspect a network issue.
> >
> > On the NAS, I have more limited tools, but ifstat shows a similarly low
> > incoming network rate. No apparent issues on cpu load, swap, memory, or
> > disk either. fsck ran with no errors.
> >
> > I thought maybe there was a database problem, so I've also had a try at
> > adjusting the PostgreSQL conf per the suggestions from Pgtune, but to no
> > effect. Postgresqltuner doesn't reveal any problems with the database
> > performance. Postgres was restarted, of course.
> >
> > Backup to S3 cloud is also slow, by about 3x. It runs at 25MB/s (22MB/s
> > previously) into the local disk cache and then 2MB/s to cloud storage
> > vs. 6MB/s previously. My fibre upload limits at 50Mb/s. I would have
> > expected that a database issue would impact the caching equally, but
> > that doesn't seem to be the case.
> >
> > So the conclusions so far are that it's not network and not database 🤔.
> >
> > I'm running out of ideas now and am hoping you might have some.
> >
> > -Chris Wilkinson
>
> Hello Chris,
>
> This is a long shot, but is there *any* chance you have disabled attribute
> spooling in your jobs? (SpoolAttributes = no)
>
> If this is disabled, then the SD and the Director are in constant
> communication and for each file backed up the SD sends the
> attributes to the Director and the Director has to insert the record into
> the DB as each file is backed up.
>
> With attribute spooling enabled (the default), the SD spools them locally
> to a file, then sends this one file at the end of
> the job and the Director batch inserts all of the attributes at once
> (well, in one batch operation).
>
> Crossing my fingers on this one.🤞 :)
>
>
> Best regards,
> Bill
>
> --
> Bill Arlofski
> w...@protonmail.com
>
>
_______________________________________________
Bacula-users mailing list
Bacula-users@lists.sourceforge.net
https://lists.sourceforge.net/lists/listinfo/bacula-users
