The solution I came up with was to add a second Storage resource pointing to the same storage device, used just for remote client backup. Only that resource has the 'FDStorageAddress =' directive; I removed it from the Storage resources used for local backup and copy jobs. This appears to have fixed it: local backups run at full speed and remote backups are working again.
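For anyone wanting the shape of this, here is a minimal sketch in bacula-dir.conf; the resource names, addresses and Device/Media Type are made up for illustration:

    # Storage resource for local backup and copy jobs: no FDStorageAddress,
    # so local FDs connect to the SD's LAN address at full speed.
    Storage {
      Name = NAS-Local
      Address = 192.168.1.10     # LAN address of the SD
      SD Port = 9103
      Password = "sd-password"
      Device = FileStorage
      Media Type = File
    }

    # Second Storage resource for the same SD device, used only by jobs
    # for offsite clients: FDStorageAddress gives the remote FD a public
    # FQDN to use when contacting the SD.
    Storage {
      Name = NAS-Remote
      Address = 192.168.1.10
      FDStorageAddress = sd.example.com
      SD Port = 9103
      Password = "sd-password"
      Device = FileStorage       # same device as NAS-Local
      Media Type = File
    }

Local jobs then use storage=NAS-Local while the offsite client's job uses storage=NAS-Remote, so only the remote traffic is routed via the FQDN.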
-Chris-

On Sat, 10 Aug 2024, 16:47 Chris Wilkinson, <winstonia...@gmail.com> wrote:

> My post got bounced because of the attachment. It may get passed through
> eventually, but here it is again without.
>
> -Chris
>
> On Sat, 10 Aug 2024, 16:42 Chris Wilkinson, <winstonia...@gmail.com> wrote:
>
>> I thought I had this fixed, but not quite. Backups have gone back to the
>> slow speed. The culprit is the "FDStorageAddress = FQDN" directive in the
>> Storage resource for the SD, necessary to enable remote client backup.
>>
>> After setting up a replacement storage device, one job for a remote
>> client was not working. That was because I had mistakenly omitted
>> FDStorageAddress = "FQDN of SD" in the Storage resources (x2: one for
>> backups, one for copy). I corrected this and the remote client backup
>> worked again.
>>
>> However, the side effect of this was to drastically slow down all
>> backups again. To check this, I took the "FDStorageAddress = xx"
>> directive out of the Storage resources again and backups returned to
>> normal speed. Remote client backup doesn't work now.
>>
>> It seems all my previous troubles were due to moving a client offsite 😂.
>>
>> If you can see the attached screenshot of DIR+SD network traffic, you
>> can see the low outgoing speed with traffic both ways, and then the full
>> speed with little incoming traffic after the directive was removed and
>> Bacula restarted.
>>
>> -Chris
>>
>> On Fri, 9 Aug 2024, 11:51 Chris Wilkinson, <winstonia...@gmail.com> wrote:
>>
>>> Update
>>>
>>> The problem is resolved now, I think. I reinstalled Postgres, the db,
>>> tables etc. and the result was no better. I then stood up a new SD/NAS
>>> and adjusted the jobs to back up to this. The result is backups at
>>> ~30MB/s, which is where it was previously. Something has gone wrong
>>> with the prior NAS that I haven't been able to get to the bottom of,
>>> so I'll just junk it and move on.
>>>
>>> Just an aside - I realised whilst editing the jobs that the
>>> storage="sd used for backup jobs" should be specified in the Job
>>> resource; it's not necessary (or desirable) to specify the storage in
>>> the Pool as well, since the job overrides the pool. This doesn't seem
>>> to be the case for Copy/Migrate jobs: the storage="sd used for copy
>>> jobs" has to be specified in every Pool used for copy jobs. Am I right
>>> that there is no equivalent override mechanism for Copy/Migrate jobs?
>>>
>>> Best
>>> -Chris-
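To illustrate the Copy/Migrate aside above: Chris's observation matches how the write side of a Copy job is selected, namely from the Storage directive of the pool named by Next Pool rather than from the Job. A minimal sketch in bacula-dir.conf, with made-up resource names:

    # Source pool: read-side storage and the destination pool for copies.
    Pool {
      Name = Full-Pool
      Pool Type = Backup
      Storage = NAS-Local        # where copy jobs read from
      Next Pool = Copy-Pool      # where copy jobs write to
    }

    # Destination pool: its Storage directive selects the write-side SD;
    # a Storage override in the Job does not pick the copy destination.
    Pool {
      Name = Copy-Pool
      Pool Type = Backup
      Storage = NAS-Copy
    }

    Job {
      Name = CopyToNAS
      Type = Copy
      Selection Type = PoolUncopiedJobs   # copy jobs not yet copied
      Pool = Full-Pool                    # source pool for the selection
      Client = local-fd                   # required by the parser, not used
      FileSet = "Full Set"                # likewise nominal for a Copy job
      Messages = Standard
    }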
>>>
>>> On 7 Aug 2024, at 23:06, Chris Wilkinson <winstonia...@gmail.com> wrote:
>>>
>>> The results after dropping and reinstating the db + reloading the prior
>>> sql dump show no improvement at all, in fact a bit slower. ☹ They are
>>> around 1MB/s.
>>>
>>> I did a second test where the data is on the Pi SD card. This was also
>>> ~1MB/s, so that result seems to rule out the HDD as the source of the
>>> bottleneck.
>>>
>>> I think that leaves Postgres as the only possible culprit.
>>>
>>> Thank you all for your suggestions.
>>>
>>> -Chris
>>>
>>> On Wed, 7 Aug 2024, 21:23 Chris Wilkinson, <winstonia...@gmail.com> wrote:
>>>
>>>> No worries, I've cleared out the db, run the postgres db scripts and
>>>> imported the sql dump. It's up and running again and all the jobs etc.
>>>> appear intact. Doing some testing, so will report results back.
>>>>
>>>> -Chris
>>>>
>>>> On Wed, 7 Aug 2024, 20:58 Bill Arlofski, <w...@protonmail.com> wrote:
>>>>
>>>>> On 8/7/24 1:11 PM, Chris Wilkinson wrote:
>>>>> > And then import the saved sql dump which drops all the tables again
>>>>> > and creates/fills them?
>>>>> >
>>>>> > -Chris
>>>>>
>>>>> Hello Chris!
>>>>>
>>>>> My bad!
>>>>>
>>>>> I have been using a custom script I wrote years ago to do my catalog
>>>>> backups. It uses what PostgreSQL calls a custom (binary) format. It's
>>>>> typically faster and smaller, so I switched to this format more than
>>>>> 10 years ago. I had not looked at an ASCII dump version in years and
>>>>> I just looked now, and it does indeed DROP and CREATE everything.
>>>>>
>>>>> So, the only thing you needed to do was create the database with the
>>>>> create_bacula_database script, then restore the SQL dump.
>>>>>
>>>>> Sorry for the static. :)
>>>>>
>>>>> Best regards,
>>>>> Bill
>>>>>
>>>>> --
>>>>> Bill Arlofski
>>>>> w...@protonmail.com
>>>>>
>>>>> -Chris Wilkinson
>>>
>>>
>>> -Chris Wilkinson
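For reference, the two dump styles Bill contrasts, sketched with stock PostgreSQL commands (the database name "bacula" and the use of -c/--clean when the dump was taken are assumptions):

    # Plain ASCII dump: a text file of SQL statements. With -c/--clean it
    # embeds the DROP and CREATE statements, which is why restoring it
    # rebuilds all the tables.
    pg_dump -c bacula > bacula.sql
    psql bacula < bacula.sql

    # Custom (binary) format: compressed, typically smaller and faster,
    # restored with pg_restore (add -c to drop objects before recreating).
    pg_dump -Fc bacula > bacula.dump
    pg_restore -c -d bacula bacula.dump

Either way the database itself must already exist, e.g. created with Bacula's create_bacula_database script as Bill notes.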
_______________________________________________
Bacula-users mailing list
Bacula-users@lists.sourceforge.net
https://lists.sourceforge.net/lists/listinfo/bacula-users