The results after dropping and recreating the db and reloading the prior sql
dump show no improvement at all; if anything it is a bit slower. ☹ Throughput
is still around 1MB/s.

I ran a second test with the data on the Pi SD card. This was also ~1MB/s,
which seems to rule out the HDD as the source of the bottleneck.
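For anyone wanting to reproduce the figure, a rough way to time a reload and compute MB/s might look like this (a sketch only; the dump file `bacula.sql` and database name `bacula` are placeholders for whatever names are actually in use):

```shell
# Rough restore-throughput check. Assumes the target database already
# exists and the dump is a plain SQL dump. Restores taking under a
# second would divide by zero, but a multi-GB catalog won't.
SIZE=$(stat -c%s bacula.sql)              # dump size in bytes
START=$(date +%s)
psql -d bacula -f bacula.sql >/dev/null   # reload the plain SQL dump
END=$(date +%s)
echo "$(( SIZE / (END - START) / 1048576 )) MB/s"
```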

I think that leaves Postgres itself as the likely culprit.
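Before pinning it on Postgres, it may be worth checking a few settings that commonly throttle bulk reloads, especially on small machines like a Pi. These are generic suggestions, not specific to this setup:

```shell
# Settings that commonly affect bulk-restore speed (generic sketch;
# inspect the current values before changing anything).
psql -d bacula -c "SHOW synchronous_commit;"    # 'off' speeds loads, small crash-loss window
psql -d bacula -c "SHOW maintenance_work_mem;"  # larger values help index rebuilds
psql -d bacula -c "SHOW shared_buffers;"        # distro defaults on a Pi are often tiny

# Restoring a plain SQL dump inside a single transaction avoids
# per-statement commit overhead:
psql -d bacula --single-transaction -f bacula.sql
```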

Thank you all for your suggestions.

-Chris

On Wed, 7 Aug 2024, 21:23 Chris Wilkinson, <winstonia...@gmail.com> wrote:

> No worries, I've cleared out the db, run the postgres db scripts and
> imported the sql dump. It's up and running again and all the jobs etc.
> appear intact. Doing some testing so will report results back.
>
> -Chris
>
> On Wed, 7 Aug 2024, 20:58 Bill Arlofski, <w...@protonmail.com> wrote:
>
>> On 8/7/24 1:11 PM, Chris Wilkinson wrote:
>> > And then import the saved sql dump which drops all the tables again and
>> creates/fills them?
>> >
>> > -Chris
>>
>> Hello Chris!
>>
>> My bad!
>>
>> I have been using a custom script I wrote years ago to do my catalog
>> backups. It uses what postgresql calls a custom (binary)
>> format. It's typically faster and smaller, so I switched to this format
>> more than 10 years ago. I had not looked at an ASCII
>> dump version in years, and I just looked now: it does indeed DROP and
>> CREATE everything.
>>
>> So, the only thing you needed to do was create the database with the
>> create_bacula_database script, then restore the sql dump.
>>
>> Sorry for the static. :)
>>
>>
>> Best regards,
>> Bill
>>
>> --
>> Bill Arlofski
>> w...@protonmail.com
>>
_______________________________________________
Bacula-users mailing list
Bacula-users@lists.sourceforge.net
https://lists.sourceforge.net/lists/listinfo/bacula-users