The solution I came up with is to add a new storage resource pointing to
the same storage device, used just for remote client backups. This new
resource has the 'FDStorageAddress =' directive. I removed that directive
from the storages used for local backup and copy jobs, and this appears to
have fixed it. Local backu
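In case it helps anyone else, a rough sketch of that arrangement in
bacula-dir.conf is below; all names, addresses and passwords here are
made-up placeholders, not my actual config:

    # Storage resource used only by remote client backup jobs; the
    # FDStorageAddress is the address handed to the remote FD for
    # reaching the SD.
    Storage {
      Name = "sd-remote-backup"
      Address = 192.168.1.50              # address the Director uses
      FDStorageAddress = sd.example.org   # address given to remote FDs
      SDPort = 9103
      Password = "sd-password"
      Device = "FileStorage"
      Media Type = File
    }

    # Storage resource for local backup and copy jobs: no
    # FDStorageAddress, so local clients use the LAN address above.
    Storage {
      Name = "sd-local-backup"
      Address = 192.168.1.50
      SDPort = 9103
      Password = "sd-password"
      Device = "FileStorage"
      Media Type = File
    }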
My post got bounced because of the attachment. It may get passed through
eventually, but here it is again without it.
-Chris
On Sat, 10 Aug 2024, 16:42 Chris Wilkinson, wrote:
> I thought I had this fixed but not quite. Backups have gone back to the
> slow speed. The culprit is the "FDStorageAddress = FQ
The way I have it set up is:
- storage = “sd backup storage”: defined in the backup jobs, undefined in
  the backup pools (the job overrides the pool)
- storage = “sd copy/migrate storage”: undefined in the copy/migrate jobs,
  defined in the copy/migrate pools (the pool override always applies)
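In config terms that looks roughly like the below (resource names are
placeholders, not my real ones):

    # Backup jobs name the storage directly; the backup pool has no
    # Storage directive, so the job-level setting is what gets used.
    Job {
      Name = "backup-host1"
      Type = Backup
      Client = host1-fd
      FileSet = "Full Set"
      Messages = Standard
      Pool = backup-pool               # no Storage directive in this pool
      Storage = "sd backup storage"
    }

    # Copy/migrate jobs leave Storage unset; the destination storage
    # comes from the pool, so it is defined there instead.
    Pool {
      Name = copy-pool
      Pool Type = Backup
      Storage = "sd copy-migrate storage"
    }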
Best
-Chris-
> On 9 Aug 2024, at
On 8/9/24 4:51 AM, Chris Wilkinson wrote:
>> Just an aside - I realised whilst editing the jobs that the storage=“sd used for
>> backup jobs” should be specified in the Job resource; it’s not necessary (or
>> desirable) to specify the storage in the Pool as well, since the job overrides
>> the pool.
J
Update
The problem is resolved now, I think. I reinstalled Postgres, db, tables etc.
and the result was no better. I then stood up a new SD/NAS and adjusted the
jobs to back up to this. The result is backups at ~30MB/s, where it was
previously. Something has gone wrong with the prior NAS that
On 08/08/2024 04:27, Rob Gerber wrote:
I am not sure, but having done 'cat bacula.sql |more' I believe there
Please don't promulgate useless uses of "cat".
Either "more bacula.sql" or "less bacula.sql".
Cheers,
GaryB-)
The results after dropping and recreating the db and reloading the prior sql
dump show no improvement at all; in fact it's a bit slower. ☹ They are around
1MB/s.
I did a second test where the data is on the Pi SD card. This was also
~1MB/s, so that result seems to rule out the HDD as the source of the bottle
No worries, I've cleared out the db, run the postgres db scripts and
imported the sql dump. It's up and running again and all the jobs etc.
appear intact. I'm doing some testing and will report results back.
-Chris
On Wed, 7 Aug 2024, 20:58 Bill Arlofski, wrote:
> On 8/7/24 1:11 PM, Chris Wilkinson
On 8/7/24 1:11 PM, Chris Wilkinson wrote:
And then import the saved sql dump which drops all the tables again and
creates/fills them?
-Chris
Hello Chris!
My bad!
I have been using a custom script I wrote years ago to do my catalog backups. It uses what postgresql calls a custom (binary)
fo
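For what it's worth, a bare-bones sketch of that kind of dump/restore is
below; the database name, user and file name are just examples, not my
actual script:

    # Dump the catalog in PostgreSQL's custom (binary) format.
    pg_dump -U bacula -Fc -f bacula.dump bacula

    # Restore it later with pg_restore rather than psql; --clean drops
    # existing objects before recreating them.
    pg_restore -U bacula -d bacula --clean --if-exists bacula.dump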
That's an option, but if you wanted to confirm that db performance after
your catalog filled up wasn't a factor, you could run a backup first. Maybe
isolate/protect your current volumes first? Not sure how to go about
that. Maybe check the permission mask for future restoration and 'chmod 000
Your-vol
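Maybe something along these lines would do it (the volume path is a guess,
substitute your own):

    # Record the current permissions so they can be put back later,
    # then make the existing volume files unreadable/unwritable while testing.
    stat -c '%a %n' /mnt/nas/bacula/Vol-* > vol-perms.txt
    chmod 000 /mnt/nas/bacula/Vol-*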
I ran the smartctl short test and it passed. Smartctl isn't reporting any
errors or degraded test results, so the drive looks good. There was nothing
flaky on the SD end either.
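Roughly what I ran, for the record (substitute the actual device node for
your drive):

    # Kick off the short self-test, then review overall health,
    # attributes and the self-test log once it has finished.
    smartctl -t short /dev/sda
    smartctl -a /dev/sda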
On Wed, 7 Aug 2024, 20:23 Rob Gerber, wrote:
> What about filesystem performance on your volume or db host(s)? Fail
What about filesystem performance on your volume or db host(s)? Failing
drives get slower sometimes.
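A quick-and-dirty check is to time a large sequential write and read with
dd; the path below is just an example, point it at your CIFS-mounted volume
directory (and the db host's disk) as appropriate:

    # Write ~1GB and flush it, then read it back; dd reports the
    # throughput for each pass.
    dd if=/dev/zero of=/mnt/nas/ddtest bs=1M count=1024 conv=fsync
    dd if=/mnt/nas/ddtest of=/dev/null bs=1M
    rm /mnt/nas/ddtest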
Robert Gerber
402-237-8692
r...@craeon.net
On Wed, Aug 7, 2024, 3:21 PM Rob Gerber wrote:
> That's an option, but if you wanted to confirm that db performance after
> your catalog filled up wasn
And then import the saved sql dump which drops all the tables again and
creates/fills them?
-Chris
On Wed, 7 Aug 2024, 19:39 Bill Arlofski via Bacula-users, <
bacula-users@lists.sourceforge.net> wrote:
> On Wed, Aug 7, 2024, 10:27 AM Chris Wilkinson wrote:
> > Would it fail if no tables e
On Wed, Aug 7, 2024, 10:27 AM Chris Wilkinson wrote:
Would it fail if no tables exist? If so, I could use the bacula create tables
script first.
Hello Chris,
The Director would probably not even start. :)
If you DROP the bacula database, you need to run three scripts:
- create_bacula_data
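From memory the sequence is roughly the below; the script location depends
on how Bacula was installed (package vs. /opt/bacula), so adjust the paths
and db user to suit:

    # Run the catalog scripts as the postgres user, then reload the
    # saved dump (the dump itself drops/recreates the tables).
    su - postgres -c /opt/bacula/scripts/create_bacula_database
    su - postgres -c /opt/bacula/scripts/make_bacula_tables
    su - postgres -c /opt/bacula/scripts/grant_bacula_privileges
    psql -U bacula bacula < bacula.sql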
I am not sure, but having done 'cat bacula.sql |more' I believe there are
table recreation commands in the Bacula catalog backups. I suspect this
would be unnecessary, but you are probably on the right track if you ran
into trouble of that sort.
Robert Gerber
402-237-8692
r...@craeon.net
On Wed,
Hello Rob,
This all went fairly easily, as you suggested it would. All the original
tables were deleted, recreated and populated with the prior contents.
That's very nice, but it hasn't fixed the problem. Backups are still running
at <2MB/s as before, so I'm no further forward, but at least I didn't F
In my experience running bacula 13.0.x with postgres, catalog restoration
has been generally fairly easy. I am not a database expert at all.
From memory, on my phone, while on vacation, so double-check stuff yourself:
Take a new catalog backup unless you are absolutely certain that your most
rec
Hello Bill.
'SpoolAttributes' is either unspecified or '= yes', and 'SpoolData' is not
used anywhere. The 'big' jobs where this hurts specifically have
'SpoolAttributes = yes' and 'SpoolData' not specified. That seems fine, as
the default for 'SpoolAttributes' is yes.
I do have 'Accurate = yes' in al
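Concretely, the relevant bits of those jobs look something like this (the
job name and client are placeholders, not my real resources):

    Job {
      Name = "big-backup"
      Type = Backup
      Client = host1-fd
      FileSet = "Full Set"
      Storage = "sd backup storage"
      Pool = backup-pool
      Messages = Standard
      Spool Attributes = yes    # explicit here, but yes is the default
      Accurate = yes
      # Spool Data not set, so it defaults to no
    }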
On 8/6/24 9:01 AM, Chris Wilkinson wrote:
I've had v11/postgresql13 running well for a long time but just recently it has started to run very slow. The Dir/FD is on a
Raspberry PiB with 8GB memory, the SD on a NAS mounted via CIFS over a GbE network. I was getting a rate of ~30MB/s on the backup
but
I've had v11/postgresql13 running well for a long time but just recently it
has started to run very slow. The Dir/FD is on a Raspberry PiB with 8GB
memory, the SD on a NAS mounted via CIFS over a GbE network. I was getting a
rate of ~30MB/s on the backup but this has dropped to ~1-2MB/s. I can see
simi