On Fri, Aug 2, 2024, at 11:03 AM, Marco Gaiarin wrote:
> Hello, Andrea Venturoli!
> On that day, you wrote...
>
>> Probably yes, but why bother?
>> Just take multiple snapshots: they get a different mountpoint by
>> default and don't take any space.
>
> Right... I never paid attention to that... I've
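As an aside on the snapshot suggestion above: the behaviour described
(separate mountpoint, no space used at first) matches ZFS-style snapshots.
A minimal sketch, assuming ZFS and a hypothetical dataset tank/bacula:

  # Take a snapshot before an experiment (hypothetical dataset name)
  zfs snapshot tank/bacula@before-test

  # Snapshots are exposed read-only under the dataset's .zfs directory
  ls /tank/bacula/.zfs/snapshot/before-test

  # A snapshot uses no space until the live data diverges from it
  zfs list -t snapshot -o name,used,referenced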
On 08/08/2024 04:27, Rob Gerber wrote:
> I am not sure, but having done 'cat bacula.sql |more' I believe there
Please don't promulgate useless uses of "cat".
Either "more bacula.sql" or "less bacula.sql".
Cheers,
GaryB-)
The results after dropping and recreating the db and reloading the prior sql
dump show no improvement at all; in fact it's a bit slower. ☹ Backups are
running at around 1MB/s.
I did a second test with the data on the Pi SD card. This was also
~1MB/s, so that result seems to rule out the HDD as the source of the bottle
No worries. I've cleared out the db, run the postgres db scripts and
imported the sql dump. It's up and running again and all the jobs etc.
appear intact. I'm doing some testing and will report the results back.
-Chris
On Wed, 7 Aug 2024, 20:58 Bill Arlofski, wrote:
> On 8/7/24 1:11 PM, Chris Wilkinson
On 8/7/24 1:11 PM, Chris Wilkinson wrote:
> And then import the saved sql dump which drops all the tables again and
> creates/fills them?
> -Chris
Hello Chris!
My bad!
I have been using a custom script I wrote years ago to do my catalog backups. It uses what PostgreSQL calls a custom (binary)
format
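For anyone following along, the "custom (binary) format" mentioned here is
pg_dump's -Fc output, which is restored with pg_restore rather than psql.
A minimal sketch, not Bill's actual script; the user and database names
are assumed to both be 'bacula':

  # Dump the catalog in PostgreSQL's custom (compressed, binary) format
  pg_dump -Fc -U bacula -d bacula -f bacula.catalog.dump

  # Restore it later; --clean drops existing objects before recreating them
  pg_restore -U bacula -d bacula --clean --if-exists bacula.catalog.dump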
That's an option, but if you wanted to confirm that db performance after
your catalog filled up wasn't a factor, you could run a backup first. Maybe
isolate / protect your current volumes first? Not sure how to go about
that. Maybe check the permission mask for future restoration and 'chmod 000
Your-vol
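A rough illustration of the "protect the volumes" idea; the directory,
volume name pattern, and the 0640 mask below are placeholders, not taken
from the thread:

  # Note the current permissions so they can be restored later
  ls -l /srv/bacula/volumes > vol-perms.txt

  # Make the existing volume files unwritable for the duration of the test
  chmod 000 /srv/bacula/volumes/Vol-*

  # Afterwards, put the usual mask back (0640 is only an example)
  chmod 640 /srv/bacula/volumes/Vol-*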
I ran the smartctl short test and it passed. smartctl isn't reporting any
errors or degraded test results, so the drive looks good. There was
nothing flaky on the SD card end either.
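For completeness, the checks described above would look something like
this (the device name /dev/sda is a placeholder):

  # Kick off the short self-test (takes a couple of minutes)
  sudo smartctl -t short /dev/sda

  # When it finishes, review the self-test results and the attribute/error logs
  sudo smartctl -a /dev/sda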
On Wed, 7 Aug 2024, 20:23 Rob Gerber, wrote:
> What about filesystem performance on your volume or db host(s)? Fail
What about filesystem performance on your volume or db host(s)? Failing
drives get slower sometimes.
Robert Gerber
402-237-8692
r...@craeon.net
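One hedged way to sanity-check raw throughput on the volume and catalog
filesystems; the paths are placeholders, and oflag=direct (which bypasses
the page cache) needs a filesystem that supports it:

  # Sequential write test on the volume filesystem; dd reports MB/s when done
  dd if=/dev/zero of=/srv/bacula/volumes/ddtest bs=1M count=512 oflag=direct
  rm /srv/bacula/volumes/ddtest

  # Read test on the underlying device (read-only, safe)
  sudo hdparm -t /dev/sda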
On Wed, Aug 7, 2024, 3:21 PM Rob Gerber wrote:
> That's an option, but if you wanted to confirm that db performance after
> your catalog filled up wasn
And then import the saved sql dump which drops all the tables again and
creates/fills them?
-Chris
On Wed, 7 Aug 2024, 19:39 Bill Arlofski via Bacula-users, <
bacula-users@lists.sourceforge.net> wrote:
> On Wed, Aug 7, 2024, 10:27 AM Chris Wilkinson wrote:
> >
> > Would it fail if no tables e
On Wed, Aug 7, 2024, 10:27 AM Chris Wilkinson wrote:
> Would it fail if no tables exist? If so, I could use the bacula create tables
> script first.
Hello Chris,
The Director would probably not even start. :)
If you DROP the bacula database, you need to run three scripts (a sketch of
the usual sequence follows below):
- create_bacula_database
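For the archive, the usual sequence with the stock catalog scripts looks
roughly like this; the directory varies by packaging (the /opt/bacula path
below is just one common layout), so check your install:

  # Run as a user with PostgreSQL superuser rights (often 'postgres')
  cd /opt/bacula/scripts
  ./create_bacula_database
  ./make_bacula_tables
  ./grant_bacula_privileges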
I am not sure, but having done 'cat bacula.sql |more' I believe there are
table recreation commands in the Bacula catalog backups. I suspect this
would be unnecessary, but you are probably on the right track if you ran
into trouble of that sort.
Robert Gerber
402-237-8692
r...@craeon.net
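A quick way to confirm that without paging through the whole dump, assuming
the plain-SQL bacula.sql produced by the standard catalog backup job:

  # Show the first few table-management statements in the catalog dump
  grep -n -m 10 -E '^(DROP|CREATE) TABLE' bacula.sql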
On Wed,
Hello Rob
This all went fairly easily, as you suggested it would. All the original
tables were deleted, recreated and populated with the prior contents.
That's very nice, but it hasn't fixed the problem. Backups are still running
at <2MB/s as before, so I'm no further forward, but at least I didn't F
In my experience running Bacula 13.0.x with postgres, catalog restoration
has been generally fairly easy. I am not a database expert at all.
From memory, on my phone, while on vacation, so double-check stuff yourself:
Take a new catalog backup unless you are absolutely certain that your most
recent
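Since Rob is (sensibly) hedging from memory, here is an equally hedged
sketch of the plain-SQL reload path being discussed; the database name,
dump filename, and systemd unit name are assumptions:

  # Stop the Director so nothing writes to the catalog during the reload
  # (unit name varies by distro: bacula-dir, bacula-director, ...)
  sudo systemctl stop bacula-dir

  # Load the dump; it contains the DROP/CREATE TABLE statements noted above
  psql -U bacula -d bacula -f bacula.sql

  sudo systemctl start bacula-dir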
Hello Bill.
'SpoolAttributes' is either unspecified or '= yes', and 'SpoolData' is not
used anywhere. The 'big' jobs where this hurts have 'SpoolAttributes = yes'
set explicitly, with 'SpoolData' not specified. That seems fine, as the
default for 'SpoolAttributes' is yes.
I do have 'Accurate = yes' in all
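If it helps to double-check what a job actually resolves to, rather than
reading the conf files by hand, something like this should work in
bconsole (the job name is a placeholder, and the exact 'show' output
format varies by version):

  # Print the resolved Job resource and filter for the relevant directives
  echo 'show job=BigNightlyJob' | bconsole | grep -Ei 'spool|accurate'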