used as
the order of the files backed up within a job (e.g., the first file in the
job is "1", the second is "2", and so on), which makes negative numbers
seem like they either have some special meaning, or there is something
else broken here.
Thanks,
Lloyd
Never mind. This was entirely my fault.
At the same time I switched SDs, I also switched FDs, and I was trying to
restore while specifying the old FD.
Nothing to see here.
On 8/24/20 4:35 PM, Lloyd Brown wrote:
> For a couple of hardware-failure reasons, I had to recently change SDs
>
even a
VirtualFull, since that original Full completed, so I'm pretty sure the
data is intact. I suspect it's just the selection algorithm/query.
This is running Bacula v9.4.3, if that helps.
Thanks,
Lloyd
--
Lloyd Brown
HPC Systems Administrator
Office of Research Computing
81371,181679,182032,182350,182668,182718,183093,183408,183724,184039,184354,184669,184984,185615)
> group by jobid;
>
> __Martin
>
>
>>>>>> On Mon, 30 Sep 2019 13:09:54 -0600, Lloyd Brown said:
>> Hi, all.
>>
>> I may be misunderstanding (or misconfiguring
t config example attached).
I do also have a workaround in mind. I'm just trying to understand
whether I've done something wrong, and if not, what the reasoning was for
making this a fatal error. It was certainly unexpected.
This is running on Bacula 9.4.3.
Thanks,
Lloyd
--
Lloyd Brown
believe you can do it with this:
>
> https://www.bacula.org/9.4.x-manuals/en/main/New_Features_in_5_2_13.html#SECTION009510000000
>
> -Jonathan Hankins
>
> On Fri, Jun 14, 2019 at 3:26 PM Lloyd Brown <lloyd_br...@byu.edu> wrote:
>
> Is there a
t's something else (e.g., Full or VirtualFull), I'd like it to be allowed
to queue up and wait for the incremental to finish (due to "Maximum
Concurrent Jobs = 1") before running.
Is this possible? Is my explanation clear?
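For reference, a minimal sketch of the kind of config I mean (resource
names are hypothetical):

  # bacula-dir.conf
  Job {
    Name = "client1-backup"
    Client = "client1-fd"
    # with this set to 1, a later Full/VirtualFull should queue up
    # behind a still-running Incremental instead of erroring out
    Maximum Concurrent Jobs = 1
    ...
  }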
Thanks,
Lloyd
--
Lloyd Brown
HPC Systems Administrator
n have multiple
> storage devices.
>
> Best regards,
>
> Kern
>
> On 5/31/19 7:38 PM, Lloyd Brown wrote:
>> On 5/31/19 10:46 AM, Martin Simmons wrote:
>>> The "waiting on max Storage jobs" is caused by the Maximum Concurrent Jobs
>>> in
>>
o that your job can figure out that the files are
moved to the trash folder instead? For example, if files are trashed
after, say, 90 days, then your script could output the list of new
directories to back up, and a list of the 5 directories that are 90-95 days old.
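If it helps, a hedged sketch of how that script's output could feed
Bacula, using the FileSet "read the file list from a program" syntax
(script path hypothetical):

  FileSet {
    Name = "trash-aware-homes"
    Include {
      Options { signature = MD5 }
      # "\\|" runs the program on the client at job start and backs up
      # the paths it prints, one per line
      File = "\\|/usr/local/bin/list_dirs_to_backup.sh"
    }
  }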
Good
e the insight you were looking for, I will try to
re-create it again, and capture the same info for you.
Lloyd
--
Lloyd Brown
HPC Systems Administrator
Office of Research Computing
Brigham Young University
http://marylou.byu.edu
bacula_waiting_for_storage_issue.tar.gz
torage sections
of both bacula-dir.conf and bacula-sd.conf. I'm not finding any other
relevant maximum that would apply to storage generally, but I may be
missing something.
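For reference, the knobs I've been adjusting look roughly like this (a
sketch; values illustrative):

  # bacula-dir.conf
  Storage {
    Name = backup-sd
    Maximum Concurrent Jobs = 20
    ...
  }

  # bacula-sd.conf
  Storage {
    Name = backup-sd
    Maximum Concurrent Jobs = 20
    ...
  }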
This is v9.4.3 for director, SDs, and the currently-running FDs (some
other FDs are older, but they're not in
ely to
not only work, but work well.
Then again, I've never even gotten traditional virtual-full working.
It's been on my todo list for several years now. Go figure.
--
Lloyd Brown
Systems Administrator
Fulton Supercomputing Lab
Brigham Young University
http://marylou.byu.edu
On 05/22
"
>
> It will run on the client after the restore job.
>
Josip,
Just to be clear, are you defining this in the bacula-dir.conf file, or
as a parameter to the "restore" command in the CLI?
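For what it's worth, the director-side version I'm imagining would be
something like this (a sketch; the script path is hypothetical):

  Job {
    Name = "RestoreFiles"
    Type = Restore
    # runs on the client after the restore completes
    ClientRunAfterJob = "/usr/local/bin/post_restore.sh"
    ...
  }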
Lloyd
--
Lloyd Brown
Systems Administrator
Fulton Supercomputing Lab
Brigham Young University
ld certainly be a less-obvious way to apply verify
jobs that I'm not thinking of right now. I'll dig into it further.
--
Lloyd Brown
Systems Administrator
Fulton Supercomputing Lab
Brigham Young University
http://marylou.byu.edu
---
considered just doing an "echo 'wait' | bconsole" or similar, but there's
a possibility of other jobs still running, so I don't necessarily want
to wait for *those*.
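(For the record, the blunt version waits on everything; a per-job
variant, assuming "wait" accepts a jobid as the console manual describes,
would be something like:

  echo "wait" | bconsole             # waits for all running jobs
  echo "wait jobid=1234" | bconsole  # waits only for job 1234 (jobid hypothetical)

but the first form is exactly what I'm trying to avoid.)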
Any thoughts/recommendations? I'm coming up empty at the moment.
Thanks,
Lloyd
--
Lloyd Brown
> +-------+---------------+---------------------+------+-------+----------+----------+-----------+
> | JobId | Name          | StartTime           | Type | Level | JobFiles | JobBytes | JobStatus |
> +-------+---------------+---------------------+------+-------+----------+----------+-----------+
> |     8 | BackupClient1 | 2016-04-13 16:56:37 | B    | F     |        3 |        0 | T         |
> +-------+---------------+---------------------+------+-------+----------+----------+-----------+
>
> Regards,
>
> - Original Message -
>> From: "Heitor Faria"
>
> +-------+---------------+---------------------+------+-------+----------+----------+-----------+
> | JobId | Name          | StartTime           | Type | Level | JobFiles | JobBytes | JobStatus |
> +-------+---------------+---------------------+------+-------+----------+----------+-----------+
> |     5 | BackupCl
s not seem accurate to me. The Catalog backup job works
> flawlessly with the same logic, and I'm pretty sure the Bacula client
> only scans the filesystem after the before-job script has terminated.
> Are you sure your before job script is not deleting the temporary file?
> If still in doubt ma
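For reference, the pattern being discussed (a before-job script writes a
temporary file list, which the FileSet then reads on the client) looks
roughly like this (a sketch; paths hypothetical):

  Job {
    ...
    # writes /tmp/bacula-filelist on the client before the backup starts
    ClientRunBeforeJob = "/usr/local/bin/make_filelist.sh"
  }
  FileSet {
    Include {
      # "\\<" reads the list of paths from this file on the client
      File = "\\</tmp/bacula-filelist"
    }
  }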
dations here? Am I completely misunderstanding the situation?
--
Lloyd Brown
Systems Administrator
Fulton Supercomputing Lab
Brigham Young University
http://marylou.byu.edu
Job {
  Name = "zhome_l"
  JobDefs = "DefaultJobZHomebackup2"
  FileSet = "zhome_files
way of
dividing them up roughly evenly. Then we can have different letters
doing full backups on different days of the month, etc.
There may be a better way than this. It's not as complicated to manage
as one job per homedir, but not as well-balanced either. Just kinda in
the middle.
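A sketch of the letter-bucket idea (names and ranges hypothetical), with
the fulls staggered across the month:

  Schedule {
    Name = "Homes-AtoE-Cycle"        # homes a-e: fulls on the 1st Saturday
    Run = Full 1st sat at 23:05
    Run = Incremental 2nd-5th sat at 23:05
  }
  # ...and a parallel "Homes-FtoM-Cycle" on the 2nd Saturday, and so on,
  # each attached to a Job whose FileSet covers that letter range.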
On 03/31/2016 12:20 PM, Dimitri Maziuk wrote:
> On 03/31/2016 12:35 PM, Lloyd Brown wrote:
>
>> > ... Unfortunately, I'm
>> > not exaggerating to say that we have users with file profiles that
>> > consist of 35million files (or more) with average file size
hat filesystem, and occasionally causes
problems like directory cache thrashing, etc.
Believe me, I wish it weren't the case. User education is about the
hardest part of my job. Most of the time, the best we can do is react
when things go badly.
--
Lloyd Brown
Systems Administrator
Fulton Supercomputing Lab
I don't really want to have to
restore the user's entire homedir (could be up to 1TB) just to get at
one or two files. Using a filesystem-based view lets me do both a
single-file restore and a massive disaster recovery.
Sorry I wasn't clear about those details before. My emails
t autofs mount) the volumes in
question via a pre-job script, and unmount them in a post-job script.
And like I said, I'm happy to share them here once it's done and vetted.
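The rough shape of it (a sketch; script paths hypothetical):

  Job {
    ...
    RunScript {
      RunsWhen = Before
      RunsOnClient = Yes
      Command = "/usr/local/bin/mount_homes.sh"
    }
    RunScript {
      RunsWhen = After
      RunsOnClient = Yes
      Command = "/usr/local/bin/umount_homes.sh"
    }
  }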
If there are other NFS issues, I have yet to encounter them in my 7+
years of backing up with bacula, but I'd lo
'm considering right now, but I'm not certain yet.
If there's community interest, once I figure it out completely, I can
probably post example configs and scripts.
--
Lloyd Brown
Systems Administrator
Fulton Supercomputing Lab
new FS is mounted in the tree where the backup is occurring?
That's dramatically less likely to happen.
Now if I can figure out a good way to mount everything I need, or
alternatively use an external file or script to define it, I might be in
Okay. I'm hoping that someone can give me some helpful
suggestions here. I'm having trouble figuring out the best way to
manage backups for some automounted user directories.
In short, I need to back up a set of NFS-shared filesystems that
represent users' home directories on my system.
Never mind. Just reread your previous email, and realized you said that
bacula was 20-25x slower than rsync. So definitely look at the DB
tuning, especially if your DB is very large.
Lloyd Brown
Systems Administrator
Fulton Supercomputing Lab
Brigham Young University
http://marylou.byu.edu
o InnoDB tables, and massively increased how much of
the tables the MySQL daemon kept in RAM. It was pretty dramatic.
But then again, we have a very large database, so I'm not sure how
applicable this will be for you.
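Concretely, the two changes were along these lines (a sketch; the table
names are the stock Bacula catalog tables, and the size is illustrative):

  -- convert the big catalog tables to InnoDB
  ALTER TABLE File ENGINE=InnoDB;
  ALTER TABLE Path ENGINE=InnoDB;
  ALTER TABLE Filename ENGINE=InnoDB;

  # my.cnf: let InnoDB keep more of the tables cached in RAM
  [mysqld]
  innodb_buffer_pool_size = 32G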
Lloyd Brown
Systems Administrator
Fulton Supercomputing Lab
Brigham Young University
oticeably faster. I won't know for sure for a
couple more weeks, when our next big batch of Fulls get started, but so
far it looks promising.
Thanks for all the suggestions. It certainly pointed me in the right
direction. I couldn't have managed this without the community's help.
On 04/19/2014 09:10 PM, Alan Brown wrote:
> On 18/04/14 23:49, Lloyd Brown wrote:
>>> Are the tables in MyISAM or InnoDB format?
>> At the moment, MyISAM. I'd wondered if I could safely convert to
>> InnoDB, since that's generally going to have better locking
On 04/18/2014 04:44 PM, Martin Simmons wrote:
> The second query is related to pruning (probably autoprune unless you are
> using the prune command on the console). This is normal while jobs are
> running.
Autoprune. Definitely autoprune.
> Are the tables in MyISAM or InnoDB format?
At the moment, MyISAM. I'd wondered if I could safely convert to
InnoDB, since that's generally going to have better locking
at config
information to share with you, etc.
For reference, this is on a RHEL 6.4 server with a couple of 8-core Ivy
Bridge EP processors, about 128 GB of RAM, and about 75 TB of disk.
Bacula version 5.2.13.
Thanks,
--
Lloyd Brown
Systems Administrator
ght approach to take here? Am I missing
something very obvious?
Thanks,
--
Lloyd Brown
Systems Administrator
Fulton Supercomputing Lab
Brigham Young University
http://marylou.byu.edu
bacula-sd.conf:
Storage {
  Name = backup-sd
  SDPort = 9103                  # SD's listening port (not the Director's)