l filter
down in a few days.
Regards,
Jim Barber
Senior Systems Administrator
Primary Health Care Limited
Innovations & Technology Group
--
es.d/80-drivers.rules
SUBSYSTEM=="scsi", ENV{DEVTYPE}=="scsi_device", TEST!="[module/sg]", \
RUN+="/sbin/modprobe -b sg"
You'd probably put that rule into your own local file underneath the
/etc/udev/rules.d/ directory.
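For example (the filename below is just a placeholder, use whatever fits your
naming scheme), a local rules file could contain nothing but that one rule:

  # /etc/udev/rules.d/99-local-sg.rules (hypothetical name)
  # Load the sg driver for any SCSI device if the module isn't already loaded.
  SUBSYSTEM=="scsi", ENV{DEVTYPE}=="scsi_device", TEST!="[module/sg]", \
      RUN+="/sbin/modprobe -b sg"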
Failing that, m
ol
WHERE
Job.Level = 'F'
and Job.Type = 'B'
and Job.JobStatus = 'T'
and Pool.Name = 'FullPool'
and Job.PoolId = Pool.PoolId
GROUP BY
Job.Name
ORDER BY
MAX(Job.JobId);
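The SELECT/FROM part of the query got cut off above; for reference, the whole
thing presumably looks something like this (a reconstruction, so check it
against your own configuration):

  SELECT MAX(Job.JobId)
  FROM Job, Pool
  WHERE Job.Level = 'F'
    AND Job.Type = 'B'
    AND Job.JobStatus = 'T'
    AND Pool.Name = 'FullPool'
    AND Job.PoolId = Pool.PoolId
  GROUP BY Job.Name
  ORDER BY MAX(Job.JobId);

That is, one row per job name, each holding the JobId of the most recent
successful full backup in the FullPool pool.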
Regards,
--
Jim Barber
DDI
nly change to my original configuration files I needed to
make was to the Catalog {} section, setting the correct
database parameters (a sketch is shown after this list).
- Start up the bacula director and storage daemon.
- Test.
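For reference, the Catalog section in bacula-dir.conf is the bit in question;
a minimal sketch with placeholder values (yours will differ) looks like:

  Catalog {
    Name = MyCatalog
    dbname = bacula
    dbuser = bacula
    dbpassword = "changeme"
    DB Address = localhost
  }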
I hope this is helpful to someone out there.
Obviously you should probably test this on a virtual machi
On 18/02/2011 7:28 PM, Dan Langille wrote:
> On 2/17/2011 8:49 PM, Jim Barber wrote:
>> On 17/02/2011 6:54 PM, Torsten Maus wrote:
>>>
>>> My idea is that I copy the LAST Full Backups of ALL Clients to Tape.
>>> Hence, since I am not familiar with the SQL co
'B' and Job.JobStatus = 'T' GROUP BY Job.Name ORDER BY Job.JobId;
Regards,
--
Jim Barber
DDI Health
--
read from.
When a VirtualFull backup starts, it generates a list of volumes that it needs
to read from.
It will then choose a volume not in the read list to be the destination.
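As long as the pool has more than one usable volume, this works even when the
pool's "Next Pool" points back at itself. A hypothetical pool set up that way
(names are placeholders) might look like:

  Pool {
    Name = FullPool
    Pool Type = Backup
    Storage = TL2000
    # VirtualFull writes its consolidated job to the 'Next Pool'; pointing it
    # back at the same pool is fine because a volume on the read list won't be
    # picked as the destination.
    Next Pool = FullPool
  }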
Regards,
--
Jim Barber
DDI Health
rotation.
There were a few tricks related to them as well.
Mainly to do with the job not being allowed to run a SQL query to
gather the list of latest full backups at the time it is scheduled
as opposed to when it can run.
I solved this with an admin job to kick off the Cop
Pool.PoolId GROUP BY Job.Name ORDER BY Job.JobId;"
Allow Duplicate Jobs = yes
Allow Higher Duplicates = no
}
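The Admin job itself isn't shown in this excerpt; a rough sketch of one way to
set one up so that it simply kicks off the Copy job via bconsole (the job name,
JobDefs, schedule, and bconsole path below are all placeholders, not taken from
the original configuration):

  Job {
    Name = "KickOffOffsiteCopy"
    Type = Admin
    JobDefs = "DefaultJob"        # supplies the mandatory boilerplate directives
    Schedule = "AfterVirtualFull"
    RunScript {
      RunsWhen = Before
      RunsOnClient = no
      # Assumes bconsole on this host is configured to talk to the director.
      Command = "/bin/sh -c 'echo run job=OffsiteBackup yes | /usr/sbin/bconsole'"
    }
  }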
The Admin job is the one I put in the schedule behind a Virtual Full backup job
that I run at the end of each week.
That way the "OffsiteBackup" job doesn't get a ch
Jim Barber wrote:
>
> Thanks Martin.
>
> I've compiled and installed version 3.1.6 from a git pull I did on 10th Dec.
> I'm not sure if this new version will crash or not.
> But I've manually attached a gdb session to it just in case it does.
>
> Than
I've compiled and installed version 3.1.6 from a git pull I did on 10th Dec.
I'm not sure if this new version will crash or not.
But I've manually attached a gdb session to it just in case it does.
Thanks.
--
Jim Barber
DDI Health
---
usr/bin
So /usr/sbin is in the PATH, and I'd imagine the program should be able to
find the traceback program.
Any ideas how I can get some useful information from the crash?
--
Jim Barber
DDI Health
--
J
s
as well to only store the data in the partition, rather than the whole
partition byte for byte.
Example from the manual:
Include {
  Options { signature=MD5; sparse=yes }
  File = /dev/hd6
}
Regards,
enerate the patch from that.
If it works in 3.0.3 then I guess it is likely to work in the git version as
well...
I'll test various scenarios over the next week or two as I get time and if all
works for me I'll re-submit to bacula-devel with my findings
Thanks Arno.
Is this better?
If so I'll clean out all other text except the feature request and submit it to
bacula-devel
--
Item ?: Allow Schedule Resource to override 'Next Pool'
Date: 18 November 2009
Origin: Jim Barber. jim.bar...@ddihealth.c
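To illustrate the idea, the requested override might look something like the
following (made-up syntax for the sake of the example; this is not an existing
directive, which is the point of the feature request):

  Schedule {
    Name = "WeeklyCycle"
    # Hypothetical override: send this run's VirtualFull to a different pool
    Run = Level=VirtualFull Pool=FullPool NextPool=OffsitePool 1st sat at 23:05
  }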
ED tapes throughout my library.
Regards,
--
Jim Barber
DDI Health
--
e moment I have a really badly hacked up configuration to try and achieve
what I want by using each drive in the library independently.
It is complicated and messy, with lots of workarounds for various scenarios.
If the above patch is
nally, if I were to change the Storage definitions in my pools, would I need
to do any sort of updates to the volumes to reflect the change?
From what I can tell, drive information isn't stored against the media in the
database, only the pool, so I think I might be safe, but just want to dou
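One way to double-check what actually is recorded against a volume is to dump
its catalog record from bconsole (commands from memory, names are placeholders):

  llist volume=FullTape-0001

and after editing the Pool resource, the pool's catalog record can be refreshed
from the resource with:

  update pool=FullPool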
mented out for now since I'm not sure if I need to restrict this or not.
# Maximum Volumes = 22
Storage = TL2000
# Get tapes from scratch pool and return them to the scratch pool when they are purged.
Scratch Pool = Scratch
Recycle Pool = Scratch
# The location where the copies go for of
"Next Pool" pointing to itself.
But then, would I be able to override the "Next Pool" directive somehow for the
Copy job?
Since I'd want the off-site copies to be written to their own tapes, separate
from the other on-site backups.
Regards,
--
Jim Barber
DDI Health
Jim B
.
Can anyone see why the VirtualFull backup can't find the previous backup jobs?
The "FullBackup" job is referring to the "Default" pool where the incremental
backups have been going.
The first ever incremental backups to the "Default" pool created Full backups
as ex