On Tue, Dec 9, 2008 at 11:41 PM, John Drescher <[EMAIL PROTECTED]> wrote:
>> I already have a new volume with full backup, so now I want to purge
>> the old volume, then delete it and then delete the actual file
>> (750GB). Is that how it works?
>>
> purge volume=myoldnotusedvolume
> then
> delete volume=myoldnotusedvolume
> then
> take our
Here is an example:
jmd1 backups # ls -al
total
> BTW, the reason for this is purging a volume will not reduce the size
> of the volume on the disk until the next job is run that uses the new
> volume. Also this volume will have the old retention period. So to me
> it's best to delete the old and start with new volumes.
>
Slight correction:
purgi
> So do I do
> purge volume=myoldvolume
>
You probably want delete volume
and after it cleans up, delete the volume file from the disk.
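Putting it together, something like this should work (the volume name is
the one from your example; the on-disk path is a placeholder for wherever
your Archive Device points):

purge volume=myoldvolume
delete volume=myoldvolume

and then, from a shell on the SD host:

rm /path/to/archive/device/myoldvolume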
John
I was looking into making queries for a particular file in a Bacula Postgres
database, and came up with the attached functions. If someone really wanted
to do searches by other attributes, it would be simple to follow the same
pattern. With the functions and functional indexes, a query like
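For anyone who wants to try the same idea without the attachment, a rough
plain-SQL equivalent against the stock Bacula 2.x Postgres schema would be
something like this (the file name is a placeholder):

SELECT j.jobid, j.name AS jobname, j.starttime,
       p.path || fn.name AS file
  FROM file f
  JOIN filename fn ON fn.filenameid = f.filenameid
  JOIN path p ON p.pathid = f.pathid
  JOIN job j ON j.jobid = f.jobid
 WHERE lower(fn.name) = lower('somefile.txt');

with a matching functional index so the search does not have to scan the
whole filename table:

CREATE INDEX filename_lower_idx ON filename (lower(name));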
> prune
> The Prune command allows you to safely remove expired database
> records from Jobs and Volumes. This command works only on the
> Catalog database and does not affect data written to Volumes. In all
> cases, the Prune command applies a retention period to the specified
> records. You can P
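For reference, pruning is driven from bconsole roughly like this (client
and volume names are placeholders):

prune volume=Full-0001
prune jobs client=myhost-fd
prune files client=myhost-fd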
> Might be. Currently I try to restart everything every day before the
> backup is supposed to start, and I have the impression that it fails
> less often.
Today I restarted my server completely and the tape didn't get mounted
either.
I've tried (in bconsole):
update slots
mount (then I choos
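For comparison, the non-interactive form of those commands is roughly the
following (storage name, slot and drive number are placeholders):

update slots storage=Autochanger drive=0
mount storage=Autochanger slot=3 drive=0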
Jason Dixon wrote:
> Alas, I spoke too soon. The CatalogBackup job failed again last night,
> usual symptoms.
OK, we need to find out what the FD is doing. I would recommend:
truss -o filename -f -a -e -v all -w 2 -p
Is it possible to run the catalog backup during the day, by hand?
That way you c
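(The trailing -p wants the bacula-fd process id; on Solaris something like

truss -o /tmp/bacula-fd.truss -f -a -e -v all -w 2 -p `pgrep bacula-fd`

should do, assuming pgrep is available and a single bacula-fd process.)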
On Tue, 9 Dec 2008 21:02:02 +0100
Ralf Gross <[EMAIL PROTECTED]> wrote:
>
> > # grep Maximum /etc/bacula/bacula-*.conf
> > /etc/bacula/bacula-dir.conf: Maximum Concurrent Jobs = 3
> > /etc/bacula/bacula-dir.conf: Maximum Concurrent Jobs = 3
> > /etc/bacula/bacula-fd.conf: Maximum Concurrent J
Dear "Jeff Kalchik",
In message <[EMAIL PROTECTED]> you wrote:
>
> *NEVER* use software RAID if you can avoid it. Software RAID puts a
> pretty good hit right on your CPU.
Heh. So what good does a h/w RAID controller do me when I find myself
having the system in 90% I/O wait? On a file serve
Hi Ralf,
Doesn't your config do exactly what you want?
> > Storage {
> > Name = Neo4100
> > Address =
> > SDPort = 9103
> > Password = "wiped"
> > Device = Neo4100
> > Media Type = LTO4
> > Autochanger = yes
> > Heartbeat Interval = 5min
> > Maximum Concurrent Jobs = 3
> >
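For anyone comparing notes, that directive can appear in several resources,
and each occurrence acts as its own cap; an illustrative sketch (the values
are made up):

# bacula-dir.conf
Director {
  ...
  Maximum Concurrent Jobs = 3
}
Storage {
  ...
  Maximum Concurrent Jobs = 3   # caps jobs sent to this storage, i.e. the changer
}

# bacula-sd.conf
Storage {
  ...
  Maximum Concurrent Jobs = 3
}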
> So what do you think a reasonable cpu for bacula would be?
>
Depends on what level of performance you are looking for. My director
is a 2 processor 2GHz Opteron machine (circa 2003) with 4 GB of memory
and 18 or so x 250 GB SATA 1 drives in RAID 6. My main storage daemon
is on a second machine w
Normally I would say:
1) Performance generally (i.e. the CPU hit) is not a huge issue IMHO,
especially with today's dual and quad core CPUs... and of course RAM is
dirt cheap.
2) For me the big issue is I've lost software RAID sets from power
failures for a backup partition that's probably no bigg
> *NEVER* use software RAID if you can avoid it. Software RAID puts a
> pretty good hit right on your CPU.
>
In linux, I find this to be completely wrong. I have 15TB of software
raid 6 and the most load that it puts on the cpu is around 7% and
these are raid arrays that net over 200MB/s writes on single core
systems that are 3 or so years old.
Hi,
I hate to reply to my own mail, but does nobody have an idea about
what I'm trying to do? ;)
If not, I'll submit a feature request, because I think it is more
important to limit the number of concurrent jobs on the drives than on
the changer.
Ralf Gross wrote:
>
> Hi,
>
> I have a 3 dr
Hello,
I am currently running Ubuntu. The bacula director and sd are on this
box. I have bacula configured to spool to this box, and then it goes
directly to tape. I want to change how I am backing up. I would like to
keep some period of disk backups, and have them
moved/migrated/c
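What is being described here sounds like Bacula's migration feature; a very
rough sketch of the pieces involved (resource names are placeholders, and
this is nowhere near a complete job definition):

Pool {
  Name = DiskPool
  Pool Type = Backup
  Storage = FileStorage
  Next Pool = TapePool        # where migrated data ends up
  Migration Time = 7 days     # used with Selection Type = PoolTime
}

Job {
  Name = MigrateDiskToTape
  Type = Migrate
  Pool = DiskPool
  Selection Type = PoolTime
}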
I'm running version 2.4.2 on the server side. I'm running client version 2.0.2
on Windows with Volume Shadow Copy enabled. I've been noticing "Could not
stat" errors on a random basis on random computers.
1. When I check the file systems on these computers the files exist and the
permissions are nor
On Fri, Dec 05, 2008 at 10:26:46PM -0500, Jason Dixon wrote:
>
> One final report. Everything has been working fine since switching the
> local FD to use the physical address (bge0) rather than loopback.
> Sounds like a bug. If anyone needs further details, please let me know.
Alas, I spoke too soon. The CatalogBackup job failed again last
night, usual symptoms.
I just changed the pool names in the director configuration, deleted the old
pools, and now I can't assign the existing (labeled) tapes to the pools.
'list volumes' and 'list media' in bconsole show no results for any pool.
When trying 'label pool=IncrementalPool slots=4-6 barcodes' I get th
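If the media records are still in the catalog but point at the deleted
pools, repointing each tape by hand may be enough, something like this
(the volume name is a placeholder):

update volume=VOL001 pool=IncrementalPool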
Hello,
I have a question about the Scratch pool.
First, let's define the pools:
Pool {
  Name = PoolA
  Pool Type = Backup
  Recycle = yes
  RecyclePool = Scratch
  AutoPrune = yes
  Volume Retention = 3 weeks
  Volume Use Duration = 1 week
  Maximum Volume Bytes = 100G
  Label Format = File-
}
Pool {
  Nam
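For reference, the Scratch pool itself is usually just a minimal definition
like the following; recycled volumes from PoolA land there via the
RecyclePool directive above:

Pool {
  Name = Scratch
  Pool Type = Backup
}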
Hello List,
I have been using rev 2.4.3 for more than a month and I found that some
scheduled jobs disappear completely, as if they were not scheduled at
all. I was not able to find a rule for that, apart from the fact that it
normally happens during the weekend, when the largest number of jobs are
sched
Hi all,
For the first time I have to manage a Bacula server (I'm really a
newbie). The system uses a USB disk to store the backup volumes and the
catalog.
Last Friday I changed the USB disk because it was full. I did the following:
umount /dev/...
stop bacula-dir
stop bacula-sd
stop bacula-fd
mount /dev
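For what it's worth, the order that usually avoids trouble is to stop the
daemons before swapping the disk; a sketch with placeholder device, mount
point and Debian/Ubuntu-style init script names:

/etc/init.d/bacula-director stop
/etc/init.d/bacula-sd stop
umount /mnt/backup
# swap the USB disk, then:
mount /dev/sdb1 /mnt/backup
/etc/init.d/bacula-sd start
/etc/init.d/bacula-director start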
On Mon, 8 Dec 2008, Jesper Krogh wrote:
>> 06-Dec 03:21 bacula-sd JobId 16918: Please mount Volume "A082Y2" or label
>
> I haven't had time to dig more into it, but it seems like the storage
> daemon actually cached the slot number from the catalog, making
> update slots not effective even when it ran c
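For reference, when the catalog's idea of the slots is suspect, the scan
form of the command reads the labels from the tapes themselves instead of
trusting the barcodes (the storage name is a placeholder):

update slots scan storage=Autochanger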
Hi Julien,
these tapes were empty when I added the EOT parameters.
To explain:
I made a full backup to tape mba022, added the EOT and wrote an inc in a
different pool (for example tape mba056). This was working. The next inc
to tape mba056 failed.
Julien Cigar wrote:
> You can switch EOT model only when
You can switch EOT model only when starting from scratch with empty
tapes, so that's normal ...
Hi all,
I have a tape loader with 2 drives and a backup server with a huge spool
directory.
Now I thought it would be nice to schedule the different drives and the
times when they spool, so that network load stays at a maximum and the
time needed for the backup is as short as possible.
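For reference, the spooling knobs themselves live here (paths and sizes
are placeholders):

# bacula-sd.conf, per Device
Device {
  ...
  Spool Directory = /var/bacula/spool
  Maximum Spool Size = 200G
}

# bacula-dir.conf, per Job or JobDefs
Job {
  ...
  Spool Data = yes
}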
Hi all,
since I have added the parameters:
BSFatEOM = yes;
TWOEOF = yes;
OfflineonUnmount = yes;
the system reports the error: "The number of files mismatch!"
It is only ever off by one file. For example:
Volume=3 Catalog=4 or
Volume=350 Catalog=351
Have I missed something in the manual? Is the
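For reference, the documented spelling of those directives in the SD's
Device resource is as follows (the run-together forms evidently parse as
well, since the behaviour changed):

Device {
  ...
  BSF at EOM = yes
  Two EOF = yes
  Offline On Unmount = yes
}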
Fatal error: File daemon at "192.168.0.13:9102" rejected Hello command
Hi, does anyone have any news on this? If not: if a developer could contact
me, I'm willing to lend out a test environment to get this possible bug
fixed. Getting full support for Win2008 would be great, same with virtual
environ