Hi,
28.09.2007 08:00, Gaurav Pruthi wrote:
> Hi,
>
> I am using Bacula and taking backups to file volumes. The backup file
> has reached 600 GB. I created new volumes and fixed the limit of jobs
> per volume, but backups are still going to the first volume file. Is there
> any harm if my volu
Hi,
28.09.2007 06:51, Jeff K wrote:
>
> I have tried various changes to the File= definition, I really want to
> back up everything below username.
>
> Fileset Definition for this pc, hp_notebook:
>
> FileSet {
>   Name = "HP_Notebook Documents"
>   Include {
>     O
Hello,
28.09.2007 05:07, Ryan Novosielski wrote:
> Fredrik Gidensköld wrote:
>> I'm running Bacula under Debian and MySQL 5.0.x, and it worked just fine.
>>
>> Until I used "dselect" to upgrade Bacula from 2.0.X to 2.2.0 and I got
>> this pro
Hello,
27.09.2007 22:47,, Ross Boylan wrote::
> On Thu, 2007-09-27 at 09:19 +0200, Arno Lehmann wrote:
>> Hi,
>>
>> 27.09.2007 01:17, Ross Boylan wrote:
>>> I've been having really slow backups (13 hours) when I backup a large
>>> mail spool. I've attached a run report. There are about 1.4M fi
Hi Jeff,
That looks right. Do you have any more information in the Bacula logs beyond the
message on the console?
If not, you may want to turn on the debug option on the client (you'll have to
look that up in the documentation, as I don't remember the details) to get more
information about the problem.
One f
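(The message is cut off above.) For reference, the client debug option
mentioned can be enabled in two ways; both exist in the 2.x series, though the
level value and the client name here are arbitrary placeholders:

  # from bconsole, raise the debug level on a running client
  setdebug level=100 client=your-client-fd

  # or restart the FD in the foreground with debugging on
  bacula-fd -f -d 100 -c /etc/bacula/bacula-fd.conf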
Hi,
I am using Bacula and taking backups to file volumes. The backup file has
reached 600 GB. I created new volumes and fixed the limit of jobs per volume,
but backups are still going to the first volume file. Is there any harm if my
volume file grows more in size? The partition where this file res
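One thing worth checking here (an assumption on my part, since the
configuration isn't shown): limits set in the Pool resource only take effect
for volumes labeled after the change; already-labeled volumes keep the values
recorded in the catalog. A minimal sketch, with made-up names:

  Pool {
    Name = FilePool
    Pool Type = Backup
    Maximum Volume Jobs = 5       # mark the volume Used after 5 jobs
    Maximum Volume Bytes = 50G    # or cap each volume file by size
  }

For the volume that is already growing, update its catalog entry as well, e.g.
from bconsole ("update volume" walks through the parameters interactively):

  update volume=<your-volume-name>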
I have tried various changes to the File= definition, I really want to
back up everything below username.
Fileset Definition for this pc, hp_notebook:
FileSet {
  Name = "HP_Notebook Documents"
  Include {
    Options {
      Compression = GZIP
      Signature = MD5
      Exclude =
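The definition above is cut off. For comparison, a complete FileSet that backs
up everything below one user's directory might look like the sketch below; the
File path and the Windows-style location are assumptions, not taken from
Jeff's actual config:

  FileSet {
    Name = "HP_Notebook Documents"
    Include {
      Options {
        Compression = GZIP
        Signature = MD5
      }
      File = "C:/Documents and Settings/username"
    }
  }

Everything below the directory named in File = is included recursively; note
that Bacula accepts forward slashes in Windows paths.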
I was not able to respond to the corresponding thread, so here is the answer to
the problem with upgrading to Bacula 2.2.4 from the src.rpm file and the
catalog backup no longer working (although I originally assumed it to be a
problem with the RunBeforeJob directive).
In all my previous upgrad
Fredrik Gidensköld wrote:
> I'm running Bacula under Debian and MySQL 5.0.x, and it worked just fine.
>
> Until I used "dselect" to upgrade Bacula from 2.0.X to 2.2.0 and I got this
> problem.
>
> First I ran Bacula as user 'bacula' and got this er
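The error text is cut off above, so this is guesswork: a common failure after
a packaged upgrade is that the working directory or the database grants no
longer match the 'bacula' user. Two quick checks, with Debian default paths
assumed:

  # working directory must be writable by the daemon user
  ls -ld /var/lib/bacula
  chown -R bacula:bacula /var/lib/bacula

  # verify the catalog user can still connect to the database
  mysql -u bacula -p bacula -e 'select * from Version;'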
Once the file volume has been purged/pruned/recycled, the next job that
uses it theoretically opens it for write/overwrite from byte zero.
It's just a PITA waiting for that -- of course, if it were a real tape, it
would have to "mt erase" the volume, but for virtual tape files, a
feature to zer
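The sentence above is cut off, presumably asking for a feature to zero out
recycled file volumes. Until something like that exists, a manual workaround
is to delete and relabel the volume. This is only a sketch with made-up volume
and pool names, and it removes the volume's catalog records, so treat it with
care:

  # in bconsole: drop the volume from the catalog
  delete volume=FileVol0001 yes

  # on the storage host: remove the old volume file from disk
  rm /backup/bacula/FileVol0001

  # back in bconsole: label a fresh volume under the same name
  label volume=FileVol0001 pool=FilePool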
Hello,
I just upgraded the installed port from 2.2.4 to 2.2.4_1 and I am still
getting the same error message. Any other suggestions?
Thanks.
Dave.
----- Original Message -----
From: "Landon Fuller" <[EMAIL PROTECTED]>
To: "Dave" <[EMAIL PROTECTED]>
Cc:
Sent: Wednesday, September 26, 2007 1:08
Blastwave has 2.0.3 so far.
Brian A Seklecki (Mobile) wrote:
> www.sunfreeware.com should have packages for you -- if they don't, be
> sure to harass them until they do. That place has been responsible for
> keeping Solaris usable for the last 8 years :)
1) Be sure that the Messages resource on the SD is set up to report
errors back to the DIR. You should then get them via e-mail/console.
2) Be sure to run "btape test" (see the manual) to ensure that your
drive/media combination works correctly.
3) Run the SD in the foreground with debugging information (see the
command sketch below).
4) Try se
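For items 2 and 3, the usual invocations look like this; the device node and
config path are assumptions, so adjust them to your setup:

  # stop the storage daemon first, then exercise the drive
  btape -c /etc/bacula/bacula-sd.conf /dev/nst0
  # at the btape prompt, enter: test

  # run the SD in the foreground with a moderate debug level
  bacula-sd -f -d 100 -c /etc/bacula/bacula-sd.conf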
www.sunfreeware.com should have packages for you -- if they don't, be
sure to harass them until they do. That place has been responsible for
keeping Solaris usable for the last 8 years :)
~BAS
On Thu, 2007-07-19 at 11:13 +0200, Volker Lieder wrote:
> Hello,
> I have solved the issue,
> my installatio
On Thu, 2007-09-27 at 09:19 +0200, Arno Lehmann wrote:
> Hi,
>
> 27.09.2007 01:17,, Ross Boylan wrote::
> > I've been having really slow backups (13 hours) when I backup a large
> > mail spool. I've attached a run report. There are about 1.4M files
> > with a compressed size of 4G. I get much b
> On Mon, 24 Sep 2007 16:44:00 -0700, Elie Azar said:
>
> Hi,
>
> I'm kind of new to Bacula, and definitely to Vchanger, so please bear
> with me.
>
> I want to implement vchanger on a set of hard drives, with the idea that
> I will have a pool of, let's say, 10 drives available for backup
----- Original Message -----
From: "Steve Thompson" <[EMAIL PROTECTED]>
To:
Sent: Thursday, September 27, 2007 10:57 AM
Subject: [Bacula-users] RunAfterJob in bacula 2.2.4
> I recently upgraded Bacula from 2.0.3 to 2.2.4 on my director system
> (CentOS 4.5 i686). Since then, the RunAfterJob can n
[EMAIL PROTECTED] (Eric Böse-Wolf) writes:
> Chris Hoogendyk <[EMAIL PROTECTED]> writes:
>
>> Based on the discussion here, I created an account for myself on the
>> bacula docu wiki. Then I edited a page, and also browsed through
>> everything. It says something that I was able to browse throug
Chris Hoogendyk <[EMAIL PROTECTED]> writes:
> Based on the discussion here, I created an account for myself on the
> bacula docu wiki. Then I edited a page, and also browsed through
> everything. It says something that I was able to browse through
> everything -- what is there now is very limit
I recently upgraded Bacula from 2.0.3 to 2.2.4 on my director system
(CentOS 4.5 i686). Since then, the RunAfterJob command can no longer be
started successfully (not even once):
27-Sep 11:36 XXX-dir: AfterJob: run command "/etc/bacula/after_catalog_backup"
27-Sep 11:36 XXX-dir: AfterJob: Bad address
T
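The report above is cut off. One thing worth trying in 2.2.x (a sketch, not a
confirmed fix for this "Bad address" failure) is the RunScript form of the
directive, which the plain RunAfterJob is shorthand for; the job name here is
hypothetical, while the script path comes from the log above:

  Job {
    Name = "BackupCatalog"
    ...
    RunScript {
      RunsWhen = After
      RunsOnClient = no
      Command = "/etc/bacula/after_catalog_backup"
    }
  }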
Elie Azar wrote:
> Hi Josh,
>
> Thank you very much for your kind reply...
>
> So, what is the point of using vchanger then; we can get the same
> result using pools, I think.
There is a script called disk-changer packaged with Bacula that
does much the same thing and is what vchanger w
On Wed, 2007-09-26 at 21:26 -0500, Drew Bentley wrote:
> On 9/26/07, Ross Boylan <[EMAIL PROTECTED]> wrote:
> > I've been having really slow backups (13 hours) when I backup a large
> > mail spool. I've attached a run report. There are about 1.4M files
> > with a compressed size of 4G. I get muc
I've gotten almost 1 TB onto an LTO-2 tape. It was filled with daily incremental
jobs consisting mostly of highly compressible log files. From what I've gathered,
the Volume Bytes figure counts uncompressed bytes: since the compression is done
in the drive hardware, Bacula doesn't know the 'true' byte count. It would be nice if
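As a sanity check on the figure quoted in the thread below, and assuming the
LTO-3 native capacity of 400 GB:

  1,470,728,448,000 bytes ≈ 1.47 TB written by Bacula
  1.47 TB / 400 GB native ≈ 3.7:1 effective compression ratio

So a Volume Bytes value well above the tape's native capacity is entirely
plausible for log-like data.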
On 9/27/07, hgrapt <[EMAIL PROTECTED]> wrote:
>
> I'm using a Quantum Autoloader with LTO 3 tapes (400/800 GB) with
> HW-compression on.
>
> I'm just wondering if the output from Bacula is correct?
>
> "Volume Bytes: 1,470,728,448,000 (1.470 TB)"
>
I believe so. It said that Bacula wrote 1.47TB t
I'm using a Quantum Autoloader with LTO 3 tapes (400/800 GB) with
HW-compression on.
I'm just wondering if the output from Bacula is correct?
"Volume Bytes: 1,470,728,448,000 (1.470 TB)"
It's still writing
Thank you
Hello everybody,
at the moment there is no BartPE / PEBuilder plugin for Bacula in the
official release, so I wrote a small HOWTO on creating your own and put it
on the wiki under HOWTOs.
Maybe someone could take a look at it and correct my poor English :-)
Yours sincerely,
Eric
Hi,
27.09.2007 13:23, Mateus Interciso wrote:
> Hello, I'll be formatting the server that stores the director and main
> storage daemons. What kind of backups should I make so that I don't lose
> any data? And how do I restore it after the formatting is complete?
This is mainly a disaster recove
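The reply is truncated here. For what it's worth, the pieces you generally
want to save before reinstalling a director/storage host are the catalog dump
and the configuration; the paths below are common defaults, not necessarily
Mateus's:

  # dump the catalog (script shipped with Bacula; args: database name, user)
  /etc/bacula/make_catalog_backup bacula bacula

  # keep the daemon configs and any bootstrap files
  tar czf bacula-configs.tgz /etc/bacula /var/lib/bacula/*.bsr

After reinstalling, restore the configs, reload the catalog dump into the
database, and the existing volumes can be read again.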
I'm researching LTO-4 autoloaders that are known to work well with
Bacula. The HP StorageWorks MSL2024 and MSL4048 work OK, as I've found
in the list archives.
But I didn't find anything about the new LTO-4 Dell autoloaders, the TL2000 and
TL4000. Does anyone use this hardware with Bacula? Does it work o
Hello, I'll be formatting the server that stores the director and main
storage daemons. What kind of backups should I make so that I don't lose
any data? And how do I restore it after the formatting is complete?
Thanks a lot.
Mateus
---
On Wed, 26 Sep 2007 18:40:47 +0200 Eric Bollengier <[EMAIL PROTECTED]> wrote:
> I'm currently working on porting brestore to a web/ajax interface. I have
> good results, and I think I will provide it in one or two months.
That would be great, I'll keep watching the list for an announcement then!
---
Hello,
27.09.2007 09:53, Emil Noether wrote:
> Hi,
> I am going to upgrade from 1.38 to 2.2.4. I read the release notes and
> mailing lists, but I have a question: is it possible to upgrade only the
> director, catalog and storage?
That *should* work, but nobody will promise that.
> I have a lot of
Hi,
I am going to upgrade from 1.38 to 2.2.4. I read the release notes and mailing
lists, but I have a question: is it possible to upgrade only the director,
catalog and storage? I have a lot of clients and I don't want to upgrade them.
I think that I don't use any special functions of the file daemon except Ru
Hi,
27.09.2007 01:17, Ross Boylan wrote:
> I've been having really slow backups (13 hours) when I backup a large
> mail spool. I've attached a run report. There are about 1.4M files
> with a compressed size of 4G. I get much better throughput (e.g.,
> 2,000KB/s vs 86KB/s for this job!) with o
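Arno's reply is cut off above. For jobs with a very large number of small
files, the per-file catalog inserts often dominate the run time; one directive
worth checking (an assumption here, since the job definition isn't shown) is
attribute spooling:

  Job {
    ...
    Spool Attributes = yes   # batch file-attribute inserts into the catalog
                             # at the end of the job instead of one-by-one
  }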