On Fri, 2007-10-05 at 18:57 -0400, Steve Thompson wrote:
> On Wed, 26 Sep 2007, Ross Boylan wrote:
>
> > I've been having really slow backups (13 hours) when I backup a large
> > mail spool. I've attached a run report. There are about 1.4M files
> > with a compressed size of 4G. I get much bett
On Wed, 26 Sep 2007, Ross Boylan wrote:
> I've been having really slow backups (13 hours) when I backup a large
> mail spool. I've attached a run report. There are about 1.4M files
> with a compressed size of 4G. I get much better throughput (e.g.,
> 2,000KB/s vs 86KB/s for this job!) with othe
Here are the results after moving the postgres database to another disk:
Initial jobs were like the ones at the end of my earlier report,
involving the directories with about 4,000 files.
93 seconds first try (277 KB/s)
20 seconds second try (1,679 KB/s)
Then I switched to the one I used at the beginning of
Hi,
06.09.2007 09:22, Ruben Lopez wrote:
>> Hi,
>>
>> Is there any way of knowing the progress of one backup?
>
> Yes... try 'sta sd=' and 'sta client=' in a console.
>
>> I mean something
>> like "copied 27 of 145 files" or "copied 50 of 145 GB"
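For context: 'sta' is the bconsole abbreviation of 'status'. A minimal console sketch (the storage and client names here are invented; substitute your own resource names):

```
*sta sd=File-sd
*sta client=mailserver-fd
```

While a job is running, the client status typically reports the number of files and bytes examined so far, which is the closest Bacula of this era gets to a "copied X of Y" progress indicator.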
Hi list,
Implemented a workaround successfully. Just in time, with 10
minutes to spare! Thanks for your time. Hope to give more
feedback on Monday.
gr,
Olaf
> Huh. I've written some scripts that do some slightly funky stuff to get
> around the "blocking" effect of doing an "umount" from bconsole.
>
> So I wonder: Will the "release" command unload the tape, put it away and all
> that jazz?
>
It does that for me, well at least from version 2.0.X and ab
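To make the distinction concrete, a hedged sketch (the storage name is invented): `umount` takes the device offline and blocks further use until a `mount` is issued, whereas `release` asks the SD to unload the current volume once the drive is idle and leaves the device available for the next job.

```
*release storage=LTO3-changer
```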
> Thanks for the reply. Are you managing those files by hand (vi) or
> in some other fashion?
I use nano and a lot of copying from my previous configs. For the most
part I only add a few clients at one time so this usually is no big
deal.
> I have a couple of other admins that will be
> working
Hi all,
I was thinking about the best way to keep a backup of my director
configuration ready on a client machine, and decided that it would be
the easiest if I scheduled a nightly restore job for the catalog and
the config files, bootstraps, etc. onto that client. In other words,
the files would
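A sketch of what such a scheduled restore Job might look like (all names here are placeholders, and note that a scheduled Restore job without a bootstrap file normally restores from the most recent backup; check the manual for your version):

```conf
Job {
  Name = "ConfigToClient"
  Type = Restore
  Client = remote-client-fd
  FileSet = "Catalog"              # whatever FileSet the backup job used
  Storage = File
  Pool = Default
  Messages = Standard
  Where = /var/bacula/dr-copy      # restore into a staging directory
  Schedule = "NightlyAfterCatalog"
}
```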
Hello Everyone.
I'm using Tray Monitor on a server that runs Director at my site. The Tray
Monitor does not correctly report the Scheduled Jobs, Running Jobs or
Terminated Jobs in the DIR section. The FD and SD sections show accurate
information.
I'm hoping that I've done something stupid, but
On Thursday 04 October 2007 11:27:04 am John Drescher wrote:
> Use the release command instead of unmount
>
> John
Huh. I've written some scripts that do some slightly funky stuff to get
around the "blocking" effect of doing an "umount" from bconsole.
So I wonder: Will the "release" command un
Hi list,
I have the following issue:
SYSTEM
Running Debian Etch on a server called "backupmaster"
with Bacula 2.0. I have several Windows servers
running Bacula-fd too. The backupmaster has a schedule
for all servers to backup their data with jobs to a
local disk on the backupmaster server. I use
Hi Damian,
I'm new to Bacula but was considering a similar setup. I think you are
misunderstanding the meaning of FullPool:
"FullPool=Full
specifies to use the Pool named Full if the job is a full backup,
or is upgraded from another type to a full backup."
I've found this here, btw:
http://
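The quoted FullPool override lives in the Schedule's Run directives, alongside its DiffPool and IncPool siblings. A sketch with example pool names (per the quote, FullPool also applies when a job is upgraded to a full backup):

```conf
Schedule {
  Name = "WeeklyCycle"
  Run = Level=Full FullPool=Full 1st sun at 23:05
  Run = Level=Differential DiffPool=Diff 2nd-5th sun at 23:05
  Run = Level=Incremental IncPool=Inc mon-sat at 23:05
}
```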
Hi, I recently upgraded to Bacula 2.2.2 on my Solaris 10 installation to
benefit from the Batch insert feature.
I installed the mysql thread safe client libraries and built bacula.
The output of the build says "Batch Insert Enabled : yes".
I don't see performance improvements, so I'm guessin if th
Hello !
I need some advice concerning schedules and definitions of pools.
My idea is to do as follows:
I want to have 4 tapes Monday-Thursday for differential backups and I
need 10 tapes for full backups on Fridays (2 tapes per weekend, up to 5
Fridays). I need 4 weeks of full backups of every
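One way to sketch that rotation is two pools plus a schedule. The retention values below are guesses chosen so the weekday tapes recycle weekly and the full tapes last roughly four weeks; tune them to your actual tape counts:

```conf
Pool {
  Name = Daily
  Pool Type = Backup
  Volume Retention = 6 days    # reused the following week
  Recycle = yes
  AutoPrune = yes
}
Pool {
  Name = Weekly
  Pool Type = Backup
  Volume Retention = 27 days   # roughly 4 weeks of fulls
  Recycle = yes
  AutoPrune = yes
}
Schedule {
  Name = "MonthCycle"
  Run = Level=Differential Pool=Daily mon-thu at 23:05
  Run = Level=Full Pool=Weekly fri at 23:05
}
```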
--- Rich <[EMAIL PROTECTED]> wrote:
> On 2007.10.05. 17:30, Foo Bar wrote:
> ...
> >
> > 'Remove Volume After' is of no use since I want to keep the volume,
> just
> > control its contents.
>
> actually, if you would limit each volume to a single job, it would
> nicely achieve the goal, i thin
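The suggestion above corresponds to a Pool roughly like the following (the retention value is only an example). Limiting each volume to one job means that when the catalog prunes the oldest job, that job's entire volume becomes recyclable, which approximates FIFO removal of whole backups on disk:

```conf
Pool {
  Name = DiskPerJob
  Pool Type = Backup
  Maximum Volume Jobs = 1      # one backup job per volume file
  Volume Retention = 14 days   # example; oldest volumes free up first
  Recycle = yes
  AutoPrune = yes
  Label Format = "Vol-"        # auto-label disk volumes
}
```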
Hi John,
Thanks for the reply. Are you managing those files by hand (vi) or
in some other fashion? I have a couple of other admins that will be
working on this as well and I'm going to try and put together a CGI
or PHP app to add / remove Clients, Pools and such. Just wondering
if you
Thanks Arno,
You're right, scripting is probably the best way to go here. I was
pretty tired last night, but now with a clear head I realize I really
only have 5 types of machines, meaning their configs will be identical
except for the hostname, which should make the process much easier.
Than
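The template approach above can be sketched as a small shell script. The file names, host names, and Client resource fields here are hypothetical examples, not anyone's actual setup:

```shell
#!/bin/sh
# Sketch: stamp out near-identical Client resources from one template.
# client.tmpl and the host list are invented for illustration.
cat > client.tmpl <<'EOF'
Client {
  Name = @HOST@-fd
  Address = @HOST@.example.org
  FDPort = 9102
  Catalog = MyCatalog
  Password = "@PASS@"
}
EOF

for host in web01 web02 db01; do
  # generate one random password per client
  pass=$(head -c 16 /dev/urandom | od -An -tx1 | tr -d ' \n')
  sed -e "s/@HOST@/$host/g" -e "s/@PASS@/$pass/g" \
      client.tmpl > "$host-fd.conf"
done

ls *-fd.conf   # lists the three generated config files
```

The generated fragments can then be pulled into bacula-dir.conf with an `@|"cat /path/*-fd.conf"` style include, or simply concatenated by hand.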
Hi,
I've noticed that with spooling enabled I see wrong status information about a
job in the 'Terminated Jobs' field.
*status client=-fd
Connecting to Client -fd at :9102
-fd Version: 2.2.4 (14 September 2007) x86_64-pc-linux-gnu debian 4.0
Daemon started 02-Okt-07 22:24, 5 Jo
On 2007.10.05. 17:30, Foo Bar wrote:
...
>>> What I would like to do is to keep for example 2 full backups, and as
>>> soon as the third is made the oldest one should be deleted from disk
>>> (not just the job/volume entries in the catalog).
>> it seems that you might be interested in feature reque
--- Rich <[EMAIL PROTECTED]> wrote:
> i don't know how feasible would this be with current bacula formats -
> such a storage system would be a huge benefit for hdd based systems, but
> might require serious changes.
Indeed, I'm backing up to disk for the time being, not tape.
> > What I would
There are ways to produce logs. You'd have to refer to the documentation
-- it is there (also in the list archives many times).
Marek Simon wrote:
> The problem is solved. John Drescher sent me an older version of the bacula
> client and it runs well, b
On 2007.10.05. 13:20, Foo Bar wrote:
> If I want to use a single volume per client, how do I delete old backups
> from it in a FIFO manner without using dates or size? Can Bacula seek back
> and resize a volume at all or does it need to recycle it?
i don't know how feasible would this be with curr
If I want to use a single volume per client, how do I delete old backups
from it in a FIFO manner without using dates or size? Can Bacula seek back
and resize a volume at all or does it need to recycle it?
What I would like to do is to keep for example 2 full backups, and as soon
as the third is m
Hello,
I am trying to restore my bacula catalog from some tapes inside an
autochanger. There are seven tapes in the changer and I want to restore
all the backup sessions to a clean catalog.
# bscan -c /etc/bacula/bacula-sd.conf -v -V
L20001\|L20002\|L20003\|L20004\|L20005\|L20006\|L20007 /dev/ns
Lucio Crusca wrote:
> Hello *,
>
> Bacula 2.0.3/Debian here, using DVD and full/month+diff/week+incr/day
> schedule. In order to always have the full backup on a single media, I need
> to set the volume status as "Full" before the next 1st monday of each month.
> Now I guess there could be at l
Hello *,
Bacula 2.0.3/Debian here, using DVD and full/month+diff/week+incr/day
schedule. In order to always have the full backup on a single media, I need
to set the volume status as "Full" before the next 1st monday of each month.
Now I guess there could be at least two ways to do that:
1. se
Hello,
I'm using Bacula 2.0.3 (Dir / SD / FD ) run on a Windows NT 4.0 Sp6 Fr
Server and the volumes (more than 50 GB) are on an Ethernet disk.
When I try to restore files from the Full Backup I get messages like
this:
05-Oct 09:59 444MW001-sd: Ready to read from volume "FQZ-0001" on device
Hi,
Arno Lehmann schrieb:
>> If so, where can I find these temporary files on
>> Windows and Linux systems? Is there a max. size for the temporary
>> storage or is it all done on-the-fly so no additional storage space is
>> needed?
>
> The FD does not use the gzip program but uses the gzip algorith