OS: Linux version 2.6.18-164.el5 (mockbu...@builder10.centos.org) (gcc
version 4.1.2 20080704 (Red Hat 4.1.2-46)) #1 SMP Thu Sep 3 03:28:30 EDT
2009
Version: bacula03-dir Version: 5.0.2 (28 April 2010)
x86_64-unknown-linux-gnu redhat
Problem Description:
The director is attempting to use tapes th
On 08/11/10 18:00, Markus Lanz wrote:
> Hi
>
> Is it possible to have the server on the internet and the client in the LAN?
> The client would be behind the firewall and not directly reachable by the
> server; only the client would be able to reach the server directly.
>
> I'm asking because there are some
Sorry, I missed the Suse part...
Thanks
--
Romer Ventura
On Aug 11, 2010, at 2:14 PM, frozensolid wrote:
I have my first backup job running on a Suse Enterprise 10.3 VM. I
started the backup at 1:30 am last night, and at noon it was still
running and had only made about a 7 GB backup fi
Hi
Is it possible to have the server on the internet and the client in the LAN?
The client would be behind the firewall and not directly reachable by the
server; only the client would be able to reach the server directly.
I'm asking because there are some backup programs where the server is
initiating the
On 11.08.2010 17:22, Bruno Friedmann wrote:
> On 08/11/2010 04:36 PM, Pierre Bernhardt wrote:
>> Hello,
>>
>> is it possible to connect bscan to a postgresql database? I tried to scan my
>> tapes
>> but bscan could not find the bacula.db, which tells me that bscan does not
>> try
>> to connect t
I had the same problem: refer to: http://support.citrix.com/article/
CTX124456
Or my post:
http://forums.citrix.com/thread.jspa?threadID=271590&tstart=0
Thanks
--
Romer Ventura
On Aug 11, 2010, at 2:14 PM, frozensolid wrote:
I have my first backup job running on a Suse Enterprise 10.3 VM.
I have my first backup job running on a Suse Enterprise 10.3 VM. I started the
backup at 1:30 am last night, and at noon it was still running and had only
made about a 7 GB backup file. The system's CPU load is 0.28 or less while the
backup is running, and I've yet to see it go above 2% cpu us
On Wed, 11 Aug 2010, John Drescher wrote:
>> Bacula 5.0.2. The documentation states that a ClientRunBeforeJob script
>> that returns a non-zero status causes the job to be cancelled. This is not
>> what appears to happen, however. Instead a fatal error is declared:
>
> Maybe the documentation shou
> Bacula 5.0.2. The documentation states that a ClientRunBeforeJob script
> that returns a non-zero status causes the job to be cancelled. This is not
> what appears to happen, however. Instead a fatal error is declared:
>
> 11-Aug 13:30 cbe-dir JobId 686: No prior Full backup Job record found.
> 1
Bacula 5.0.2. The documentation states that a ClientRunBeforeJob script
that returns a non-zero status causes the job to be cancelled. This is not
what appears to happen, however. Instead a fatal error is declared:
11-Aug 13:30 cbe-dir JobId 686: No prior Full backup Job record found.
11-Aug 13:
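For context, a sketch of how such a script can be declared in a Job resource; the job name, JobDefs and script path are placeholders, and the long RunScript form is shown because it makes the failure behaviour explicit:
  Job {
    Name = "example-job"          # placeholder; other Job directives omitted
    JobDefs = "DefaultJob"        # assumes a JobDefs resource with that name exists
    RunScript {
      RunsWhen = Before
      RunsOnClient = yes
      FailJobOnError = yes        # default: a non-zero exit terminates the job
      Command = "/usr/local/bin/pre-backup.sh"   # placeholder script
    }
    # Shorthand equivalent: ClientRunBeforeJob = "/usr/local/bin/pre-backup.sh"
  }
Whether that termination is reported as "cancelled" or as a fatal error is exactly the wording question raised above.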
Hi Erik,
Why do you have Maximum Volumes = 9? That will tell Bacula to create at most
9 volumes, but I think you already have more (29?).
I suggest you remove that directive (or set it much higher) if you want Bacula
to label more volumes with the Python script.
__Martin
> On Wed, 11 Aug 2010 13
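For reference, a minimal sketch of a Pool resource in bacula-dir.conf with the Maximum Volumes limit raised; the name and values are placeholders, not the original poster's configuration:
  Pool {
    Name = "File-Pool"            # placeholder
    Pool Type = Backup
    Maximum Volumes = 100         # raise this, or omit the directive for no limit
    Recycle = yes
    AutoPrune = yes
  }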
> Thank you for your suggestion (and sorry for not replying to the
> off-list mail sooner - forgot about it); with PKI off:
>
>
> em0   in    5.189 MB/s     5.206 MB/s    16.899 GB
>       out   148.275 KB/s   151.429 KB/s  508.40
On 08/11/10 11:34, Hugo Silva wrote:
> Christian Gaul wrote:
>> On 11.08.2010 16:49, Hugo Silva wrote:
>>> Thomas Mueller wrote:
>>>
On Tue, 10 Aug 2010 15:13:07 +0100, Hugo Silva wrote:
> Hello,
>
> I'm backing up a server in Germany from a director in The Netherlan
>
> I had to disable the Maximum Network Buffer Size in the meantime;
> coincidence or not, the director started throwing out "unknown errors"
> while connecting to storage, so this test is run with default buffer
> sizes (which shouldn't be a problem - I got 91-93% of the max link
> speed w
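For reference, the Maximum Network Buffer Size directive mentioned above is set in the File daemon resource, and the same directive is also accepted in a Storage daemon Device resource; the names and the value below are placeholders only:
  # bacula-fd.conf on the client
  FileDaemon {
    Name = client-fd
    Maximum Network Buffer Size = 65536
  }
  # bacula-sd.conf on the storage daemon
  Device {
    Name = FileStorage            # placeholder; other Device directives omitted
    Maximum Network Buffer Size = 65536
  }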
Steve Polyack wrote:
> On 08/11/10 11:34, Hugo Silva wrote:
>> Christian Gaul wrote:
>>> On 11.08.2010 16:49, Hugo Silva wrote:
Thomas Mueller wrote:
> On Tue, 10 Aug 2010 15:13:07 +0100, Hugo Silva wrote:
>
>
>> Hello,
>>
>> I'm backing up a server in Germany
Christian Gaul wrote:
> On 11.08.2010 16:49, Hugo Silva wrote:
>> Thomas Mueller wrote:
>>
>>> On Tue, 10 Aug 2010 15:13:07 +0100, Hugo Silva wrote:
>>>
>>>
Hello,
I'm backing up a server in Germany from a director in The Netherlands.
Using bacula, I can't seem to get
On Wed, Aug 11, 2010 at 11:13 AM, Romer Ventura wrote:
> No, there was no data on the tape. All tapes were "label barcode" and placed
> in the scratch pool. The 142GB were written by that job. Like you said, it
> seems the same job is running twice. I'll wait and see..
You are correct. I answer
On 08/11/2010 04:36 PM, Pierre Bernhardt wrote:
> Hello,
>
> is it possible to connect bscan to a postgresql database? I tried to scan my
> tapes
> but bscan could not find the bacula.db, which tells me that bscan does not
> try
> to connect to the postgresql db I gave on the command line:
>
>
No, there was no data on the tape. All tapes were "label barcode" and
placed in the scratch pool. The 142GB were written by that job. Like
you said, it seems the same job is running twice. I'll wait and see..
Prior to starting the job:
Pool: Scratch
On 11.08.2010 16:49, Hugo Silva wrote:
> Thomas Mueller wrote:
>
>> On Tue, 10 Aug 2010 15:13:07 +0100, Hugo Silva wrote:
>>
>>
>>> Hello,
>>>
>>> I'm backing up a server in Germany from a director in The Netherlands.
>>> Using bacula, I can't seem to get past ~3000KB/s.
>>>
>>> Here's a
> Could it be the schedule..? Well, since I haven't got a full backup, I
> manually ran the full backup job. Does this mean that since the job
> scheduled for the 1st Sunday hadn't run, bacula scheduled and tried to run it
> as if it was the 1st Sunday?
I think so.
> Shouldn't bacula check for a ful
Thomas Mueller wrote:
> On Tue, 10 Aug 2010 15:13:07 +0100, Hugo Silva wrote:
>
>> Hello,
>>
>> I'm backing up a server in Germany from a director in The Netherlands.
>> Using bacula, I can't seem to get past ~3000KB/s.
>>
>> Here's an iperf result:
>> [ 3] local [fd-addr] port 16625 connected w
Could it be the schedule..? Well, since I haven't got a full backup, I
manually ran the full backup job. Does this mean that since the job
scheduled for the 1st Sunday hadn't run, bacula scheduled and tried to
run it as if it was the 1st Sunday?
Shouldn't bacula check for a full backup for the
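Most likely this refers to a schedule along these lines (modelled on the stock example configuration; the name and times are placeholders). When no prior Full backup exists in the catalog for a job, Bacula upgrades the next backup to a Full on its own, which matches the behaviour described above:
  Schedule {
    Name = "WeeklyCycle"                # placeholder
    Run = Full 1st sun at 23:05         # Full only on the first Sunday of the month
    Run = Incremental mon-sat at 23:05
  }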
Hello,
is it possible to connect bscan to a postgresql database? I tried to scan my tapes
but bscan could not find the bacula.db, which tells me that bscan does not try
to connect to the postgresql db I gave on the command line:
r...@backup:/etc/bacula# bscan -vv -d 99 -c bacula-sd.conf -n bacula
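For what it's worth, a sketch of a bscan invocation that passes the catalog connection parameters explicitly; the database name, user, password, host, volume name and device below are placeholders, and bscan must have been built with PostgreSQL support for this to reach a PostgreSQL catalog at all:
  # -s stores the scanned records in the catalog, -m updates the media records
  bscan -v -s -m -c /etc/bacula/bacula-sd.conf \
        -n bacula -u bacula -P 'secret' -h localhost \
        -V FullVolume-0001 /dev/nst0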
2010/8/11 Romer Ventura :
> Hello,
> I am running a full backup with a size of "estimate files=983,529
> bytes=306,761,539,624", the backup started normally and used up 1 full tape
> 160GB (uncompressed), then it switched tapes and continued to copy data to
> the second tape, but all of the sudde
> > 11-Aug 01:33 zztop-sd JobId 9: Warning: mount.c:217 Open device
> > "chg0_drive0" (/bacula-storage/chg0/drives/drive0) Volume
> > "bvol_ELAN-OnSite0_003" failed: ERR=dev.c:549 Could not open:
> > /bacula-storage/chg0/drives/drive0, ERR=No such file or directory
> >
> It appears that bacula ca
> Yes, i understand that, but when the tape CNH910 got full, bacula unloaded
> the tape and loaded CNH911, when loading the tape, bacula marked it as
> "used" before writing any data to it.
So you are saying there was 148GB of data on CNH911 before the job
started. Then bacula loaded the volume, m
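If it turns out the volumes really were flagged before any data was written, one way to inspect and (carefully) reset them from bconsole is sketched below; CNH911 is the volume named above, and resetting its status is only appropriate once you are sure nothing on it is still needed:
  *list volumes pool=Scratch
  *update volume=CNH911 volstatus=Append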
If you are using file storage, I think it would be OK to create a
volume per client. However, having 1000 volumes as opposed to 100 big
volumes would make a difference..
Thanks
--
Romer Ventura
On Aug 11, 2010, at 9:14 AM, rpere...@lavabit.com wrote:
hello
I'm a newbie bacula user.
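Following up on the volume-per-client suggestion above, here is a sketch of one way to arrange it with file storage, assuming one pool per client so that each client gets its own set of volumes; the name, size and retention are placeholders:
  Pool {
    Name = "client1-pool"               # one pool like this per client (placeholder)
    Pool Type = Backup
    Label Format = "client1-vol-"       # Bacula auto-labels new file volumes with this prefix
    Maximum Volume Bytes = 50G          # keeps individual volume files manageable
    Volume Retention = 30 days
    Recycle = yes
    AutoPrune = yes
  }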
Yes, I understand that, but when the tape CNH910 got full, bacula
unloaded it and loaded CNH911; when loading the tape, bacula
marked it as "used" before writing any data to it. It did the same
with CNH910. Here is the list of tapes before the job started:
Pool: Scratch
> 11-Aug 01:33 zztop-sd JobId 9: Warning: mount.c:217 Open device
> "chg0_drive0" (/bacula-storage/chg0/drives/drive0) Volume
> "bvol_ELAN-OnSite0_003" failed: ERR=dev.c:549 Could not open:
> /bacula-storage/chg0/drives/drive0, ERR=No such file or directory
>
It appears that bacula cannot access
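For context, a message like that typically comes from a Device resource of roughly this shape in bacula-sd.conf (a file-based autochanger drive is assumed here; the Archive Device path is the one from the log, everything else is a placeholder). The ERR=No such file or directory part simply means the SD could not open that path:
  Device {
    Name = chg0_drive0
    Media Type = File
    Archive Device = /bacula-storage/chg0/drives/drive0   # must exist and be accessible to the SD
    Autochanger = yes
    AutomaticMount = yes
    RemovableMedia = no
    Random Access = yes
  }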
2010/8/11 Romer Ventura :
> Hello,
> I am running a full backup with a size of "estimate files=983,529
> bytes=306,761,539,624", the backup started normally and used up 1 full tape
> 160GB (uncompressed), then it switched tapes and continued to copy data to
> the second tape, but all of the sudde
hello
I'm a newbie bacula user.
A simple first question.
Should I create one backup volume per backed-up client?
Or can Bacula keep many clients' backups in only one volume?
I am going to use file storage (not tape).
Thanks
roberto
---
Hello,
I am running a full backup with a size of "estimate files=983,529
bytes=306,761,539,624", the backup started normally and used up 1
full tape 160GB (uncompressed), then it switched tapes and continued
to copy data to the second tape, but all of the sudden it stops and
bacula sta
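For reference, figures in that "estimate files=... bytes=..." form are what bconsole's estimate command reports; a sketch of running it for a job (the job name is a placeholder):
  *estimate job=FullBackup level=Full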
Greetings..
I'm about ready to pull my hair out on this one :-/
My bacula is consistently marking volumes in Error and I'm not sure why.. I had
looked at this a few months ago and it seemed like an issue where some of the
special case code in volume selection was being skipped on File type devi
> For the last problem, type mount. If that does not work, type status storage,
> select the tape device, and post the output.
Device status:
Autochanger "Autochanger" with devices:
"Tape" (/dev/nst0)
Device "FileStorage" (/tmp) is not open.
Device "Tape" (/dev/nst0) open but no Bacula volume is curre
> On Wed, 11 Aug 2010 01:17:11 -0300, Daniel Bareiro said:
>
> On Wednesday, 11 August 2010 00:46:06 -0300,
> Daniel Bareiro wrote:
>
> > > It looks like you are running a 64-bit bacula-sd, so the libraries
> > > should be in /usr/lib64.
> > >
> > > The files you found are in /usr/lib, which
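A quick way to confirm which flavour of daemon is running and where it actually resolves its libraries (the binary path is illustrative):
  file /usr/sbin/bacula-sd      # reports whether the binary is a 32-bit or 64-bit ELF
  ldd  /usr/sbin/bacula-sd      # shows the library paths the dynamic linker actually resolves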