Any hint as to how to get past this restore error?
Thanks -Jason
28-May 20:31 backup-server-sd JobId 67400: End of file 338 on device "nst0" (/dev/nst0), Volume "AX7321L4"
28-May 20:32 backup-server JobId 67400: Error: attribs.c:423 File size of restored file /fs2/restore2/C
Hi,
28.05.2009 18:24, Olaf Zevenboom wrote:
> Dear List,
>
> I have a FileSet defined. This FileSet is used by a backup job and a
> verify job. However within this FileSet/Verify job there are some oddities.
> Job {
> Name = "Verify_Zim"
> Type = Verify
> Level = VolumeToCatalog
> Client
Hi,
28.05.2009 16:48, Brian Claveau wrote:
> Good Afternoon,
>
>
> I am having a problem with Bacula utilizing all the drives in my
> Quantum Scalar 50 Tape Library. I added a print to /var/log/messages
> to see what was being passed to the mtx-changer script and it only has
> re
Hello,
I have a client script set to run before a job such as:
RunScript {
RunsWhen = Before
FailJobOnError = Yes
Command = "/net/backup/scripts/db/pgsql-bacula.sh %l '%s'"
}
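On the %s question: whatever Bacula ends up substituting, the called script has to cope with the literal string *none* that the poster reports. A minimal sketch of such a pre-job script (the function name and output strings are placeholders, not the poster's pgsql-bacula.sh):

```shell
#!/bin/sh
# Hypothetical pre-backup hook: Bacula passes %l (job level) and
# %s (since time) as arguments; tolerate the literal "*none*" that
# the poster is seeing instead of a real timestamp.
dump_mode() {
    level="$1"
    since="$2"
    if [ "$since" = "*none*" ] || [ -z "$since" ]; then
        echo "full"                       # no usable prior-backup time
    else
        echo "incremental since $since"   # a real %s timestamp arrived
    fi
}
dump_mode "$1" "$2"
```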
However, the %s substitution made by Bacula is always "*none*", even though
several rounds of full, incremental
> That explains the problem. kitsvn001 is the server, and it's on the local
> network. kitvm001 is in the DMZ, so it doesn't "see" kitsvn001. I didn't know
> that the client needs to connect back to the server; isn't the connection
> initiated by kitsvn001, the server?
>
BTW, a solution for this may be an SSH
On Thu, May 28, 2009 at 09:54:44AM -0400, John Drescher wrote:
> On Thu, May 28, 2009 at 8:10 AM, Espen Tagestad wrote:
> > What is 2902, bad storage? The funny thing is that it's no problem to
> > back up the local host (kitsvn001-fd) through the same kitsvn001-sd.
>
> The problem is that kitvm00
On Thu, May 28, 2009 at 2:42 PM, Espen Tagestad wrote:
> On Thu, May 28, 2009 at 09:54:44AM -0400, John Drescher wrote:
>> On Thu, May 28, 2009 at 8:10 AM, Espen Tagestad wrote:
>> > What is 2902, bad storage? The funny thing is that it's no problem to
>> > back up the local host (kitsvn001-fd) th
I downloaded the LGPL/free SDK from Qt's web site to get the newest libraries
and just exported the system path and the QTLIB env var so Bacula 3.0.1 would
find the libraries.
http://www.qtsoftware.com/downloads
export QTLIB=/opt/qtsdk-2009.02/qt/lib
The following should be all on one line.
expor
Hi
I am using ext3,
and yes, I also have small files.
Roughly:
50% < 2M
40% < 10M
10% > 40M
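A size histogram like the one above can be measured directly; this sketch (not from the original mails, and assuming GNU find) buckets files under a directory with find + awk, using the same cut-offs the poster quotes:

```shell
#!/bin/sh
# Bucket the files under $1 by size: <2M, <10M, >40M (matching the
# poster's buckets; files between 10M and 40M fall outside them).
size_histogram() {
    dir="$1"
    find "$dir" -type f -printf '%s\n' | awk '
        $1 <  2*1024*1024  { small++; next }
        $1 < 10*1024*1024  { mid++;   next }
        $1 > 40*1024*1024  { big++ }
        END { printf "<2M:%d <10M:%d >40M:%d\n", small, mid, big }'
}
```

Run it as `size_histogram /path/to/fileset` against the directory the FileSet backs up.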
On Thu, May 28, 2009 at 8:39 AM, Uwe Schuerkamp wrote:
> On Thu, May 28, 2009 at 08:27:06AM -0400, Il Neofita wrote:
> > First of all thank you for the answer
> > No I do not use compression in my file set
> > O
How did you install MySQL? I would try a source installation and add
--with-mysql=/usr/local/mysql to the configure.
LM
On 5/28/09 11:30 AM, "RAPHAEL AME" wrote:
>
>
> hello
>
> I'm a French user of Bacula, for several years now...
>
> I try to compile Bacula 3.0.1 in order to install it on
Dear List,
I have a FileSet defined. This FileSet is used by a backup job and a
verify job. However within this FileSet/Verify job there are some oddities.
Job {
Name = "Verify_Zim"
Type = Verify
Level = VolumeToCatalog
Client = zim-fd
FileSet = "Full Set"
Messages = Standard
Storag
hello
I'm a French user of Bacula, for several years now...
I am trying to compile Bacula 3.0.1 in order to install it on my server (Fedora
Core 4). I use MySQL.
The "./configure --with-mysql" works well.
When I start the "make" I get an error while it compiles dird and several
other daemons:
/ro
Hi,
I want to compile Bacula with bat, but I cannot find the needed
'depkgs-qt' package to download. Is this package still available at SourceForge?
Can one of you provide me a direct download link? Thanks in advance.
regards
Markus
> Good Afternoon,
I am having a problem with Bacula utilizing all the drives in my
Quantum Scalar 50 Tape Library. I added a print to /var/log/messages to
see what was being passed to the mtx-changer script and it only has
references to the 1st drive, never seems to look or ask about either
Previously John Drescher said,
> On Thu, May 28, 2009 at 5:52 AM, C.DriK wrote:
>> > Hello,
>> >
>> > Thank you for your reply.
>> > My Bacula configuration is not perfect, and sometimes I have some problems
>> > (especially with the autochanger: it does not change the tape automatically).
>> > Once a
Hello.
I tested VirtualFull jobs a bit more and discovered this..
I made a job that has:
Run After Job = "/bin/echo %l"
When I ran the job as VirtualFull, %l got replaced with 'Full', not
VirtualFull.
Is it a bug or is it really supposed to be so?
--
Silver
---
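The substitution can be observed without a real job: a stand-in for the Run After Job target that just reports its first argument (this mirrors the /bin/echo %l test above; the wording of the message is made up):

```shell
#!/bin/sh
# Stand-in for a "Run After Job" target: report the level Bacula
# substituted for %l ("Full" rather than "VirtualFull", per the
# observation above).
report_level() {
    echo "job completed with level: $1"
}
report_level "${1:-unknown}"
```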
On Thu, May 28, 2009 at 8:10 AM, Espen Tagestad wrote:
> Hi,
>
> When trying to run a backup from an amd64 client, I get the following
> error:
>
> 28-mai 14:04 kitsvn001-dir JobId 10: No prior Full backup Job record
> found.
> 28-mai 14:04 kitsvn001-dir JobId 10: No prior or suitable Full backup
> f
James,
Thanks for your input, I figured that would have to be the route I will have
to take. When I get this done I'll paste my results to the list so that
others may benefit :-)
~Jayson
-Original Message-
From: James Harper [mailto:james.har...@bendigoit.com.au]
Sent: Thursday, May 28
First of all thank you for the answer
No I do not use compression in my file set
Options {
signature = MD5
}
I tried to upload with sftp
Uploading testfile to /tmp/terrierj
testfile 100% 83MB 41.4MB/s 00:02
There is only one problem:
I have th
Hi,
When trying to run a backup from an amd64 client, I get the following
error:
28-mai 14:04 kitsvn001-dir JobId 10: No prior Full backup Job record
found.
28-mai 14:04 kitsvn001-dir JobId 10: No prior or suitable Full backup
found in catalog. Doing FULL backup.
28-mai 14:04 kitsvn001-dir JobId 10:
On Thu, May 28, 2009 at 08:27:06AM -0400, Il Neofita wrote:
> First of all thank you for the answer
> No I do not use compression in my file set
> Options {
> signature = MD5
> }
> I tried to upload with sftp
>
> Uploading testfile to /tmp/terrierj
> testfile
Hello,
As part of an IT project for a backup system, I chose the Bacula solution.
After numerous tests and modifications I still cannot perform a single
backup, because I always receive the same message:
/
*label
Automatically selected Storage: LTO-2
Enter new Vo
On Thu, May 28, 2009 at 5:52 AM, C.DriK wrote:
> Hello,
>
> Thank you for your reply.
> My Bacula configuration is not perfect, and sometimes I have some problems
> (especially with the autochanger: it does not change the tape automatically).
> Once a problem occurs, the "mt -f ..." no longer works and I
Hello all,
I have a question about Bacula's design.
When you define a job, you have to specify in it where the data will be stored,
with the Storage keyword.
Imagine you do backup on disk: on the storage conf file, you define 3
virtual drives (VirtualTapeDrive_01,VirtualTapeDrive_02,VirtualTapeDr
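For context (not from the original mail): a disk-based "virtual drive" of the kind described is usually declared in bacula-sd.conf along these lines. The device name matches the poster's; every other directive and path here is an assumed sketch, not their configuration.

```
Device {
  Name = VirtualTapeDrive_01
  Media Type = File
  Archive Device = /backup      # directory that holds the file volumes
  LabelMedia = yes              # let Bacula label new file volumes
  Random Access = yes
  AutomaticMount = yes
  RemovableMedia = no
  AlwaysOpen = no
}
```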
Well, I decided to look directly in the catalog:
mysql> select JobId, JobStatus, PoolId, Level from Job where JobId=27110;
+-------+-----------+--------+-------+
| JobId | JobStatus | PoolId | Level |
+-------+-----------+--------+-------+
| 27110 | R         |     34 | F     |
+-------+-----------+--------+-------+
On Thu, May 28, 2009 at 06:01:06AM -0400, Il Neofita wrote:
> I connected the backup server and the client with a crossover cable at 1G
> however
> Files=16,251 Bytes=5,504,385,701 Bytes/sec=9,690,819 Errors=0
> What can I check?
> I am using SAS disks
>
> With ethtool I have
> Speed: 1000Mb/s
> t
Hi,
there are 5 GB of data and the average speed is 9 MB/s. That speed is
slow.
Try to copy a big file from server to client (or vice versa) and check
the copy speed with iptraf. I think the problem is not in Bacula
but in the distro.
Daniele
On 28 May 2009, at 12:01, I
On Thursday 28 May 2009 13:01:06 Il Neofita wrote:
> I connected the backup server and the client with a crossover cable at 1G
> however
> Files=16,251 Bytes=5,504,385,701 Bytes/sec=9,690,819 Errors=0
> What can I check?
> I am using SAS disks
>
> With ethtool I have
> Speed: 1000Mb/s
> therefore
Dear all bacula users,
I have a problem on an x86_64 Debian Lenny with the Bacula SD (version 2.4.4, 28
December 2008) installed via apt-get and an IBM LTO4 Ultrium4-H storage device.
I can't back up through my LTO, but backup to a file works fine.
When I try to launch a backup on it I get the f
I connected the backup server and the client with a crossover cable at 1G
however
Files=16,251 Bytes=5,504,385,701 Bytes/sec=9,690,819 Errors=0
What can I check?
I am using SAS disks
With ethtool I have
Speed: 1000Mb/s
therefore it is correct
--
Hi All
I have two jobs which are running in my DB catalog for one client, and no job
running in my bacula-dir for this client; I would like to correct the
information in my catalog. Because of this problem, my client can't be backed
up. And I don't succeed in cancelling the jobs with the cancel command in b
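When the cancel command cannot reach a job, one workaround often suggested on this list is to fix the catalog record by hand; 'A' is Bacula's JobStatus code for a canceled job. This is a hedged sketch, not a verified recipe: back up the catalog first, make sure no daemon still holds the job, and substitute your real JobId (12345 here is a placeholder).

```sql
-- Hedged sketch: mark a stuck ('R'unning) catalog job as canceled ('A').
-- Run against the bacula catalog database; 12345 is a placeholder JobId.
UPDATE Job SET JobStatus = 'A' WHERE JobId = 12345 AND JobStatus = 'R';
```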
On Thursday 28 May 2009 11:44:48 Personal Técnico wrote:
> Hi,
>
> Our Bacula server saves data on hard disks (RAID-5). Now, we would like
> to know if there is some configuration that allows creating a subfolder
> under the storage folder. The configuration is as follows:
>
> folder where backups sto
Hi,
Our Bacula server saves data on hard disks (RAID-5). Now, we would
like to know if there is some configuration that allows creating a
subfolder under the storage folder. The configuration is as follows:
folder where backups are stored: /backup
all data backup is stored into /backup, creating file
Hi All
This morning I have a strange problem with my Bacula: I have a running job
which should already have finished (seen with bconsole "status dir").
Well, I restarted the daemons bacula-dir and bacula-sd; then, still in
bconsole, I did a « status client=clientName »:
*status client=clientName
Connecting to Client