>
> Fatal error: Network error with FD during Backup: ERR=Keine Daten verfügbar (German for "No data available")
> Fatal error: No Job status returned from FD.
>
> ERROR in tls.c:83 TLS read/write failure.: ERR=error:1408F119:SSL
> routines:SSL3_GET_RECORD:decryption failed or bad record mac
>
You could try eliminating TLS a
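One way to rule TLS in or out is to disable it temporarily on both ends of the connection. A minimal sketch, assuming a standard setup (resource names, address, and password are placeholders; the matching Director resource in bacula-fd.conf needs the same change):

```conf
# bacula-dir.conf -- Client resource with TLS turned off for testing
Client {
  Name = myclient-fd             # placeholder name
  Address = myclient.example.com # placeholder address
  FDPort = 9102
  Catalog = MyCatalog
  Password = "secret"            # placeholder
  TLS Enable = no                # disable TLS to see if the error goes away
  TLS Require = no
}
```

If the backup then succeeds, the problem is in the TLS configuration (certificates, CA file, hostnames) rather than the network itself.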
On Thu, 2009-07-16 at 16:57 -0800, Bob Gamble wrote:
> Can someone explain to me how a migration or copy is generally
> supposed to work? In my mind, I would like to take a full volume,
> which has Full/Differential/Incremental backups in it and copy or
> migrate it to another storage server.
Gre
Can someone explain to me how a migration or copy is generally supposed to
work? In my mind, I would like to take a full volume, which has
Full/Differential/Incremental backups in it and copy or migrate it to
another storage server. I know the volume contains good backups and is
marked as "Full."
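In Bacula 3.x this is done with a Copy-type job whose Pool names the source pool, with `Selection Type`/`Selection Pattern` picking the jobs and the pool's `Next Pool` naming the destination. A sketch under those assumptions (all names are hypothetical):

```conf
# bacula-dir.conf -- sketch of a volume-based Copy job (names are placeholders)
Job {
  Name = "CopyFullVolume"
  Type = Copy
  Pool = SourcePool                  # jobs are selected from this pool
  Selection Type = Volume
  Selection Pattern = "FullVol-0001" # regex matching the source volume name
  Client = any-fd                    # required by the parser, not used for selection
  FileSet = "Full Set"
  Messages = Standard
}

Pool {
  Name = SourcePool
  Pool Type = Backup
  Next Pool = TargetPool             # copies are written here, typically on another SD
}
```

The copied jobs keep their original level (Full/Differential/Incremental). Migration is configured the same way with `Type = Migrate`, but it removes the source job records once the data has been moved.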
On Thu, Jul 16, 2009 at 11:41:53PM +0300, Slava Dubrovskiy wrote:
>
> Help me please. The backup always runs as a Full when it should be an
> Incremental (JobId 9 should be Incremental). The FileSet did not change.
> The previous job finished without error.
I tried to help on bac...@freenode but wasn't very successful.
Unfortun
Hi.
Help me please. The backup always runs as a Full when it should be an
Incremental (JobId 9 should be Incremental). The FileSet did not change.
The previous job finished without error.
Thanks for advice.
Client {
Name = ua22-fd
Address = 91.206.5.63
FDPort = 9102
Catalog = MyCatalog
Password = "w" # pas
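When an Incremental gets upgraded to a Full, the job report usually says why, e.g. "No prior Full backup Job record found" (the Director could not find a previous Full for the same Client/Job/FileSet combination, which also happens when the FileSet definition is edited). One way to check from bconsole (standard console commands; JobId 9 taken from the message above):

```
*llist jobid=9
*list jobs
*messages
```

Compare the Level and FileSet of JobId 9 with the preceding Full; if the FileSet record changed between the two jobs, Bacula forces a Full.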
On Thu, Jul 16, 2009 at 4:28 PM,
teksupptom wrote:
>
> Hello,
>
> We've been intermittently having an issue with backups failing due to the
> error "Spool block too big". It's happened exactly 10 times since 4/27/09. It
> generally happens during large backups (900GB+).
>
> The most recent error
Hello,
We've been intermittently having an issue with backups failing due to the error
"Spool block too big". It's happened exactly 10 times since 4/27/09. It
generally happens during large backups (900GB+).
The most recent error happened after the data had been spooled, and was being
written
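The "Spool block too big" message generally indicates a corrupted spool file. Whatever the underlying cause, the usual knobs are capping the spool sizes and giving the device a dedicated spool directory on reliable local disk. A sketch with illustrative paths and sizes (not taken from the poster's configuration):

```conf
# bacula-sd.conf -- Device resource sketch; paths and sizes are illustrative
Device {
  Name = FileStorage
  Media Type = File
  Archive Device = /backup
  Spool Directory = /var/spool/bacula  # dedicated spool area
  Maximum Spool Size = 300G            # total spool space for this device
  Maximum Job Spool Size = 100G        # per-job cap; data despools in chunks
}
```

With a per-job cap smaller than the total, a 900GB+ job despools several times instead of filling the spool area in one pass.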
I've just attempted to add a couple of Mac OSX ("Tiger" - yes, still...)
clients to our bacula system using the current 3.0.1 bacula-fd.
Backups on these clients have a couple of problems. First, there's the
fact that they both choke to death a few GB into the initial Full backup
with a series of
On Thu, Jul 16, 2009 at 08:48:03AM -0700, mradczuk wrote:
>
> I can't use multiple Device definitions because of restore problems.
> That is why I will use more bacula-sd processes on one machine. This
> gives me more jobs running at the same time. I'm worried about DB
> performance now.
How about multi
Kevin Keane-2 wrote:
>
> Marcin Radczuk wrote:
>> Hi,
>>
>> I've been trying to use bacula for more than a month. I have more than
>> 2000 hosts to back up. We want to do a backup every day: one Full backup
>> per month and Incrementals every other day. Now I have 10 bacula-sd and
>> one bacula-dir and I was
> On Thu, 16 Jul 2009 12:09:32 +0200, Marcin Radczuk said:
>
> Hi,
>
> I've been trying to use bacula for more than a month. I have more than 2000
> hosts to back up. We want to do a backup every day: one Full backup per
> month and Incrementals every other day. Now I have 10 bacula-sd and one
> bacula-dir
>
On Thu, Jul 16, 2009 at 05:18:24AM -0700, Kevin Keane wrote:
> Graham Keeling wrote:
> > On Thu, Jul 16, 2009 at 12:09:32PM +0200, Marcin Radczuk wrote:
> >
>> I've been trying to use bacula for more than a month. I have more than 2000
>> hosts to back up. We want to do a backup every day. One Full backup p
Graham Keeling wrote:
> On Thu, Jul 16, 2009 at 12:09:32PM +0200, Marcin Radczuk wrote:
>
>> I've been trying to use bacula for more than a month. I have more than 2000
>> hosts to back up. We want to do a backup every day: one Full backup per
>> month and Incrementals every other day. Now I have 10 bacula-sd
Marcin Radczuk wrote:
> Hi,
>
> I've been trying to use bacula for more than a month. I have more than 2000
> hosts to back up. We want to do a backup every day: one Full backup per
> month and Incrementals every other day. Now I have 10 bacula-sd and one
> bacula-dir and I have added only 300 hosts to the schedule. I
You also have to define a different media type for each pool, and therefore a
different storage definition pointing to an SD device that only one job can
use at a time. If you have followed the default install, I suspect you have
only one media type = File.
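The point above can be sketched as two file devices with distinct media types, each backed by its own Storage resource in bacula-dir.conf (all names and paths are illustrative, not from the poster's setup):

```conf
# bacula-sd.conf -- two file devices with distinct media types
Device {
  Name = FileDev1
  Media Type = File1          # distinct media type per device/pool
  Archive Device = /backup/dev1
  LabelMedia = yes
  Random Access = yes
  AutomaticMount = yes
}
Device {
  Name = FileDev2
  Media Type = File2
  Archive Device = /backup/dev2
  LabelMedia = yes
  Random Access = yes
  AutomaticMount = yes
}
```

Each corresponding Storage resource in bacula-dir.conf must then carry the matching `Media Type`, so the Director never asks one device to mount a volume written with the other's media type.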
mradczuk wrote:
> Thanks for the fast reply.
> I use Maximum Concu
2009/7/16 Eduardo Sieber :
> Oh My mistake...
> Thank you a lot ppl!
> I'll have to buy a better tape drive :)
>
Or more tapes. I recommend LTO drives although they are not cheap.
Remember to size the native capacity accordingly.
LTO1 - 100GB
LTO2 - 200GB
LTO3 - 400GB
LTO4 - 800GB
John
Eduardo Sieber scripsit:
> I have 2
> tape drives attached on this server (A DLT 40/80Gb and a sony SDX470V
> 40/102 GB).
>
> I've ran btape on both tapes and everything is fine on the test.
>
> So, I have a job, and the estimate command for this job says:
> 000 OK estimate files=94902 bytes=58,0
Oh My mistake...
Thank you a lot ppl!
I'll have to buy a better tape drive :)
On Thu, Jul 16, 2009 at 9:43 AM, Ralf Gross wrote:
> Eduardo Sieber schrieb:
> >
> > I have a bacula version backup-dir Version: 2.4.4 (28 December 2008)
> > i486-pc-linux-gnu debian 5.0, installed on an Ubuntu 9.1 serv
2009/7/16 Eduardo Sieber :
> Hello people!
>
> I have a bacula version backup-dir Version: 2.4.4 (28 December 2008)
> i486-pc-linux-gnu debian 5.0, installed on an Ubuntu 9.1 server. I have 2 tape
> drives attached to this server (a DLT 40/80GB and a Sony SDX470V 40/102 GB).
>
> I've ran btape on both
Eduardo Sieber schrieb:
>
> I have a bacula version backup-dir Version: 2.4.4 (28 December 2008)
> i486-pc-linux-gnu debian 5.0, installed on an Ubuntu 9.1 server. I have 2 tape
> drives attached to this server (a DLT 40/80GB and a Sony SDX470V 40/102 GB).
>
> I've ran btape on both tapes and everyt
Hello people!
I have a bacula version backup-dir Version: 2.4.4 (28 December 2008)
i486-pc-linux-gnu debian 5.0, installed on an Ubuntu 9.1 server. I have 2 tape
drives attached to this server (a DLT 40/80GB and a Sony SDX470V 40/102 GB).
I've ran btape on both tapes and everything is fine on the te
Thanks for the fast reply.
I use the Maximum Concurrent Jobs directive in bacula-dir.conf, storage and fd.
But the problem is that I have one pool per backed-up host:
Client {
Name = CLIENTNAME-fd
Address = CLIENTNAME.atm
FDPort = 9102
Catalog = MyCatalog
Password = "dupa"
File Retention
> Hi,
>
> I've been trying to use bacula for more than a month. I have more than
> 2000 hosts to back up. We want to do a backup every day: one Full backup
> per month and Incrementals every other day. Now I have 10 bacula-sd and
> one bacula-dir and I have added only 300 hosts to the schedule. I
> automated adding hosts to
>
On Thu, Jul 16, 2009 at 6:53 AM, Uwe Schuerkamp wrote:
> On Thu, Jul 16, 2009 at 11:45:53AM +0100, Graham Keeling wrote:
>> I had similar problems. I had to define 'Maximum Concurrent Jobs' in many
>> places to get it to work.
>>
>> Currently, I have it like this:
>> bacula-dir.conf:
>> Direc
On Thu, Jul 16, 2009 at 6:45 AM, Graham Keeling wrote:
> On Thu, Jul 16, 2009 at 12:09:32PM +0200, Marcin Radczuk wrote:
>> I've been trying to use bacula for more than a month. I have more than 2000
>> hosts to back up. We want to do a backup every day: one Full backup per
>> month and Incrementals every other day
On Thu, Jul 16, 2009 at 11:45:53AM +0100, Graham Keeling wrote:
> I had similar problems. I had to define 'Maximum Concurrent Jobs' in many
> places to get it to work.
>
> Currently, I have it like this:
> bacula-dir.conf:
> Director { Maximum Concurrent Jobs = 20; }
> Storage { Maximu
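The advice above can be summarized in one sketch: `Maximum Concurrent Jobs` must be raised in every resource on the job's path, or the smallest value wins. The fragments below show only that one directive (the value 20 is illustrative; each resource of course needs its other usual directives too):

```conf
# bacula-dir.conf
Director { Maximum Concurrent Jobs = 20 }
Storage  { Maximum Concurrent Jobs = 20 }
Client   { Maximum Concurrent Jobs = 20 }
Job      { Maximum Concurrent Jobs = 20 }

# bacula-sd.conf
Storage  { Maximum Concurrent Jobs = 20 }

# bacula-fd.conf (on each client)
FileDaemon { Maximum Concurrent Jobs = 20 }
```

A job's effective concurrency is the minimum across all these resources, which is why defining it "in many places" was needed.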
On Thu, Jul 16, 2009 at 6:46 AM, John Drescher wrote:
> On Thu, Jul 16, 2009 at 6:45 AM, Lukasz PUZON Brodowski wrote:
>>> On Thu, Jul 16, 2009 at 5:31 AM, Lukasz PUZON Brodowski
>>> wrote:
>>> >> In your output I am concerned about the following:
>>> >>
>>> >> > 16-Jul 04:03 serwer-news-fd JobId 1
> On Thu, Jul 16, 2009 at 5:31 AM, Lukasz PUZON Brodowski
> wrote:
> >> In your output I am concerned about the following:
> >>
> >> > 16-Jul 04:03 serwer-news-fd JobId 146: Fatal error: Authorization
> key
> >> > rejected by Storage daemon.
> >> > Please see http://www.bacula.org/rel-
> >> manual/
On Thu, Jul 16, 2009 at 6:45 AM, Lukasz PUZON Brodowski wrote:
>> On Thu, Jul 16, 2009 at 5:31 AM, Lukasz PUZON Brodowski
>> wrote:
>> >> In your output I am concerned about the following:
>> >>
>> >> > 16-Jul 04:03 serwer-news-fd JobId 146: Fatal error: Authorization
>> key
>> >> > rejected by Sto
On Thu, Jul 16, 2009 at 12:09:32PM +0200, Marcin Radczuk wrote:
> I've been trying to use bacula for more than a month. I have more than 2000
> hosts to back up. We want to do a backup every day: one Full backup per
> month and Incrementals every other day. Now I have 10 bacula-sd and one
> bacula-dir and I was
On Thu, Jul 16, 2009 at 5:31 AM, Lukasz PUZON Brodowski wrote:
>> In your output I am concerned about the following:
>>
>> > 16-Jul 04:03 serwer-news-fd JobId 146: Fatal error: Authorization key
>> > rejected by Storage daemon.
>> > Please see http://www.bacula.org/rel-
>> manual/faq.html#Authoriza
Hi,
I've been trying to use bacula for more than a month. I have more than 2000
hosts to back up. We want to do a backup every day: one Full backup per
month and Incrementals every other day. Now I have 10 bacula-sd and one
bacula-dir and I have added only 300 hosts to the schedule. I automated
adding hosts to
bacula-di
In your output I am concerned about the following:
> 16-Jul 04:03 serwer-news-fd JobId 146: Fatal error: Authorization key
> rejected by Storage daemon.
> Please see http://www.bacula.org/rel-manual/faq.html#AuthorizationErrors for
> help.
> 16-Jul 04:03 serwer-www-fd JobId 145: Fatal error: Autho
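The "Authorization key rejected by Storage daemon" error comes down to the SD password not matching between the Director's Storage resource and the SD's Director resource. A sketch of the two places that must agree (names, address, and secret are placeholders):

```conf
# bacula-dir.conf
Storage {
  Name = File
  Address = sd.example.com   # placeholder; must be reachable by the FD too
  SDPort = 9103
  Password = "sd-secret"     # must match the Password below
  Device = FileStorage
  Media Type = File
}

# bacula-sd.conf
Director {
  Name = mydir-dir           # must match the Director's own Name
  Password = "sd-secret"     # same secret as above
}
```

Under concurrent load this error can also appear when jobs queue long enough for their session keys to go stale, which is consistent with it showing up only when many jobs run at once.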
> -Original Message-
> From: John Drescher [mailto:dresche...@gmail.com]
> Sent: 16 July 2009 10:39
> To: Lukasz PUZON Brodowski
> CC: bacula-users@lists.sourceforge.net
> Subject: Re: [Bacula-users] waiting for appendable volume - why?
>
> On Thu, Jul 16, 2009 at 3:39 AM, Lukasz PUZ
On Thu, Jul 16, 2009 at 3:39 AM, Lukasz PUZON Brodowski wrote:
> Hi everybody. I have a problem with jobs that stop, "waiting for appendable
> volume". E.g. 2 jobs fail because of network problems, the other jobs are
> in the running state, except one that is waiting for an appendable volume.
> And all jobs ar
Hi everybody. I have a problem with jobs that stop, "waiting for appendable
volume". E.g. 2 jobs fail because of network problems, the other jobs are in
the running state, except one that is waiting for an appendable volume. And
all jobs are waiting. Why? I use autolabeled volumes (files), and most of the
time ever
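"Waiting for appendable volume" means no volume in the job's pool can currently be marked Append, even when autolabeling is on. For file volumes, the pool needs both a `Label Format` (so new volumes can be created) and limits that let old ones be recycled. A sketch with illustrative values (not the poster's actual pool):

```conf
# bacula-dir.conf -- Pool sketch for autolabeled file volumes
Pool {
  Name = FilePool
  Pool Type = Backup
  Label Format = "Vol-"        # autolabel new file volumes as Vol-0001, ...
  Maximum Volume Bytes = 50G   # roll to a new volume at this size
  Recycle = yes                # allow pruned volumes to be reused
  AutoPrune = yes
  Volume Retention = 30 days
}
```

The SD device must also have `LabelMedia = yes` for autolabeling to work. If new volumes still are not created, check whether the pool has hit `Maximum Volumes`, or whether the waiting job needs a device that another (stuck) job still holds.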