Yes, they are there for testing. The only schedule I am using at this point is
'loki-schedule' for that single client, which has full, diff, and incremental runs.
The backups themselves /are/ working and I can restore from them, and the config
passes the ./bacula-dir -t test without problems as well.
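For context, a Schedule resource covering all three levels looks roughly like the sketch below. The name 'loki-schedule' is taken from the message above; the run days and times are only assumptions:

Schedule {
  Name = "loki-schedule"
  Run = Full 1st sun at 23:05              # assumed timing
  Run = Differential 2nd-5th sun at 23:05
  Run = Incremental mon-sat at 23:05
}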
I cut and pasted four errors I received while trying to build Bacula; any ideas?:
crypto.c:1226:
error: cannot convert ‘unsigned char*’ to ‘EVP_PKEY_CTX*’ for argument
‘1’ to ‘int EVP_PKEY_decrypt(EVP_PKEY_CTX*, unsigned char*, size_t*,
const unsigned char*, size_t)’
make[1]: *** [crypto.lo] Erro
Steve,
You have incremental and differential schedules defined; however, you don't
use them in the incremental and differential job blocks. In fact, you don't
call out ANY schedule in either of those jobs. What's it defaulting to? I'd
think a "bacula restart" would error out when starting bacula-dir.
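For illustration, each Job resource normally names its schedule explicitly. A minimal sketch, where every resource name other than 'loki-schedule' is a placeholder:

Job {
  Name = "loki-incremental"        # placeholder
  Type = Backup
  Level = Incremental
  Client = loki-fd                 # placeholder
  FileSet = "Full Set"             # placeholder
  Schedule = "loki-schedule"       # the schedule called out explicitly
  Storage = File                   # placeholder
  Pool = Default                   # placeholder
  Messages = Standard
}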
> > System: Debian Squeeze (testing), Bacula V3.0.2. and MySQL v5.1.41
> >
> > At the moment I test some configurations before I want to use it on the
> > live system.
> > My question sounds pretty basic - but I couldn't find an answer in
> > several forums.
> >
> > I have configured the system to
Brian Debelius wrote:
> That would be 16M. Isn't the SD hard limited to 1M?
>
"The maximum size-in-bytes possible is 2,000,000"
From the Maximum Block Size directive in the SD configuration documentation.
Regards,
Richard
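For anyone following along, that directive lives in the tape Device resource of bacula-sd.conf. A sketch, reusing the device name and tape path that appear later in this thread and assuming an LTO-3 media type:

Device {
  Name = "Superloader-Drive"
  Media Type = LTO-3                  # assumption
  Archive Device = /dev/nst0
  Maximum Block Size = 1048576        # 1 MB; the documented ceiling is 2,000,000 bytes
  AutomaticMount = yes
  AlwaysOpen = yes
  RemovableMedia = yes
  Random Access = no
}

Note that tapes already written with a different block size will show up as incompatible, which appears to be what happened later in this thread.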
On Tue, Jan 5, 2010 at 4:13 PM, Karsten Schulze wrote:
> System: Debian Squeeze (testing), Bacula V3.0.2. and MySQL v5.1.41
>
> At the moment I test some configurations before I want to use it on the
> live system.
> My question sounds pretty basic - but I couldn't find an answer in
> several foru
On Tue, Jan 5, 2010 at 3:00 PM, Phil Stracchino wrote:
> Brian Debelius wrote:
>> I'm not seeing anywhere close to 60M/s ( < 30 ). I think I just fixed
>> that. I increased the block size to 1M, and that seemed to really
>> increase the throughput, in the test I just did. I will see tomorrow,
>
System: Debian Squeeze (testing), Bacula V3.0.2. and MySQL v5.1.41
At the moment I test some configurations before I want to use it on the
live system.
My question sounds pretty basic - but I couldn't find an answer in
several forums.
I have configured the system to write the backup to disk and
That would be 16M. Isn't the SD hard limited to 1M?
On 1/5/2010 3:00 PM, Phil Stracchino wrote:
> Brian Debelius wrote:
>
>> I'm not seeing anywhere close to 60M/s (< 30 ). I think I just fixed
>> that. I increased the block size to 1M, and that seemed to really
>> increase the throughput
I am trying to compile Bacula and followed the example of how to run configure.
I still received several error messages that I have no idea what they mean. The
output is lengthy, but maybe someone has had a similar problem in the past.
Thanks.
crypto.c: In function ‘ASN1_OCTET_STRING* openssl_c
Brian Debelius wrote:
> I'm not seeing anywhere close to 60M/s ( < 30 ). I think I just fixed
> that. I increased the block size to 1M, and that seemed to really
> increase the throughput, in the test I just did. I will see tomorrow,
> when it all runs.
Yes, if you aren't already, whenever w
...and if this post is still valid, this may be the maximum speed
(55M/s) given that the LTO-3 and disk are on the same SD (even though
tape compression is on.)
http://www.mail-archive.com/bacula-de...@lists.sourceforge.net/msg01246.html
[Kern]"...so I forget the formulas for calculating this,
Incompatible? That's not good. I guess I will have to purge the tape
and run the copy jobs again.
I just set the SD Maximum Block Size to 1 megabyte, and this seems to have
improved things a lot.
During a copy job, when I got a storage status from bconsole it said the
rate was 54,008,673 Bytes/sec
I'm having problems setting up a backup to a USB Drive.
The drive is formatted using FAT32 and auto-mounted using fstab.
The backup destination is /mnt/usb1/backups, owner is root:users and
mode is 777.
My SD is configured like this:
Device {
Name = usb-drive-1
Media Type = File
Device Type
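Your Device definition is cut off above, but a complete File-type device for that path usually looks roughly like the following; everything beyond the directives you posted is an assumption:

Device {
  Name = usb-drive-1
  Media Type = File
  Device Type = File
  Archive Device = /mnt/usb1/backups
  LabelMedia = yes                  # let Bacula label file volumes itself
  Random Access = yes
  AutomaticMount = yes
  RemovableMedia = no
  AlwaysOpen = no
}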
On Mon, 04 Jan 2010 13:38:47 -0500
Brian Debelius wrote:
> I am using batch inserts into MySQL. The database is on a different
> RAID1 volume.
>
> On 1/4/2010 1:21 PM, Richard Scobie wrote:
> > Brian Debelius wrote:
> >> Shameless bump. Does anyone have any insight into this?
> >>
> >> Thanks
I'm not seeing anywhere close to 60M/s ( < 30 ). I think I just fixed
that. I increased the block size to 1M, and that seemed to really
increase the throughput, in the test I just did. I will see tomorrow,
when it all runs.
You should not be seeing any errors.
On 1/5/2010 1:30 PM, Tino Sch
Hemant Shah wrote:
> Folks,
>
> Can someone explain the difference between the delete and purge commands?
>
> I read the manual and it seems that they both do the same thing.
No, they do not.
If you purge a volume, all the jobs on it and their associated metadata
are deleted, and the volume is
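A rough bconsole illustration of the difference, with a made-up volume name: purge strips the Job/File records from a volume regardless of retention periods (the volume stays in the catalog, marked Purged), whereas delete removes the catalog record itself:

* purge volume=Vol0001
* delete volume=Vol0001
* delete jobid=123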
Hi,
On Tue, Jan 05, 2010 at 10:16:28AM -0800, Hemant Shah wrote:
> Can someone explain the difference between the delete and purge commands?
>
> I read the manual and it seems that they both do the same thing.
As far as I understood the manual, the purge will just remove all data
(jobs, files)
On Tue, Jan 05, 2010 at 12:48:53PM -0500, John Drescher wrote:
> > It looks like btape is not happy.
> >
> > Error reading block: ERR=block.c:1008 Read zero bytes at 326:0 on device
> > "Superloader-Drive" (/dev/nst0).
> >
> > Are your tapes old (still good)? Did you clean the drive? Latest Firmw
Folks,
Can someone explain the difference between the delete and purge commands?
I read the manual and it seems that they both do the same thing.
Thanks.
Hemant Shah
E-mail: hj...@yahoo.com
Hrumph...sigh.
On 1/5/2010 12:40 PM, Phil Stracchino wrote:
> Brian Debelius wrote:
>
>> I want to see if having disk storage on one sd process, and tape storage
>> on another sd process, would increase throughput during copy jobs.
>>
> Actually, it'll decrease it rather drastically. Al
> Can I do the following:
>
> Set Job and File retention to 2 years in Client Option, and then in Pool
> option set the Volume retention for tape pool to 2 years and disk pool to 4
> months.
>
> When I prune/purge a Volume will bacula remove the associated file and job
> records from the databas
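As a sketch of where those directives live (resource names, addresses and passwords below are placeholders; only the retention values come from the question):

Client {
  Name = myclient-fd
  Address = myclient.example.com
  Password = "secret"
  Catalog = MyCatalog
  File Retention = 2 years
  Job Retention = 2 years
  AutoPrune = yes
}
Pool {
  Name = TapePool
  Pool Type = Backup
  Volume Retention = 2 years
  AutoPrune = yes
  Recycle = yes
}
Pool {
  Name = DiskPool
  Pool Type = Backup
  Volume Retention = 4 months
  AutoPrune = yes
  Recycle = yes
}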
On Tue, Jan 5, 2010 at 12:37 PM, Brian Debelius
wrote:
> It looks like btape is not happy.
>
> Error reading block: ERR=block.c:1008 Read zero bytes at 326:0 on device
> "Superloader-Drive" (/dev/nst0).
>
> Are your tapes old (still good)? Did you clean the drive? Latest Firmware?
>
I would add
--- On Tue, 1/5/10, Timo Neuvonen wrote:
> From: Timo Neuvonen
> Subject: Re: [Bacula-users] Help with retention period
> To: bacula-users@lists.sourceforge.net
> Date: Tuesday, January 5, 2010, 1:03 AM
> "Hemant Shah"
> wrote in message
> news:897496.89613...@web51606.mail.re2.yahoo.com..
Brian Debelius wrote:
> I want to see if having disk storage on one sd process, and tape storage
> on another sd process, would increase throughput during copy jobs.
Actually, it'll decrease it rather drastically. All the way to none.
You see, at the present time you cannot copy or migrate from
Doh! I guess I will find out in a minute or two.
On 1/5/2010 12:29 PM, John Drescher wrote:
> On Tue, Jan 5, 2010 at 12:15 PM, Brian Debelius
> wrote:
>
>> I want to see if having disk storage on one sd process, and tape storage
>> on another sd process, would increase throughput during cop
It looks like btape is not happy.
Error reading block: ERR=block.c:1008 Read zero bytes at 326:0 on device
"Superloader-Drive" (/dev/nst0).
Are your tapes old (still good)? Did you clean the drive? Latest Firmware?
On 1/5/2010 9:06 AM, Tino Schwarze wrote:
> Hi there,
>
> I'm struggling with my
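In case it helps with the diagnosis, the usual way to exercise the drive is to stop the storage daemon and run btape against the same device, then run its built-in test; the config path and init script location below are assumptions for a typical install:

# /etc/init.d/bacula-sd stop
# btape -c /etc/bacula/bacula-sd.conf /dev/nst0
*test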
On Tue, Jan 5, 2010 at 12:15 PM, Brian Debelius
wrote:
> I want to see if having disk storage on one sd process, and tape storage
> on another sd process, would increase throughput during copy jobs.
>
I thought migration and copy jobs only work for a single SD. This may
have changed since I have
I want to see if having disk storage on one sd process, and tape storage
on another sd process, would increase throughput during copy jobs.
On 1/5/2010 11:48 AM, Tino Schwarze wrote:
> On Tue, Jan 05, 2010 at 11:23:01AM -0500, Brian Debelius wrote:
>
>
>> Can I have one director and two SD's
>
> Hey,
>
> Can I have one director and two SD's on the same box?
>
I may be wrong, but as I understand it, as long as they (multiple SDs) have
differing names (e.g. hostname-sd1 and hostname-sd2), differing ports, and access
different logical devices, I can't see why not.
Although, am not sure w
On Tue, Jan 05, 2010 at 11:23:01AM -0500, Brian Debelius wrote:
> Can I have one director and two SD's on the same box?
Why do you need two SDs?
Tino.
--
"What we nourish flourishes." - "Was wir nähren erblüht."
www.lichtkreis-chemnitz.de
www.tisc.de
On Tue, Jan 5, 2010 at 11:23 AM, Brian Debelius
wrote:
> Hey,
>
> Can I have one director and two SD's on the same box?
>
I believe that will be fine as long as you give them different ports.
You will have to create your own init script to start the other sd
with a different port and config.
Joh
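A rough sketch of what the second instance needs; the port number, names, paths and password below are only examples. Give the second SD its own config with a different SDPort and working directory, and add a matching Storage resource to bacula-dir.conf:

# bacula-sd2.conf (second storage daemon)
Storage {
  Name = hostname-sd2
  SDPort = 9104                         # the first SD keeps the default 9103
  WorkingDirectory = "/var/lib/bacula-sd2"
  Pid Directory = "/var/run"
}

# bacula-dir.conf
Storage {
  Name = SecondSD
  Address = backup.example.com
  SDPort = 9104
  Password = "sd2-password"             # must match the Director resource in bacula-sd2.conf
  Device = FileStorage2                 # a Device defined in bacula-sd2.conf
  Media Type = File
}

The second config also needs its own Director and Device resources, and you would start it with something like "bacula-sd -c /etc/bacula/bacula-sd2.conf" from your own init script.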
Hey,
Can I have one director and two SD's on the same box?
--- On Tue, 1/5/10, Timo Neuvonen wrote:
> From: Timo Neuvonen
> Subject: Re: [Bacula-users] Help with retention period
> To: bacula-users@lists.sourceforge.net
> Date: Tuesday, January 5, 2010, 1:03 AM
> "Hemant Shah"
> wrote in message
> news:897496.89613...@web51606.mail.re2.yahoo.com
Hi there,
I'm struggling with my LTO3 autochanger (Quantum Superloader3). We're
using HP tapes of 400/800 GB capacity (uncompressed/compressed).
Everything has been running fine for about 3 years now (OS: OpenSuSE
10.2, package bacula-postgresql-2.2.5-1), but we're starting to really
fill our tape
Anton Albajes-Eizagirre wrote:
> I set the test pool (on test tape added to it) as:
>
> # pool for tests
> Pool {
> Name = pooltest
> Pool Type = Backup
> AutoPrune = yes
> Recycle = yes
> Volume retention = 1 minute
> # Use Volume Once = yes
> # Volume Retention = 365 days
> # Maximu
When you find this error, can you access the mount point and find the device?
I don't know the right answer, but I think you can try to use
RunBeforeJob and RunAfterJob to mount/umount your volume.
That way you can call a script that you are sure works well.
CIAO
Carlo
2009/12/22 Ol
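Something along these lines in the Job resource, where the mount point is just an example path:

Job {
  Name = "BackupToUSB"                      # placeholder name
  JobDefs = "DefaultJob"                    # assumes an existing JobDefs resource
  RunBeforeJob = "/bin/mount /mnt/usb1"     # example mount point
  RunAfterJob  = "/bin/umount /mnt/usb1"
}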
Thanks John and the others for the reply,
On Mon, 2010-01-04 at 13:06 -0500, John Drescher wrote:
> > I'm trying to set a backup system where i'd use a single tape to store a
> > full backup on a single job.
> >
> I would never recommend such a policy because while you are backing up
> the data ea
>
> Looks like is it - yes. Unmounting does not work.
> Mounting works by the way - This was PEBKAC (forgot to restart bacula-
> dir)
>
>
> --
> Oliver Lehmann
Had a similar issue using Celerra sourced NFS mounts on my Bacula box. I ended
up simply using autofs/automount to get around it and
Good..
I read it in the documentation some time ago, but I didn't do anything about it!!
:)
Bye
Carlo
2010/1/5 James Harper
> >
> > Hi,
> > I have a Windows 2008 Server 64bit, with Exchange 2007.
> >
> > I found this error on backups:
> >
> > --
> > 05-Jan 11:26 sic-s
On Tue, Jan 5, 2010 at 12:18 PM, John Drescher wrote:
> On Tue, Jan 5, 2010 at 4:26 AM, Javier Barroso wrote:
>> Hi people,
>>
>> First, I'm using an old bacula version (etch version 1.38.11-8), so I
>> know this is a 2006 question :(
...
>> # mtx -f /dev/autochanger1 load 4 0
>> * mount
>> * stat
>
> Hi,
> I have a Windows 2008 Server 64bit, with Exchange 2007.
>
> I found this error on backups:
>
> --
> 05-Jan 11:26 sic-sbs-fd JobId 176: Preparing Exchange Backup for
SIC-SBS
> 05-Jan 11:26 sic-sbs-fd JobId 176: Error: HrESEBackupSetup failed with
error
>
On Tue, Jan 5, 2010 at 4:26 AM, Javier Barroso wrote:
> Hi people,
>
> First, I'm using an old bacula version (etch version 1.38.11-8), so I
> know this is a 2006 question :(
>
> I have a problem, I searched in this list and your bugtracker, but I
> couldn't find any response that solves my issue.
Hi,
I have a Windows 2008 Server 64bit, with Exchange 2007.
I found this error on backups:
--
05-Jan 11:26 sic-sbs-fd JobId 176: Preparing Exchange Backup for
SIC-SBS
05-Jan 11:26 sic-sbs-fd JobId 176: Error: HrESEBackupSetup failed with error
0xc800020e - Unknown
Hi all,
I don't know why, but if I run the BackupCatalog job manually, all goes well.
If I let it run after all the daily backups, it crashes without any log.
How can I debug it?
CIAO,
Carlo
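One way to get more detail is to raise the debug level in bconsole before the nightly run and then review the messages afterwards; the level and the storage name here are arbitrary:

* setdebug level=100 dir
* setdebug level=100 storage=File
* messages

If you are using the stock BackupCatalog job, it might also be worth checking whether its RunBeforeJob catalog dump script fails when it runs after the other jobs.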
Hi people,
First, I'm using an old bacula version (etch version 1.38.11-8), so I
know this is a 2006 question :(
I have a problem, I searched in this list and your bugtracker, but I
couldn't find any response that solves my issue.
Today, after recover from an autochanger issue (see thread about
s
On Fri, Nov 6, 2009 at 6:23 PM, Alan Brown wrote:
> On Fri, 6 Nov 2009, Javier Barroso wrote:
>
>> I'm tracking the problem with HP:
>> http://forums.itrc.hp.com/service/forums/questionanswer.do?threadId=1384395
>>
>> I think could be this issue:
>> http://h2.www2.hp.com/bizsupport/TechSupport