Re: [Bacula-users] Want to run dbcheck but can't because of database size

2007-02-07 Thread James Cort
Juan Luis Frances wrote:
> What version of bacula?
>
> I had the same problems with 1.38.5.  I attach my "dirty" patch.
>
>   
Should have mentioned: it's 1.38.9 and running a Postgres backend.

Your patch should work though, as dbcheck.c hasn't changed between those 
two versions.  I'll give it a go.


Many thanks,

James Cort
-- 
U4EA Technologies
http://www.u4eatech.com




Re: [Bacula-users] Want to run dbcheck but can't because of database size

2007-02-12 Thread James Cort
Juan Luis Frances wrote:
> Now the correct patch, sorry ;-)
>
> El Wednesday 07 February 2007 12:29:40 Juan Luis Frances escribió:
>   
>> What version of bacula?
>>
>> I had the same problems with 1.38.5.  I attach my "dirty" patch.
>> 

Many thanks for your patch.  I compiled it, and it's been running 
against my database since last Thursday.

It's still going.  I've long since lost count of how many records it's 
cleared out, but the database backup is slowly shrinking.


-- 
U4EA Technologies
http://www.u4eatech.com




[Bacula-users] Can't get migrating jobs across storage daemons to work

2007-03-23 Thread James Cort
Hi,

I have two tape drives.  One is a DLT-V4, the other is Dell Powervault 
124T with an LTO-3 drive.

I want to migrate everything from the DLT-V4 to the Dell.  Ultimately, I 
want to control two SDs (for reasons which aren't particularly relevant 
to this discussion) from a single director, so I'm trying that as well.

I don't know if this is by design, so please forgive me if this is a 
silly question.  Is it intended to be impossible to migrate jobs between 
storage daemons running on two different systems?

The reason I ask is that I've got my migration jobs set up and two 
Storage stanzas in my bacula-dir.conf, thus:

Storage {
  Name = Tape
  Address = gemini.sw.u4eatech.com
  SDPort = 9103
  Password = "k/4vLxXTDmep20nzgbOPOmPBRHLIXMxwyGm/Vm66yvHk"
  Device = "PowerVault-124T"
  Media Type = LTO-Tape
  Maximum Concurrent Jobs = 20
}

Storage {
  Name = "DLT-Tape"
  Address = restore.u4eatech.com
  SDPort = 9103
  Password = "3b127404d9db1ae9d5916138508bb3834775a45e"
  Device = "DLT-160"
  Media Type = "DLT-Tape"
  Maximum Concurrent Jobs = 20
}

Yet the migration fails with the following error:

23-Mar 11:28 gemini-sd: Failed command:
23-Mar 11:28 gemini-sd: migrate-annuals.2007-03-23_11.28.42 Fatal error:
 Device "DLT-160" with MediaType "DLT-Tape" requested by DIR not 
found in SD Device resources.
23-Mar 11:28 gemini-dir: migrate-annuals.2007-03-23_11.28.42 Fatal error:
 Storage daemon didn't accept Device "DLT-160" because:
 3924 Device "DLT-160" not in SD Device resources.


I note that the error is coming from the storage daemon on gemini, yet 
the director should be going to the storage daemon on 
restore.u4eatech.com.

-- 
U4EA Technologies
http://www.u4eatech.com




Re: [Bacula-users] Can't get migrating jobs across storage daemons to work

2007-03-23 Thread James Cort

Kern Sibbald wrote:

> Migrating across Storage daemons is not implemented.

Thought as much.

For the record, in case anyone else brings it up (and to make sure 
there's a suggestion in the list archives) I'm working around it as 
follows:


* Setting up another pool which is disk-based going to files on an 
NFS-mounted filesystem.

* Setting up *both* storage daemons to use this disk.
* Migrating from the DLT-v4 drive to the pool on the NFS mounted 
filesystem.

* Unmounting the NFS filesystem on the DLT-v4 SD
* Mounting it on the PowerVault SD
* Migrating from the pool on the NFS-mounted filesystem to LTO-3 tapes 
on the Powervault.


Hopefully this will work.  I'm ensuring data integrity by not mounting 
the NFS filesystem on both SDs at the same time.  I have no idea what 
would happen if both were to try accessing the same volume 
simultaneously and I have no wish to find out.  Seeing as I only have 
about 10 volumes to be migrated, this shouldn't be too laborious.
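
For completeness, the staging device is just an ordinary file-backed 
Device, defined the same way in both SDs' bacula-sd.conf.  The names and 
path below are only illustrative (my real ones differ), and only one SD 
has the NFS filesystem mounted at any given time:

Device {
  Name = "NFS-Staging"
  Media Type = File                       # file volumes on the NFS mount
  Archive Device = /mnt/migration-staging
  LabelMedia = yes
  Random Access = yes
  AutomaticMount = yes
  RemovableMedia = no
}

Each director-side Storage resource then points at this device on the 
relevant SD, and the intermediate pool is an ordinary file-based Backup 
pool.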


--
U4EA Technologies
http://www.u4eatech.com



Re: [Bacula-users] Can't get migrating jobs across storage daemons to work

2007-03-23 Thread James Cort
Kern Sibbald wrote:
> If Bacula is writing to Volumes mounted on an NFS filesystem, you will 
> probably take an *enormous* performance hit compared to a local disk.
>
>   
Thanks for letting me know, but considering it's coming off a DLT-v4 
drive (max. speed: 10MB/sec, typical speed about half that) with an SD 
that has nothing better to do, I'm not too concerned.  The SD with the much 
faster LTO drive has a gigabit ethernet connection to the NFS server.
-- 
U4EA Technologies
http://www.u4eatech.com




[Bacula-users] Bacula recycling volumes before their time

2007-04-02 Thread James Cort
Hi,

I'm using bacula 2.0.3 and I'm having a problem with Bacula recycling 
volumes before I want it to.  This problem has been ongoing since 
version 1.38.something so it's either a very longstanding bug or my 
configuration is broken.

What equipment I have to make sure this all works:

A Dell PowerVault 124T tape autochanger.  Holds 16 tapes, has a barcode 
reader and seems to work perfectly with Bacula.

A scratch pool containing as-yet unpooled tapes.  This scratch pool has 
never had less than 3 tapes in it.  Each day's backup requires 1 tape.


What I want to happen:

Three pools of tapes, Daily, Monthly and Annual.  Daily are recycled 
once a month, Monthly are recycled once a year and Annual are never 
recycled (well, actually they're recycled every 20 years but in the real 
world that means never). 

I don't want to purge any records for tapes which aren't due for 
recycling.  As long as the tape itself hasn't been recycled, I want to 
be able to restore any file from it without resorting to bscan.  I've 
worked out how much space this will require database-wise and I'm 
confident that won't be a problem.

I've had the tape changer less than a year, so in practice I don't 
expect to see any monthly tapes coming up for recycling for a while 
yet.  Instead, I would expect tapes to be taken from the Scratch pool.


What I see happening:

Daily tapes are recycled as necessary at the end of the month.  This is 
fine.
Monthly tapes are being recycled after 2 months, and the Scratch pool is 
ignored.  The backup for 1 April was paused (until I cleared it this 
morning) with the message below:

01-Apr 00:05 gemini-sd: Please mount Volume "28L3" on Storage Device "LTO3" 
(/dev/nst0) for Job atlas.2007-04-01_00.05.00


Volume 28L3 was used on 1 February 2007, so shouldn't be coming up 
for recycling until next February. 


How things are currently configured:

My current configuration has the same File and Job retention for every 
job, regardless of which pool they're going to.  The configuration for 
each job is:


  File Retention = 90 days
  Job Retention = 1 month

I'm sure this is wrong, but I'm not sure what it should be.  I want 
every file and job record purged when the volume itself is up for 
recycling and not before.

Pool configurations are:

# Tape pool

Pool {
  Name = DailyTape
  Pool Type = Backup
  Recycle = Yes
  AutoPrune = yes
  Recycle Current Volume = yes
  Volume Retention = 27 days
  Volume Use Duration = 23 hours
}

Pool {
  Name = MonthlyTape
  Pool Type = Backup
  Recycle = Yes
  AutoPrune = Yes
  Volume Retention = 360 days
  Recycle Current Volume = No
  Volume Use Duration = 23 hours
  Storage = Tape
}

Pool {
  Name = AnnualTape
  Pool Type = Backup
  Recycle = No
  AutoPrune = No
  Storage = Tape
  Volume Retention = 20 years
  Recycle Current Volume = No
  Volume use duration = 23 hours
}


The jobs which write to tape run on the following schedules:

Schedule {
  Name = "WeeklyCycle"
  Run = Level=Full Pool=AnnualTape on 1 jan at 00:05
  Run = Level=Full Pool=MonthlyTape on 1 feb-dec at 00:05
  Run = Level=Full Pool=DailyTape on 2-31 at 00:05
}

Schedule {
  Name = "AmericanWeeklyCycle"
  Run = Level=Full Pool=AnnualTape on 1 jan at 07:05
  Run = Level=Full Pool=MonthlyTape on 1 feb-dec at 07:05
  Run = Level=Full Pool=DailyTape on 2-31 at 07:05
}

Schedule {
  Name = "WeeklyCycleAfterBackup"
  Run = Level=Full Pool=AnnualTape on 1 jan at 07:05
  Run = Level=Full Pool=MonthlyTape on 1 feb-dec at 07:05
  Run = Level=Full Pool=DailyTape on 2-31 at 07:05
}


-- 
U4EA Technologies
http://www.u4eatech.com




Re: [Bacula-users] Bacula recycling volumes before their time

2007-04-03 Thread James Cort

Ryan Novosielski wrote:


> James Cort wrote:
>> I don't want to purge any records for tapes which aren't due for 
>> recycling.  As long as the tape itself hasn't been recycled, I want 
>> to be able to restore any file from it without resorting to bscan.  
>> I've worked out how much space this will require database-wise and 
>> I'm confident that won't be a problem.
>
> If so, just out of curiosity, how did you arrive at your file and job
> retention times?


Originally it was part of a policy which predated me starting at this 
company.  I've changed the backup mechanism (used to be a bunch of shell 
scripts which I wasn't entirely happy with) but I've retained the policy 
as it has proven somewhat useful on a number of occasions now.


>> My current configuration has the same File and Job retention for 
>> every job, regardless of which pool they're going to.  The 
>> configuration for each job is:
>>
>>   File Retention = 90 days
>>   Job Retention = 1 month
>>
>> I'm sure this is wrong, but I'm not sure what it should be.  I want 
>> every file and job record purged when the volume itself is up for 
>> recycling and not before.
>
> If I'm not mistaken, if you prune the job, it will take the files with
> it (as there will be nothing to restore from).

I think you may be.  Certainly dbcheck takes ages, as I appear to have 
many millions of orphan file entries.  (This is another thing I'm 
working on; it would be nice to kill two birds with one stone.)



> If you don't want
> anything special to happen here, seems to me you'd want to set volume
> retention times and pull these two directives out of the file as I
> really don't think you need either one. What you have now says "prune
> the listing of the individual files at 90 days but prune the jobs that
> contain them at 1 month." As a result, I'd expect the behavior to be
> "everything goes at 1 mo."

I'll give that a go.  I think there will still be some sort of retention 
applied, as the default File retention period is 60 days and the default 
Job retention period is 180 days, per the manual.

It will be interesting to see whether the volumes in the Daily pool are 
still recycled when I'd expect, given a Volume retention rather shorter 
than the File/Job retention periods.



--
U4EA Technologies
http://www.u4eatech.com



Re: [Bacula-users] Bacula recycling volumes before their time

2007-04-03 Thread James Cort

Arno Lehmann wrote:

> Hi,
>
> On 4/3/2007 11:31 AM, James Cort wrote:
>> Ryan Novosielski wrote:
>>> If I'm not mistaken, if you prune the job, it will take the files with
>>> it (as there will be nothing to restore from).
>>
>> I think you may be.
>
> I think Ryan is right :-)
>
> You can find an explanation of the recycling algorithm in the manual: 
> Look at the sections explaining the Retention settings in the DIR 
> configuration.

Yes.  As far as I can tell, I should never have had the problem I did in 
the first place:


   Note, even if all the File and Job records are pruned from a Volume, 
the Volume will not be marked Purged until the Volume retention period 
expires.


Given my original configuration, I'd have expected a bunch of volumes 
which, while not recycled, were effectively "in limbo" as there would 
have been no jobs or file records referencing them but they'd still have 
been stuck on their volume retention.  Therefore, I would have expected 
Bacula to take a tape from the scratch pool.  Particularly as the volume 
it sat around waiting for wasn't in the autochanger.


Unless the volume retention information wasn't updated when the volumes 
were moved to their relevant pool from the Scratch pool.


> What you want sounds like setting the File and Job Retention to a longer 
> time than the Volume Retention period.
>
> I prefer a Job Retention of 30 years in such a case, and limiting the 
> Volume retention to the actual time you want to keep your data.


AIUI, if, like me, you want to keep all the file/job data until the 
volume itself is recycled, you'd suggest setting file and job retention 
periods to be longer than the longest volume retention period.


I've done that.  Guess all I can do now is wait and see for a few months.
It will be interesting to see whether the volumes in the Daily pool are 
still recycled when I'd expect, given a Volume retention rather shorter 
than the File/Job retention periods.
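
For the record, what that amounts to is roughly the following in each 
Client resource (a sketch only - the client name is just an example and 
the other directives are unchanged; 30 years follows Arno's suggestion 
and comfortably exceeds the 20-year annual Volume Retention):

Client {
  Name = atlas-fd                # illustrative - one client of several
  # Address, Password, Catalog etc. as before
  File Retention = 30 years      # longer than any Volume Retention
  Job Retention = 30 years
  AutoPrune = yes                # records now only go away when the
                                 # volume they are on is pruned/recycled
}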



> Keep in mind that you have to update the existing volumes to reflect a 
> modified configuration!

Volumes, yes.  But the volume retention period hasn't changed - it's the 
file/job retention periods that are changing.  I didn't think they 
needed to be updated.


--
U4EA Technologies
http://www.u4eatech.com



[Bacula-users] Can I limit how deep into the directory tree file records are kept?

2006-12-14 Thread James Cort
Hi,

A bit of background: I back up about a dozen servers with Bacula, have 
been doing so since about April, and I recycle tapes every month (except 
end-of-month tapes, which are kept for a year).  My database is now 
around 17GB and growing at a rate of about 2GB/month.

I'm pretty certain my mailserver is the main culprit for this.  We store 
email in Unix maildir format, which means that every email which is 
received takes up a file record.

I'm never likely to try restoring a single email by its filename in a 
maildir, so there's really no point in storing this level of detail.   
Far more useful would be to record just directories, so I can restore 
entire directories but nothing more granular.

I could write a prebackup script which tars all the directories in /home 
into individual tar files, then just back those up. But this requires a 
lot of extra disk space on the mailserver, which I don't have.  And with 
a 46GB mail store, I'd like to avoid adding extra processing.

I can't find any mention of such a feature in the docs - does it exist?


James Cort



Re: [Bacula-users] tape marked full, but has 340GB free

2006-12-15 Thread James Cort
Nick Jones wrote:
> Does anyone know why or how my tape7 could have been marked full even 
> though it is the same capacity as the other tapes and is not full 
> according to bacula.  Now it is appending to tape4.  Also, it is hard 
> to read but it says 0 for 'in changer' for tapes 1,2, and 4 which are 
> all in the changer and were listed as such at one time.  All I did was 
> run a backup.  How did these values become changed?  How can I tell 
> bacula that tape7 is not full (it's pretty obvious it's not full 
> according to bacula's records).  Also is it a problem that the tapes 
> are no longer 'in changer' ?
>
> Thanks alot for help or suggestions
I've had exactly that happen when the tape media reports an error.  I 
can't remember exactly where it's written, but IIRC the documentation 
says something to the effect that it's not always possible to determine 
the difference between a "tape full" message and an error on the tape.



Re: [Bacula-users] Volume use duration

2007-01-08 Thread James Cort
Bill Moran wrote:
> In response to Amiche <[EMAIL PROTECTED]>:
>
>   
>> Whan bacula start the backup of the night on a tape volume, I want the 
>> status of
>> the volume pass immediatly as Used when the last job is terminated.
>> So I defined  "Volume Use Duration = 10h" but Bacula wait for the next job 
>> using
>> this volume to mark it as Used,and not immediatly after the last job. I need 
>> to
>> have the check "Max configured use duration exceeded" just after my last job.
>> Because my jobs start at 4am, and the status is marked as used because Max
>> configured use duration exceeded, and bacula wait for a new volume until an
>> operator come and change it in the morning.
>> Is it possible to check the Volume use duration on a volume without starting 
>> a
>> new job on this volume?
>> 
>
> Configure an admin job.  See:
> http://bacula.org/rel-manual/Configuring_Director.html#JobResource
>   

Does that work with versions prior to 2.0? 



[Bacula-users] Want to run dbcheck but can't because of database size

2007-02-07 Thread James Cort
Dear All,

My database has grown in size to the point where it's now 18GB.

I'd like to see if that can be pruned at all, but unfortunately my box 
only has 1GB of RAM and dbcheck fails rather horribly with an 
out-of-memory error when doing the check for orphaned file records:

Checking for orphaned File entries. This may take some time!
Query failed: SELECT File.FileId,Job.JobId FROM File LEFT OUTER JOIN Job 
ON (File.JobId=Job.JobId) WHERE Job.JobId IS NULL: ERR=out of memory for 
query result

Even with 20GB of swap this doesn't help in the slightest.  I notice 
that it doesn't use anywhere near that amount of swap before running out 
of memory, though - perhaps the per-process memory limit is the problem?

Has anyone encountered (and more importantly solved) a similar problem 
in the past?
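
(For the archives: one workaround I've been considering - sketched below 
and not yet tested against my data - is to do the orphan check directly 
in psql, so the result set stays on the server rather than being pulled 
into dbcheck's memory:)

-- Count the orphaned File rows, i.e. rows whose JobId no longer exists:
SELECT count(*)
  FROM File LEFT OUTER JOIN Job ON (File.JobId = Job.JobId)
 WHERE Job.JobId IS NULL;

-- ...and remove them in one server-side pass, with Bacula stopped:
DELETE FROM File WHERE JobId NOT IN (SELECT JobId FROM Job);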


James Cort
-- 
U4EA Technologies
http://www.u4eatech.com




[Bacula-users] Automatic labelling w/o autochanger doesn't

2007-07-31 Thread James Cort
(re-sent because I sent it to the -bounces address - oops)

Hi,

I want to set up Bacula to automatically label any blank volumes.  The 
volumes in question are DLT tapes, using a plain tape drive (no 
autochanger).

The director's configuration is slightly complicated by the fact that 
there are two SDs in my configuration which are geographically separate 
(the bandwidth available over SCSI is substantially higher than the 
bandwidth we can afford to lay on between two countries).  Nevertheless, 
I think I've got that part of the problem cracked, because if I use a 
pre-labelled tape, I can run backups as intended.

The SD which needs to have its tapes autolabelled has the following 
Device stanza:


Device {
 Name = "DLT-160"
 Media Type = DLT-Tape
 Archive Device = /dev/nst0
 LabelMedia = yes;   # lets Bacula label unlabeled media
 AutomaticMount = yes;   # when device opened, read it
}

The pool which refers to this device is:

Pool {
 Name = DailyTape-DLT
 Pool Type = Backup
 Recycle = Yes
 AutoPrune = yes
 Storage = DLT-Tape
 LabelFormat = "Daily-DLT-"
 Recycle Current Volume = yes
}

I understand that I must attempt to mount an unlabelled tape before 
Bacula will label it.  I've done this, and then I kick off a backup.  I 
would have expected Bacula to realise that there are no appropriate 
tapes available BUT there's a blank one mounted, and to label it 
appropriately.

Instead, however, I get this:

*
*mes
31-Jul 16:14 gemini-dir: Start Backup JobId 4719, 
Job=dns-france.france.u4eatech.com.2007-07-31_16.14.38
31-Jul 16:14 gemini-dir: Created new Volume "Daily-DLT-0006" in catalog.
31-Jul 16:14 taranis-sd: Please mount Volume "Daily-DLT-0006" on Storage 
Device "DLT-160" (/dev/nst0) for Job 
dns-france.france.u4eatech.com.2007-07-31_16.14.38

- and it'll sit there indefinitely waiting for me to mount a volume 
which it's created in the catalog but not labelled.

The director is running 2.0.3 (gentoo build), the SD is running 2.0.3-4 
(debian etch) - is it possible that there's an incompatibility between 
these versions or have I missed something?

Many thanks,

James Cort
-- 
U4EA Technologies
http://www.u4eatech.com




Re: [Bacula-users] Automatic labelling w/o autochanger doesn't

2007-08-01 Thread James Cort
John Drescher wrote:
>> Instead, however, I get this:
>>
>> - and it'll sit there indefinitely waiting for me to mount a volume
>> which it's created in the catalog but not labelled.
>> 
> Type mount from the console, the actual labeling will happen after the
> drive mounts and it detects it has a blank tape.. I do not believe you
> can mount a blank tape.
Apologies for wasting time - it seems the tapes I got back from the bulk 
erasure service weren't very erased, and the tape drive had decided it 
did not wish to write to them.
-- 
U4EA Technologies
http://www.u4eatech.com




[Bacula-users] Recycling on an annual basis not working

2006-05-30 Thread James Cort
Hi,

I have 3 pools in my Bacula configuration:

DailyTape : Recycled once a month
MonthlyTape : Recycled once a year
AnnualTape : Recycled once every 20 years.

However, Bacula has decided that it wants to recycle last month's
monthly tape.  Could anyone suggest why this might be?

The various stanzas in bacula-dir.conf read:

Pool {
  Name = DailyTape
  Pool Type = Backup
  Recycle = Yes
  AutoPrune = yes
  Recycle Current Volume = yes
  Volume Retention = 27 days
  Accept Any Volume = yes
  Volume Use Duration = 20 hours
}

Pool {
  Name = MonthlyTape
  Pool Type = Backup
  Recycle = Yes
  AutoPrune = Yes
  Recycle Current Volume = yes
  Volume Retention = 12 months
  Accept Any Volume = yes
  Volume Use Duration = 20 hours
}

Pool {
  Name = AnnualTape
  Pool Type = Backup
  Recycle = No
  AutoPrune = No
  Volume Retention = 20 years  
  Accept Any Volume = yes
  Volume use duration = 20 hours
}


Pool: MonthlyTape
 mediaid      : 13
 volumename   : April
 volstatus    : Recycle
 volbytes     : 1
 volfiles     : 0
 volretention : 31,104,000
 recycle      : 1
 slot         : 0
 inchanger    : 1
 mediatype    : Tape
 lastwritten  : 2006-04-28 17:07:56






[Bacula-users] Re-read of last block failed : Is my understanding correct?

2006-06-02 Thread James Cort
Hi,

I've had the same problem with a few tapes now:


31-May 03:35 gemini-sd: Committing spooled data to Volume. Despooling
36,455,289,794 bytes ...
31-May 03:35 gemini-sd: cygnus_new.2006-05-31_00.05.02 Error:
block.c:552 Write error at 105:4237 on device /dev/nst0. ERR=Device or
resource busy.
31-May 03:35 gemini-sd: cygnus_new.2006-05-31_00.05.02 Error: Re-read of
last block failed. Last block=417630 Current block=0.
31-May 03:35 gemini-sd: End of medium on Volume "May"
Bytes=105,266,284,744 Blocks=1,631,737 at 31-May-2006 03:35.


02-Jun 02:32 gemini-sd: Committing spooled data to Volume. Despooling
29,049,953,244 bytes ...
02-Jun 03:30 gemini-sd: Sending spooled attrs to the Director.
Despooling 291,789,632 bytes ...
02-Jun 03:32 gemini-sd: Committing spooled data to Volume. Despooling
22,536,808,966 bytes ...
02-Jun 03:32 gemini-sd: cygnus_new.2006-06-02_00.05.02 Error:
block.c:552 Write error at 120:14021 on device /dev/nst0. ERR=Device or
resource busy.
02-Jun 03:33 gemini-sd: cygnus_new.2006-06-02_00.05.02 Error: Re-read of
last block failed. Last block=450220 Current block=0.
02-Jun 03:33 gemini-sd: End of medium on Volume "May_2"
Bytes=120,896,497,919 Blocks=1,874,021 at 02-Jun-2006 03:33.
02-Jun 03:35 gemini-sd: Please mount Volume "28" on Storage Device
"DLT-160" for Job cygnus_new.2006-06-02_00.05.02


The tapes I'm using have an uncompressed capacity of 160GB (DLTv4), so I
know that it hasn't really reached the end of the tape.

Am I right in thinking these errors are indicative of faulty tapes?


James.





Re: [Bacula-users] ERR=Device or resource busy

2006-06-05 Thread James Cort
Kern Sibbald wrote:
> I recommend the following things (obviously 2-4 are unnecessary if 1 fixes 
> the 
> problem):
>
> 1. This still looks most like a kernel driver problem to me.  Backing up to 
> kernel-2.6.16-1.2111 would most likely clear up this point.
>   

I've had a few kernel driver problems with this system in particular:

1.  One day a few months ago, for no apparent reason, none of the
kernels on the box would talk to the SCSI controller - even though it
had been working fine for ages before then.  An upgrade to a newer
kernel fixed the issue.

2.  This box hasn't been rebooted lately - and is therefore still using
that known-good kernel  (2.6.16.1  vanilla).  Was fine until the last
few days. 

Are there any known issues with LSI Logic SCSI cards?

The thing I do notice is that this always happens around 130-150 GB into
the tape:

 mediaid      : 33
 volumename   : 28
 volstatus    : Full
 volbytes     : 148,190,480,717
 volfiles     : 148
 volretention : 2,332,800
 recycle      : 1
 slot         : 0
 inchanger    : 1
 mediatype    : Tape
 lastwritten  : 2006-06-05 14:15:35


It's a Quantum DLTv4 drive which is only physically capable of writing
to 160/320GB tapes so there's no earthly way it's filled up that tape.

Most of the tapes it hasn't liked have thus far been recycled tapes
which were previously written at 80/160GB, but have since been passed
through a bulk eraser.

> 2. Rebuild and reinstall Bacula (in case there are some library changes).
>   
Haven't put anything new on there lately regarding Bacula.
> 3. Clean your tape drive.
>
>   
It's about 6 weeks old.
> 4. Mark the current Volume as Used and try a different one.

Done that with 4 volumes now.





Re: [Bacula-users] ERR=Device or resource busy

2006-06-06 Thread James Cort
Kern Sibbald wrote:
> Unfortunately, Bacula is sufficiently demanding that it often brings out 
> driver problems that don't show up using most Unix tape utilities, which tend 
> to be rather "simple" minded. They either simply write() or read(). Bacula 
> uses quite a lot more features of the drive.
>   

That's very interesting.

I've lost a certain degree of faith in the SCSI card I'm using; it's not
particularly common so there's every possibility the driver hasn't had
as much exercise as some of the more common SCSI card drivers.

I'm wondering if it's worth replacing the SCSI card with something a
little more commonplace - I'm thinking a reasonably sensible Adaptec
card right now, maybe using the aic7xxx driver - has anyone had any
experience of these?





[Bacula-users] Tapes ending before they're full

2006-06-22 Thread James Cort
A few weeks ago I posted to this list about a problem with my tapes
encountering errors around the 100G mark, which resulted in Bacula
declaring end of medium and moving on to another tape.

As suggested, I swapped the SCSI card for an Adaptec unit, however the
problem persists:

22-Jun 12:16 gemini-sd: Sending spooled attrs to the Director.
Despooling 217,482,871 bytes ...
22-Jun 12:42 gemini-sd: Committing spooled data to Volume. Despooling
18,631,986,113 bytes ...
22-Jun 12:42 gemini-sd: cygnus_new.2006-06-22_08.58.01 Error:
block.c:552 Write error at 104:2205 on device /dev/nst0. ERR=Device or
resource busy.
22-Jun 12:43 gemini-sd: cygnus_new.2006-06-22_08.58.01 Error: Re-read of
last block failed. Last block=125230 Current block=0.
22-Jun 12:43 gemini-sd: End of medium on Volume "10"
Bytes=104,135,520,221 Blocks=1,614,205 at 22-Jun-2006 12:43.
22-Jun 12:45 gemini-dir: Recycled volume "11"
22-Jun 12:45 gemini-sd: Please mount Volume "11" on Storage Device
"DLT-160" for Job cygnus_new.2006-06-22_08.58.01

This does not happen with all tapes.

As before, the tapes are 160/320G DLTv4 tapes.  Many were previously
used as DLT-VS160 (capacity: 80/160G); these have been bulk erased, so
they should be good to write on - though unfortunately I have no means
of finding out which ones they are, as they're all mixed in with tapes
which have never been used before.

The tape drive (a Quantum DLTv4) cannot and will not write onto tapes
which were previously written as DLT-VS160's unless they're bulk-erased
first.

Bacula version 1.36.3-r2, director and storage daemon both on the same box.

Any suggestions, or should I just cycle out any tapes which give this error?

James.




[Bacula-users] Mixing 1.38.9 SD/director with 1.36 FD

2006-07-10 Thread James Cort
Hi,

I know the release notes for 1.38 say "upgrade everything", but I
thought I'd ask on the off-chance.

Has anyone successfully used a 1.36 FD in conjunction with a 1.38.9
SD/director?  My own limited testing suggests that it works OK, but I
was wondering if anyone else has tried the same.


James




Re: [Bacula-users] mucking with autochanger

2008-10-16 Thread James Cort
Craig White wrote:
> 
> so it only shows the 8 slots in the left tray but not the 8 slots in the
> right tray. I expected to be able to use both trays.

I have the exact same unit.  You need to hit the web interface on the
unit and tell it that both magazines are fitted or only 8 slots will
appear to Bacula.

Go to the web interface and it's under the "Configuration" menu, second
option from the bottom on mine.

-- 
James Cort

IT Manager
U4EA Technologies Ltd.

-- 
U4EA Technologies
http://www.u4eatech.com




Re: [Bacula-users] [Bacula-devel] Fwd: 64bit for File.FileId

2008-11-07 Thread James Cort
I had hoped to avoid upgrading to 2.4 because I've got a lot of FDs
using much older versions of Bacula... but if I must, I must.

> PS: I would be interested to hear about your setup since you seem to have 
> quite big backup needs.

About 20 servers, several million files and around 800-900GB per night.
The big killer is the mail server which is courier-imap with a maildir
backend - every email is a file and they get fairly unique names.

I think I've hit this problem not because of the size of the database
but because of how Postgres works - serial numbers aren't reused and
looking at the postgres dump, a dump/reload probably doesn't regenerate
them from zero.  Well, I doubt it will because the serial numbers are
present in the SQL dump.  The table itself doesn't have anything like
2^31 entries:

bacula=# select count(fileid) from file;
   count
-----------
 174610251
(1 row)


I've been using Bacula for almost three years so that's 3 years worth of
fileids there.
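
(A quick way to see how far through the 32-bit range the catalog really 
is - the sequence name here is the default one Postgres creates for a 
serial column, so adjust it if yours differs:)

bacula=# SELECT last_value FROM file_fileid_seq;
bacula=# SELECT max(fileid), count(*) FROM file;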

-- 
James Cort

IT Manager
U4EA Technologies Ltd.

-- 
U4EA Technologies
http://www.u4eatech.com




[Bacula-users] Ran out of fileid's in postgres-backed database

2008-11-07 Thread James Cort
I appear to have run out of fileids in my bacula database.

I'm using bacula 2.2.8-8~bpo40+1  (the Debian Etch backported package)
and my backups have failed with the error:

Fatal error: Can't fill File table Query failed: INSERT INTO File
(FileIndex, JobId, PathId, FilenameId, LStat, MD5)SELECT
batch.FileIndex, batch.JobId, Path.PathId,
Filename.FilenameId,batch.LStat, batch.MD5 FROM batch JOIN Path ON
(batch.Path = Path.Path) JOIN Filename ON (batch.Name = Filename.Name):
ERR=ERROR:  integer out of range


I'm using Postgres as the backend and I note from the script which
bacula uses to setup the table "file" that fileid is of type serial.

bacula=> select max(fileid) from file;
     max
------------
 2147272756
(1 row)

2^31=2147483648


Postgresql's documentation states:

"The type names serial and serial4 are equivalent: both create integer
columns. The type names bigserial and serial8 work just the same way,
except that they create a bigint column. bigserial should be used if you
anticipate the use of more than 2^31 identifiers over the lifetime of
the table."

... but I'm not too keen on doing this unless I can be sure it won't
have a knock-on effect on Bacula.

What can I do to fix this?
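
For reference, the change I have in mind is simply to widen the column 
to bigint (untested on my catalog, and it would be done with the 
director stopped - I believe the underlying sequence is already 64-bit, 
so only the column type needs altering):

bacula=> ALTER TABLE file ALTER COLUMN fileid TYPE bigint;

I'd expect that to rewrite the whole table, so it will take a while and 
need a fair amount of free disk space.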


James

-- 
U4EA Technologies
http://www.u4eatech.com




Re: [Bacula-users] Looking for a better setup

2008-11-12 Thread James Cort
junior.listas wrote:
> 1) mysql tables became huge ( each backup adds a million and six  
> hundred thousand lines ), so i split the configuration into 2 daemons 
> with 2 different bases, one for mon,tue,wed and other for thu,fri( and 
> one 3th for monthly bkps ) ; because between a backup starts, delete old 
> lines and start add newest lines take 45 mins; just to re-create the 
> catalog, before restore anything it takes 1 hour.

I have a similar problem with postgres; the catalog database is now so
large that restoring it from a pg_dump backup takes 3-4 hours.  This is
before you account for the length of time taken to read it from the tape.

In a DR scenario, that's 3-4 hours sitting there twiddling your thumbs
waiting for the catalog to come back so you can do some useful restores.

My solution so far has been to script a database sync and an LVM snapshot
of the volume on which the database resides, and then back up the snapshot.

In theory at least, restoring the snapshot should give me the underlying
files back in a state no worse than after a system crash.  Provided things
like wal_sync are enabled, everything should be OK.

All my testing indicates that this should work but I'm a little nervous
as it's far from a properly supported solution.
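
(One small refinement I'm considering, purely as a sketch: issue a
CHECKPOINT from psql immediately before taking the snapshot, so that the
crash recovery the restored copy has to do replays as little WAL as
possible:)

bacula=# CHECKPOINT;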


James.

-- 
James Cort

IT Manager
U4EA Technologies Ltd.

-- 
U4EA Technologies
http://www.u4eatech.com




Re: [Bacula-users] Don't understant recycling and Scratch pool

2008-11-13 Thread James Cort
LeJav wrote:
> Any explanation on the recycle mechanism is welcome !

The scratch pool is only used for newly labelled volumes.  If there are
no spare tapes in the pool that the current backup is taking place on,
Bacula will look to see if it can take one from the scratch pool.  It
won't look for volumes which may be recycled but are part of another
pool, nor will it move volumes back to scratch when they become
eligible for recycling.  Though that's an interesting idea.

HTH,

James.

-- 
James Cort

IT Manager
U4EA Technologies Ltd.

-- 
U4EA Technologies
http://www.u4eatech.com




Re: [Bacula-users] Don't understant recycling and Scratch pool

2008-11-13 Thread James Cort
Attila Fülöp wrote:
> > You can use "RecyclePool = Scratch" in the pool resource:
> >
> > RecyclePool = <pool-resource-name>  On versions 2.1.4 or greater,
> > this directive defines to which pool the Volume will be placed
(moved)
> > when it is recycled.
> >

Well, you learn something every day.

I'm not clear from the manual - I would think it logical that such a
line goes into the pool definition for the source pool, eg:


Pool {
  Name = DailyTape
  Pool Type = Backup
  Cleaning Prefix = "CLN"
  Recycle = Yes
  AutoPrune = yes
  Recycle Current Volume = yes
  Volume Retention = 27 days
  Volume Use Duration = 23 hours
  RecyclePool = Scratch
}

however, the manual states:

"This directive is probably most useful when defined in the Scratch
pool"  - surely the scratch pool is one pool in which you wouldn't
define this?

-- 
James Cort

IT Manager
U4EA Technologies Ltd.

-- 
U4EA Technologies
http://www.u4eatech.com




Re: [Bacula-users] Occasional read error when labelling tapes in autochanger (Bacula 2.4.3 on SLES 10.1)

2008-11-21 Thread James Cort
[EMAIL PROTECTED] wrote:

> block.c:999 Read error on fd=5 at file:blk 0:0 on device "LTO-4"
> (/dev/nst0). ERR=Input/output error.
> 3000 OK label. VolBytes=64512 DVD=0 Volume="weekly005" Device="LTO-4"
> (/dev/nst0)
> Catalog record for Volume "weekly005", Slot 11  successfully created.
> Requesting to mount LTO-4 ...
> 3001 Device "LTO-4" (/dev/nst0) is mounted with Volume "weekly005"
> 
> 
> Note the "Read error" near the bottom. The label operation seems to
> complete without problems (the tapes are correctly labelled) and
> subsequent backups work fine. But should I be concerned about this
> message or can I safely ignore it?

IIRC it's caused by Bacula trying to read the tape to see if it's
already labelled prior to writing its own label.  Obviously if the tape
is new, there won't be anything on there.

I've seen the exact same thing across two tape drives (and indeed types
of tape media) for the last couple of years - it can safely be ignored.

-- 
James Cort

IT Manager
U4EA Technologies Ltd.

-- 
U4EA Technologies
http://www.u4eatech.com




Re: [Bacula-users] How to set up large database backup

2008-11-25 Thread James Cort
David Jurke wrote:
> 
> The method we’re using for now is to back up the database by copying it
> to disk on the backup server (via NFS), and then back that up to tape.
> Trouble is, this is handling the data twice, and is currently taking
> well over twelve hours all up, which given the expected growth is going
> to become untenable fairly soon, so any suggestions gratefully received!
> Not to mention we’re going to run out of disk space!!

You don't mention what OS or DBMS the database is; given the context,
I'm assuming it's for some business process other than Bacula.

Does the SAN support snapshots?  You could always take a snapshot, back
it up and then remove the snapshot afterwards.  There may need to be
some scripting to make sure the database is in a consistent state before
you do this, but it'd be a lot quicker than copying the data.

-- 
James Cort

IT Manager
U4EA Technologies Ltd.

-- 
U4EA Technologies
http://www.u4eatech.com




Re: [Bacula-users] Large maildir backup

2008-11-27 Thread James Cort
Silver Salonen wrote:
> On Thursday 27 November 2008 09:50:14 Proskurin Kirill wrote:
>> Hello all!
>>
>> Soon I will deploy a large email server - it will use maildirs and will 
>> be about 1Tb of mail with really many small files.
>>
>> It is any hints to make a backup via bacula of this?
>>
> I think Bacula is quite good for backing up maildirs as they constist of 
> separate files as e-mail messages. I don't think small files are a problem.

I don't think they're a problem either and I also backup a maildir-based
mail server.

However, one thing you may want to be aware of - unless you take
specific steps to avoid it, the maildirs on tape won't necessarily be in
a consistent state.  Obviously this won't affect your IMAP server - but
it does mean that when you restore, metadata like whether or not emails
have been read or replied to and recently received/sent email won't be a
perfect snapshot of how the mailserver looked at any given point in time.


-- 
James Cort

IT Manager
U4EA Technologies Ltd.

-- 
U4EA Technologies
http://www.u4eatech.com




Re: [Bacula-users] Large maildir backup

2008-11-27 Thread James Cort
Arno Lehmann wrote:
> - Tune your catalog database for faster inserts. That can mean moving 
> it to a faster machine, assigning more memory for it, or dropping some 
> indexes (during inserts). If you're not yet using batch inserts, try 
> to recompile Bacula with batch-inserts enabled.

Is that where it inserts everything to a temporary table then copies the
entire table into the "live" one?


James.
-- 
James Cort

IT Manager
U4EA Technologies Ltd.

-- 
U4EA Technologies
http://www.u4eatech.com




Re: [Bacula-users] FW: Re: Large maildir backup

2008-11-27 Thread James Cort
Kevin Keane wrote:
> You are using a very old version of bacula! Maybe you can find a version 
> for your Linux distribution that is more current? I believe 2.4.3 is the 
> current one.
> 
> Boris Kunstleben onOffice Software GmbH wrote:
>> Hi,
>>
>> know i got all the necessary Information (bacula-director Version 1.38.11-8):
>>   
> 

That version number is, by an amazing coincidence, the exact patchlevel
assigned by Debian to their build in current Debian Stable.  2.4.3 is
available in Debian Backports and seems pretty stable to me.


-- 
James Cort

IT Manager
U4EA Technologies Ltd.

-- 
U4EA Technologies
http://www.u4eatech.com




Re: [Bacula-users] Backup of in one directory with 800.000 files

2008-11-27 Thread James Cort
Tobias Bartel wrote:
> Hello,
> 
> i am tasked to set up daily full backups of our entire fax communication
> and they are all stored in one single director ;). There are about
> 800.000 files in that directory what makes accessing that directory
> extremely slow. The target device is a LTO3 tape drive with an 8 slots
> changer.
> 
> With my current configuration Bacula needs ~48h to make a complete
> backup which kinda conflicts with the requirement of doing dailys.
> 
> Does anybody have any suggestions on how i could speed things up? 

Even with 800,000 files, that sounds very slow.  How much data is
involved, how is it stored and how fast is your database server?


-- 
James Cort

IT Manager
U4EA Technologies Ltd.

-- 
U4EA Technologies
http://www.u4eatech.com




Re: [Bacula-users] Backing up a VMware Server on 64-bit Linux

2008-12-02 Thread James Cort
>>>> separately, especially since they all won't be powered up at all the 
>>>> times.
>>>> This is mostly about occasionally backing up the whole set of system 
>>>> files
>>>> in all the virtual machines, there would be only minimal amount of any 
>>>> user
>>>> data, since it would be actually a workstation with 2-3 different virtual
>>>> machines with different operating systems each, and users should keep the
>>>> actual data elsewhere on a "real" server.
>>>>
>>>>
>>>> Then, this is more like a Bacula thing: are there any "big" issues 
>>>> specific
>>>> to current Bacula clients in 64-bit RHEL5/CentOS5 systems running on
>>>> an Intel CPU (Core 2 etc, but not Xeon)?
>>>> Should I use i386 client in such an environment, or x86_64 which has a
>>>> remark "AMD architecture" on Sourceforge downloads?
>>>>
>>>>
>>>> Regards,
>>>> Timo
>>>>
>>>>   

-- 
James Cort

IT Manager
U4EA Technologies Ltd.

-- 
U4EA Technologies
http://www.u4eatech.com




Re: [Bacula-users] Add forums to the main page?

2008-12-17 Thread James Cort
Sven-Hendrik Haase wrote:
> Forum spam shouldn't be a problem if a board system like PHPBB3 is used
> since it ships with a strong and configurable captcha. The PHPBB3 forums
> I run have so far not have had any problem at all with automated bots,
> unlike these mailings lists, it seems.

I must have missed something.  I keep a regular eye on a number of web
based forums and they've all seen a dramatic increase in spam in the
last few weeks.

This mailing list, on the other hand, (touch-wood) remains remarkably
spam-free.

> Other advantages I see compared to using mailing lists only:
> 1) Improved overview since it is visually more pleasing and more
> sub-categories can be used.

IME sub-categories don't work terribly well, either because people post
to the wrong subcategory (possibly they don't really understand the
system well enough to post to the right subcategory in the first place)
or because the people who can help are only browsing a few
subcategories.  They might work if you're dealing with a large system
(such as a full-blown Linux distribution) but I don't think they'd be
particularly helpful to a single application.

I think forums can work if:

- They are the first point of contact and always have been.

OR

- They are nicely integrated with a plain mailing list - I believe the
Gentoo forums do something like this.  An email reply automatically
appears on the forum, a forum reply is automatically sent via email.

In any case, you then need to manage a web-based forum, most of which
have a shockingly poor security track record.

> 2) It's easier to track complex topics.

As others have said, this is not a problem in any half-decent mail
client.  Even Outlook can be configured properly, though it may take
some work to do so - and automated mail filtering really isn't difficult
with any modern email client.

Sourceforge's search facility is truly abysmal, but gmane archives this
mailing list and has a pretty effective search facility.  This is
mentioned on the mailing lists page on the bacula website but it's right
at the bottom.  Perhaps it could be moved higher up the page?

-- 
James Cort

IT Manager
U4EA Technologies Ltd.

-- 
U4EA Technologies
http://www.u4eatech.com




Re: [Bacula-users] How to handle long, time consuming backups

2008-12-30 Thread James Cort
Maarten Hoogveld wrote:
> Hello list,
> 
> Does anyone have an idea how to solve this efficiently or have any
> experience with this type of problem?

Yes, with exactly this type of problem.  I have 200GB of mostly-static
data spread fairly evenly over two sites.

The problem isn't just backup - how do you propose to restore 20GB?

My solution is to break the backup into two parts:

Part 1: Sync.  Use rsync to get a copy of the data from the server which
needs backing up to some location closer to your storage daemon.  Works
beautifully with Linux; rsync even supports ACLs.  You may need to do 
some fiddling to make it work with Windows.  Provided it's data rather
than operating system that needs the backup, I can't see why it can't
work.  Driven by a pre-backup script.

The first time this is done, your backup takes a while.  Subsequent
syncs, OTOH, are much less painful.

Part 2: Spool to storage (be it tape, disk etc).  Because you're going
over the LAN here, you get LAN performance.

Restore:  May be just as easy to pull the data from bacula, put it on a
hard disk locally and courier it to the remote site.

-- 
James Cort

IT Manager
U4EA Technologies Ltd.

-- 
U4EA Technologies
http://www.u4eatech.com




Re: [Bacula-users] bacula binaries for mac?

2009-01-02 Thread James Cort
Hemant Shah wrote:
> Folks,
> 
>   Is there a compiled version of bacula for Mac available anywhere?
>   I have g4, g5 and intel Macs. I can probably compile bacula on powerpc mac 
> because I have xcode installed on one of the systems. I would have to install 
> on xcode on my sons laptop (intel) which I would rather not do.

ICBW but ISTR that xcode on powerpc can be persuaded to create universal
binaries quite happily:

http://developer.apple.com/macosx/adoptinguniversalbinaries.html

> Is bat available on Mac? Any other GUI interface available for Mac?


-- 
James Cort

IT Manager
U4EA Technologies Ltd.

-- 
U4EA Technologies
http://www.u4eatech.com




[Bacula-users] Have I got it wrong or should this be a feature request?

2009-02-04 Thread James Cort
Dear All,

I've been spending some time migrating jobs from an obsolete tape drive
which is connected to an SD on one server a few hundred miles away to a
rather newer tape unit connected to an SD on another server a few metres
away.

As far as I can gather, under 2.4.2 it isn't possible to directly
migrate data reading from one SD and writing to another.  Let me give
you an example of how I've worked around it, calling them SD1 and SD2.

A migration job does the following:
- Read the data from SD1
- Write it to a temporary staging pool (a file rather than a tape)
managed by SD1

Once that's complete, copy the files which make up the staging pool onto
the system where SD2 resides and update the database to reflect their
location in SD2.

Run another migration job which:

- Reads the data from the staging pool (which we've now copied to SD2)
- Writes it out to the tape managed by SD2.


Phew.

This leaves me with two questions:

1.  Is this the only realistic way to deal with the problem I originally
described?

2.  Am I the only one to have encountered such a problem?  (I'm not sure
 there's much point in submitting a feature request if I am because it
probably won't affect me again anyway).


James.
-- 
James Cort

IT Manager
U4EA Technologies Ltd.

-- 
U4EA Technologies
http://www.u4eatech.com




Re: [Bacula-users] Bacula and databases ...

2009-02-05 Thread James Cort
John Drescher wrote:
>> Hi every:
>> I've been using Bacula for a while and now I have a big question: it's 
>> possible to save the database data? I mean like an SQL query or a dump? Any 
>> plugins for do that?
>>
> I do it the same way bacula does it for the catalog. mysql or
> postgresql dump utilities. This is good if the database can be dumped
> in a few minutes. For example the 27GB database for my bacula catalog
> dumps in less than 10 minutes (actually the whole job finishes in that
> time).

Have you ever tried restoring a postgres dump?  I've spent some time trying
to sort mine out, because the restore and re-index time was 3-4 hours and
the only solution to that was four times as many disks.

The solution I'm using right now is to take an LVM snapshot of the
underlying filesystem the database lives on and then back that up.  In
theory at least (and I've asked on the Postgres IRC channel) that should
be fine - though the database will have to do crash recovery when it
restarts.


James.

-- 
James Cort

IT Manager
U4EA Technologies Ltd.

-- 
U4EA Technologies
http://www.u4eatech.com

