[Bacula-users] Tapes in error status

2007-08-07 Thread Juan Asensio Sánchez
Hi

When an error occurs during a backup, the tape being used gets the
status "Error". How can I change the status so I can use that tape
again without having to delete it and add it back? If I manually set
the status to "Append", the job runs but I get an error saying that the
number of files on the tape does not equal the number of files in the
database.

Thanks.

-
This SF.net email is sponsored by: Splunk Inc.
Still grepping through log files to find problems?  Stop.
Now Search log events and configuration files using AJAX and a browser.
Download your FREE copy of Splunk now >>  http://get.splunk.com/
___
Bacula-users mailing list
Bacula-users@lists.sourceforge.net
https://lists.sourceforge.net/lists/listinfo/bacula-users


Re: [Bacula-users] Tapes in error status

2007-08-07 Thread Troy Daniels
Hi,

Juan Asensio Sánchez wrote:
> Hi
> 
> When an error occurs during a backup, the tape being used gets the
> status "Error". How can I change the status so I can use that tape
> again without having to delete it and add it back? If I manually set
> the status to "Append", the job runs but I get an error saying that the
> number of files on the tape does not equal the number of files in the
> database.
> 

If you just want to reuse the tape you can mark it as used or full 
instead and it will be recycled as part of your normal tape rotation.

If you want to reuse it immediately you can use the 'prune' command to 
prune the volume's files/jobs from the catalog - this should allow you 
to re-use it immediately. (But will result in loss of whatever data was 
successfully backed up to it!)

If you want to continue using the tape you could try updating your 
database to set the number of files to the value Bacula finds on the 
tape. As I understand it this can have very mixed results, depending on 
the state of the data on the tape and should only be done if you know 
what you are doing.

Someone else will need to provide specifics on how to do this, however, 
as I've never performed this particular task myself.
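For reference, the first two suggestions look roughly like this in bconsole (the volume name is illustrative, and the exact prompts vary between Bacula versions):

```
* update volume=Tape-0042
    ... choose "Volume Status" from the parameter menu and set it to Used ...
* prune volume=Tape-0042
    ... prunes the jobs/files recorded for that volume from the catalog ...
```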

Cheers,


Troy Daniels.




[Bacula-users] clarification: VolumeToCatalog and DiskToCatalog verify jobs

2007-08-07 Thread Ralf Gross
Hi,

I need a clarification on how a VolumeToCatalog verify job works. Until now I
thought this type of verify would read the attributes from tape (Volume) and
compare them with the attributes in the db (Catalog).

But I see high network traffic between bacula-dir and bacula-fd on the client.
The level 200 debug output during a VolumeToCatalog job on the client shows
that the attributes are read from disk.

bacula-fd:

VU0EM003: job.c:232 dird: 2000 OK verify
VU0EM003: job.c:1669 VolSessId=1 VolsessT=1186423732 SF=0 EF=0
VU0EM003: job.c:1670 JobId=429 vol=DummyVolume
VU0EM003: job.c:1677 >stored: read open session = DummyVolume 1 1186423732 0 0 
0 0
VU0EM003: job.c:1683 bfiledstored: read data 1
VU0EM003: job.c:1745 3000 OK data

VU0EM003: verify_vol.c:102 Got hdr: FilInx=1 Stream=1.
VU0EM003: verify_vol.c:115 Got stream data, len=77
VU0EM003: verify_vol.c:149 Got Attr: FilInx=1 type=3
VU0EM003: verify_vol.c:168 Attr=GgB MY5T Int B A A A QVg BAA CQ BGs+ze BF3Inr 
BGNyXj A A C
VU0EM003: verify_vol.c:197 send ATTR inx=1 fname=/bin/umount
VU0EM003: verify_vol.c:207 bfiled>bdird: attribs len=84: msg=1 1 pinsug5 
/bin/umount


<---
http://www.bacula.org/rel-manual/Configuring_Director.html#JobResource
VolumeToCatalog
   This level causes Bacula to read the file attribute data
written to the Volume from the last Job. The file attribute data are compared
to the values saved in the Catalog database and any differences are reported.
This is similar to the Catalog level except that instead of comparing the disk
file attributes to the catalog database, the attribute data written to the
Volume is read and compared to the catalog database. Although the attribute
data including the signatures (MD5 or SHA1) are compared, the actual file data
is not compared (it is not in the catalog).

DiskToCatalog 
   This level causes Bacula to read the files as they currently are
on disk, and to compare the current file attributes with the attributes saved
in the catalog from the last backup for the job specified on the VerifyJob
directive. This level differs from the Catalog level described above by the
fact that it doesn't compare against a previous Verify job but against a
previous backup. When you run this level, you must supply the verify options on
your Include statements. Those options determine what attribute fields are
compared.
This command can be very useful if you have disk problems because it will
compare the current state of your disk against the last successful backup,
which may be several jobs. 
--->


If VolumeToCatalog reads the attributes from disk, what is the difference from
DiskToCatalog?

Ralf



[Bacula-users] Getting Concurrent Jobs Running

2007-08-07 Thread Andreas Kopecki
Hi!

I tried to get concurrent jobs running, but somehow I couldn't get it going, 
although I double-checked every concurrency option available.

I use Bacula 2.0.3, and the director reports the following concurrency settings:

Storage: name=GRAU MaxJobs=5
Director: name=hadrian-dir MaxJobs=5
client: name=visper-fd MaxJobs=20

And several jobs derived from a single JobDef
Job: name=VISPER-XXX.Backup-Full JobType=66 level=Full MaxJobs=4 Spool=1

If scheduled jobs are running, they will end up as:
879 FullVISPER-XXX.Backup-Full.xxx is running
880 FullVISPER-YYY.Backup-Full.yyy is waiting on max Storage jobs

Manually started jobs are listed as
889 FullVISPER-ZZZ.Backup-Full.zzz is waiting execution

Did I forget something? Does anybody have a clue why the jobs are not running 
concurrently?

Thanks for any help,

Andreas



[Bacula-users] Fwd: bacula-fd

2007-08-07 Thread John Drescher
-- Forwarded message --
From: John Drescher <[EMAIL PROTECTED]>
Date: Aug 7, 2007 4:46 AM
Subject: Re: [Bacula-users] bacula-fd
To: tanveer haider <[EMAIL PROTECTED]>



On 8/7/07, tanveer haider <[EMAIL PROTECTED]> wrote:
>
> I am receiving error while make, the file is attached that contain the
> make result.
> thanks
> tanveer


The problem is:

make[1]: ar: Command not found

Although I am no Solaris expert, I believe
http://www.mail-archive.com/[EMAIL PROTECTED]/msg09581.html may
help.

John





-- 
John M. Drescher


Re: [Bacula-users] clarification: VolumeToCatalog and DiskToCatalog verify jobs

2007-08-07 Thread Troy Daniels
Hi,

Ralf Gross wrote:
> Hi,
> 
> I need a clarification on how a VolumeToCatalog verify job works. Until now I
> thought this type of verify would read the attributes from tape (Volume) and
> compares it with the attributes in the db (Catalog).
> 

Technically true, but I believe it's the file daemon that does the 
comparison (it has all the decryption/MD5/checksum code, etc.) - 
so the data is fed to the client to compare.

I run my VolumeToCatalog verify jobs with my backup server specified as 
the client to minimise network impact for this reason.

> But I see high network traffic between bacula-dir and bacula-fd on the client.
> The level 200 debug output during a VolumeToCatalog job on the client shows
> that the attributes are read from disk.
> 

I'm not sure where in the following it specifically says it's getting it 
from the disk; I read it as reporting the attributes of the file on 
tape (which has a path that also happens to exist on the disk...).

Running something like 'lsof' during a verify would confirm this for 
sure; however, as I mentioned above, my verifies run through my backup 
server's FD without issue, and it certainly doesn't have the 400+ GB of 
files that exist on the fileserver.

> bacula-fd:
> 
> VU0EM003: job.c:232  VU0EM003: job.c:248 Executing verify command.
> VU0EM003: pythonlib.c:237 No startup module.
> VU0EM003: job.c:1513 bfiled>dird: 2000 OK verify
> VU0EM003: job.c:1669 VolSessId=1 VolsessT=1186423732 SF=0 EF=0
> VU0EM003: job.c:1670 JobId=429 vol=DummyVolume
> VU0EM003: job.c:1677 >stored: read open session = DummyVolume 1 1186423732 0 
> 0 0 0
> VU0EM003: job.c:1683 bfiled VU0EM003: job.c:1688 bfiled: got Ticket=1
> VU0EM003: job.c:1745 3000 OK bootstrap
> VU0EM003: job.c:1702 >stored: read data 1
> VU0EM003: job.c:1745 3000 OK data
> 
> VU0EM003: verify_vol.c:102 Got hdr: FilInx=1 Stream=1.
> VU0EM003: verify_vol.c:115 Got stream data, len=77
> VU0EM003: verify_vol.c:149 Got Attr: FilInx=1 type=3
> VU0EM003: verify_vol.c:168 Attr=GgB MY5T Int B A A A QVg BAA CQ BGs+ze BF3Inr 
> BGNyXj A A C
> VU0EM003: verify_vol.c:197 send ATTR inx=1 fname=/bin/umount
> VU0EM003: verify_vol.c:207 bfiled>bdird: attribs len=84: msg=1 1 pinsug5 
> /bin/umount
> 
> 
> <---
> http://www.bacula.org/rel-manual/Configuring_Director.html#JobResource
> VolumeToCatalog
>This level causes Bacula to read the file attribute data
> written to the Volume from the last Job. The file attribute data are compared
> to the values saved in the Catalog database and any differences are reported.
> This is similar to the Catalog level except that instead of comparing the disk
> file attributes to the catalog database, the attribute data written to the
> Volume is read and compared to the catalog database. Although the attribute
> data including the signatures (MD5 or SHA1) are compared, the actual file data
> is not compared (it is not in the catalog).
> 
> DiskToCatalog 
>This level causes Bacula to read the files as they currently are
> on disk, and to compare the current file attributes with the attributes saved
> in the catalog from the last backup for the job specified on the VerifyJob
> directive. This level differs from the Catalog level described above by the
> fact that it doesn't compare against a previous Verify job but against a
> previous backup. When you run this level, you must supply the verify options 
> on
> your Include statements. Those options determine what attribute fields are
> compared.
> This command can be very useful if you have disk problems because it will
> compare the current state of your disk against the last successful backup,
> which may be several jobs. 
> --->
> 

I guess a manual update might be in order to highlight this behaviour.

Cheers,


Troy Daniels.



Re: [Bacula-users] clarification: VolumeToCatalog and DiskToCatalog verify jobs

2007-08-07 Thread Ralf Gross
Troy Daniels wrote:
> > I need a clarification on how a VolumeToCatalog verify job works. Until now 
> > I
> > thought this type of verify would read the attributes from tape (Volume) and
> > compares it with the attributes in the db (Catalog).
> 
> Technically true, but I believe it's the file daemon that does the 
> comparison (it has all the decryption/md5/checksum code, etc for them) - 
> so the data is fed to the client to compare.

Ok, that makes sense now. I found verify.c and verify_vol.c in
src/filed.
 
> I run my VolumeToCatalog verify jobs with my backup server specified as 
> the client to minimise Network impact for this reason.

So the Client option doesn't have to be the same as the client name
that was used for the backup? I've never thought about this. 
 
> > But I see high network traffic between bacula-dir and bacula-fd on the 
> > client.
> > The level 200 debug output during a VolumeToCatalog job on the client shows
> > that the attributes are read from disk.
> > 
> 
> Not sure where in the following it's specifically saying it's getting it 
> from the disk, I read it as reporting the attributes of the file on 
> tape. (which has a path that also happens to be on the disk...)

So I interpreted the traffic to the fd wrongly, given that the
verify code is there.
 
> Running something like 'lsof' during a verify will confirm this for 
> sure, however as I mentioned above my verifies run thru my backup 
> servers FD without issue, and it sure doesn't have the 400+Gb of files 
> that exist on the fileserver on it.
> 
> > [debug output and manual excerpt snipped]
> 
> I guess a manual update might be in order to highlight this behaviour.

Yes, that would make things clearer.

Now I have to find out what went wrong with the two verify jobs of
my full backups from last weekend (mail from yesterday). 

The earlier inc/diff verify jobs were OK, and so was the verify job of
last night's inc backup. Is there a way to rerun the verify job of
the full backup (Sunday) after the next verify job (last night)?

Ralf


Re: [Bacula-users] Getting Concurrent Jobs Running

2007-08-07 Thread John Drescher
On 8/7/07, Andreas Kopecki <[EMAIL PROTECTED]> wrote:
>
> Hi!
>
> I tried to get concurrent jobs running, but somehow I couldn't get it
> going,
> although I double checked every concurrency option available.
>


Do you have Maximum Concurrent Jobs set in at least three places in
bacula-dir.conf? It goes in the Director resource as well as in the
Storage and each Client resource of bacula-dir.conf, and also in
bacula-fd.conf on the clients and in bacula-sd.conf.
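As a sketch, here is where the directive goes (resource names taken from Andreas's status output, values illustrative; each resource of course needs its other usual directives too):

```conf
# bacula-dir.conf
Director {
  Name = hadrian-dir
  Maximum Concurrent Jobs = 20
}
Storage {
  Name = GRAU
  Maximum Concurrent Jobs = 5
}
Client {
  Name = visper-fd
  Maximum Concurrent Jobs = 20
}

# bacula-sd.conf, on the storage server
Storage {
  Maximum Concurrent Jobs = 20
}

# bacula-fd.conf, on each client
FileDaemon {
  Maximum Concurrent Jobs = 20
}
```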

John


[Bacula-users] bacula error - poller_output.MYI

2007-08-07 Thread Monstad


Hi there,

For the past day Bacula has been unresponsive, e.g. not mounting tapes, not
purging records, etc.

Today I got this mail: 




cacti.poller_output
error: Can't find file: 'poller_output.MYI' (errno: 2)
cacti.poller_time
warning: Table is marked as crashed
error: Found key at page 1024 that points to record outside datafile
error: Corrupt
syslog.logs
warning: Table is marked as crashed
error: Size of indexfile is: 5218078720  Should be: 5218079744
error: Corrupt

 Improperly closed tables are also reported if clients are accessing
 the tables *now*. A list of current connections is below.

+----+------------------+-----------+----+---------+------+-------+------------------+
| Id | User             | Host      | db | Command | Time | State | Info             |
+----+------------------+-----------+----+---------+------+-------+------------------+
| 52 | debian-sys-maint | localhost |    | Query   | 0    |       | show processlist |
+----+------------------+-----------+----+---------+------+-------+------------------+
Uptime: 4  Threads: 1  Questions: 379  Slow queries: 0  Opens: 255  Flush
tables: 1  Open tables: 64  Queries per second avg: 94.750


Can anyone suggest how to repair this? I'm afraid I'm relatively new to
bacula (and I apologise if this is a documented problem).

I would be extremely grateful for any advice,

Kris


-- 
View this message in context: 
http://www.nabble.com/bacula-error---poller_output.MYI-tf4229177.html#a12031263
Sent from the Bacula - Users mailing list archive at Nabble.com.




Re: [Bacula-users] Long term backup with bacula

2007-08-07 Thread Mike Follwerk - T²BF
Radek Hladik wrote:
> Till now I came up with this ideas:
> * Backup catalog and bootstrap files with the data
> * Disable jobs/files/volumes autopruning
> * maybe modify some livecd to contain current version of bacula or at
> least bscan (do not know, maybe such a CD exists)
> * Create SQL query to list which jobs is on which tape and print it on
> the paper with the tapes
> 
> Do you think this is enough or am I overseeing something?

Since no one has answered this yet, I feel free to voice a blind guess:
doesn't bacula have some kind of "archive" flag for volumes for
precisely this reason? I seem to remember something like that from the
documentation.


Regards
Mike Follwerk

-- 

T²BF IT Services GbR
Daniel Blömer, Mike Follwerk, Marcus Teske
Marie-Curie-Str. 1
D-53359 Rheinbach

VAT identification number per § 27a of the
German VAT Act (Umsatzsteuergesetz): DE 238268154

Tel. +492226 / 87 21 40
Fax: +492226 / 87 21 49
http://www.tbf-it.de/



[Bacula-users] fileset

2007-08-07 Thread Alexandre Chapellon
Hello,

I am running Bacula at my site (about 100 servers with different OSes),
and am quite happy with it.
Recently I changed my filesets to speed up the backup and avoid
compression problems with some files.
What I want is:

 - not to compress already compressed files (eg: jpg...)
 - not to save at all "useless" files (eg: .ldb access lock files...)

So I wrote the following fileset:


FileSet {
  Name = "DataToSave"
  Include {
Options {  ## Normal compression
  signature = MD5  ## for
  compression = GZIP   ## normal files
}
Options {
  signature = MD5  ## list of
  compression = GZIP1  ## extensions
  wildfile = "*.mpg"   ## I would
  wildfile = "*.mpeg"  ## like not to
  wildfile = "*.pdf"   ## be compressed
  wildfile = "*.gz"## ...
  wildfile = "*.tgz"   ## In fact
  wildfile = "*.zip"   ## I use the
  wildfile = "*.rar"   ## lower compression
  wildfile = "*.mdb"   ## level, because
  wildfile = "*.avi"   ## I couldn't
  wildfile = "*.flv"   ## find out
  wildfile = "*.swf"   ## how to
  wildfile = "*.gif"   ## completely disable
  wildfile = "*.png"   ## software compression.
  wildfile = "*.jpg"   ##
  wildfile = "*.jpeg"  ##
}
Options {
  exclude = yes
  wildfile = "*.LDF"   ## list of
  wildfile = "*.MDF"   ## extensions I want to
  wildfile = "*.ldb"   ## exclude completely
}
  File = "d:/PathToSave"   ## The place I want to backup
  }
}


Can anyone take a look at it and tell me if it will behave as
expected?

thanks guys! (and girls :p)


Re: [Bacula-users] Long term backup with bacula

2007-08-07 Thread Radek Hladik
Mike Follwerk - T²BF wrote:
> Radek Hladik schrieb:
>> Till now I came up with this ideas:
>> * Backup catalog and bootstrap files with the data
>> * Disable jobs/files/volumes autopruning
>> * maybe modify some livecd to contain current version of bacula or at
>> least bscan (do not know, maybe such a CD exists)
>> * Create SQL query to list which jobs is on which tape and print it on
>> the paper with the tapes
>>
>> Do you think this is enough or am I overseeing something?
> 
> since no one has answered this yet, I feel free to voice a blind guess:
> doesn't bacula have some kind of "archive" flag for volumes for
> precisely this reason? I seem to remember something like that from the
> documentation.
> 
> 
> Regards
> Mike Follwerk
> 

I've noticed this too and searched the documentation for "archive"; the 
only relevant thing I found:

Archive
 An Archive operation is done after a Save, and it consists of 
removing the Volumes on which data is saved from active use. These 
Volumes are marked as Archived, and may no longer be used to save files. 
All the files contained on an Archived Volume are removed from the 
Catalog. NOT YET IMPLEMENTED.

It is not yet implemented, and it seems like only half of what I need. I 
do not want to reuse these media unless I manually mark them as 
available, but I would like to keep the file information in the catalog 
so that in case of recovery I do not need to run bscan.
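For the quoted idea of listing which jobs are on which tape, a query along these lines should work against the standard Bacula catalog schema (table and column names may differ between catalog versions - please verify against yours):

```sql
-- List each volume together with the jobs written to it
SELECT Media.VolumeName, Job.JobId, Job.Name, Job.Level, Job.StartTime
  FROM Job
  JOIN JobMedia ON JobMedia.JobId = Job.JobId
  JOIN Media    ON Media.MediaId  = JobMedia.MediaId
 ORDER BY Media.VolumeName, Job.StartTime;
```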

Radek



Re: [Bacula-users] Getting Concurrent Jobs Running

2007-08-07 Thread Andreas Kopecki
Hi John!

On Tuesday 07 August 2007, John Drescher wrote:
> On 8/7/07, Andreas Kopecki <[EMAIL PROTECTED]> wrote:
> > I tried to get concurrent jobs running, but somehow I couldn't get it
> > going,
> > although I double checked every concurrency option available.
>
> Do you have the Maximum Concurrent Jobs in at least 3 places in the
> bacula-dir.conf? It goes in the main bacula-dir.conf settings as well as
> the Storage and each client in that bacula-dir.conf file as well as the
> bacula-fd.conf on the clients and bacula-sd.conf too.

The Maximum Concurrent Jobs in the sd and the client conf is set to 20... so 
there shouldn't be a limitation there either. Do you have any other clues?

Thank you,
   Andreas




Re: [Bacula-users] fileset

2007-08-07 Thread Julien
Hi,

I would change the order because of the "first match" rule:

FileSet {
  Name = "DataToSave"
  Include {
Options {
  exclude = yes 
  wildfile = "*.LDF"   ## list of
  wildfile = "*.MDF"   ## extensions I want to
  wildfile = "*.ldb"   ## exclude completely
}
Options {
  signature = MD5  ## list of
  compression = GZIP1  ## extensions
  wildfile = "*.mpg"   ## I would
  wildfile = "*.mpeg"  ## like not to
  wildfile = "*.pdf"   ## be compressed
  wildfile = "*.gz"## ...
  wildfile = "*.tgz"   ## In fact
  wildfile = "*.zip"   ## I use the 
  wildfile = "*.rar"   ## lower compression
  wildfile = "*.mdb"   ## level, because
  wildfile = "*.avi"   ## I couldn't
  wildfile = "*.flv"   ## find out
  wildfile = "*.swf"   ## how to
  wildfile = "*.gif"   ## completely disable
  wildfile = "*.png"   ## software compression.
  wildfile = "*.jpg"   ##
  wildfile = "*.jpeg"  ##
}
Options {  ## Normal compression
  signature = MD5  ## for
  compression = GZIP   ## normal files
}
  File = "d:/PathToSave"   ## The place I want to backup
  }
}

(to be verified...)


On Tue, 2007-08-07 at 11:48 +0200, Alexandre Chapellon wrote:
> FileSet {
>   Name = "DataToSave"
>   Include {
> Options {  ## Normal compression
>   signature = MD5  ## for
>   compression = GZIP   ## normals files
> }
> Options {
>   signature = MD5  ## list of
>   compression = GZIP1  ## extensions
>   wildfile = "*.mpg"   ## I would
>   wildfile = "*.mpeg"  ## like not to
>   wildfile = "*.pdf"   ## be compressed
>   wildfile = "*.gz"## ...
>   wildfile = "*.tgz"   ## In fact
>   wildfile = "*.zip"   ## I use the 
>   wildfile = "*.rar"   ## lower compression
>   wildfile = "*.mdb"   ## level, because
>   wildfile = "*.avi"   ## I couldn't
>   wildfile = "*.flv"   ## find out
>   wildfile = "*.swf"   ## how to
>   wildfile = "*.gif"   ## completly disable
>   wildfile = "*.png"   ## software compression.
>   wildfile = "*.jpg"   ##
>   wildfile = "*.jpeg"  ##
> }
> Options {
>   exclude = yes 
>   wildfile = "*.LDF"   ## list of
>   wildfile = "*.MDF"   ## extensions I want to
>   wildfile = "*.ldb"   ## exclude completly
> }
>   File = "d:/PathToSave"   ## The place I want to backup
>   }
> } 




Re: [Bacula-users] exclusion question

2007-08-07 Thread Arno Lehmann
Hi,

07.08.2007 04:02, Mark Nienberg wrote:
> Mark Nienberg wrote:
> 
>> If not, is it possible to simulate it with an option something like this:
>>
>> Include {
>>  Options {
>>  Exclude = yes
>>  }
>>  File = \> }
>>
>> where the "program.to.run.on.client" would search for a particular file name 
>> and 
>> create a list of directories where it is present.

That is possible. Bacula itself does not directly support "marker 
files" to prevent or force backup of certain directories or files, by 
the way (which is a conscious design decision, and a good one, IMO).

> A closer reading of the docs makes me think that the 
> "program.to.run.on.client" would 
> have to be run from a cron job or maybe a "Client Run Before Job" and then
> 
> File = \ 
> How about that?

Also possible.

But note that the *exact* syntax to use is a little different from your 
first example :-)

The pipe character "|" is used to indicate that the output of the given 
program should be included, so your first example should have been 
\|... plus additional backslashes for quoting purposes. All very 
complicated ;-) but explained with examples in the FileSet chapter of 
the manual.
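A sketch of what that could look like, assuming the standard FileSet pipe syntax and an invented script path (check the FileSet chapter of the manual for the exact quoting rules):

```conf
Include {
  Options {
    Exclude = yes
  }
  # "\\|" runs the program on the client; its output (one path per
  # line) becomes the file list. The script path is hypothetical.
  File = "\\|/usr/local/bin/find-marker-dirs"
}
```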


> Mark
> 
> 

-- 
Arno Lehmann
IT-Service Lehmann
www.its-lehmann.de



Re: [Bacula-users] Tapes in error status

2007-08-07 Thread Arno Lehmann
Hi,

07.08.2007 10:09, Troy Daniels wrote:
> Hi,
> 
> Juan Asensio Sánchez wrote:
>> Hi
>>
>> When some error ocurrs when doing a backup, the tape used get the
>> status "Error". How can i change the status so i can use again that
>> tape without needing to delete it and add it again? If i manually set
>> the status "Append" the job runs but i get an error saying that the
>> number of file in the tape is not equal to the number of files in
>> database.
>>
> 
> If you just want to reuse the tape you can mark it as used or full 
> instead and it will be recycled as part of your normal tape rotation.

I'd recommend "Used", not "Full", so that genuinely full tapes stay 
identifiable - for example to estimate the usual capacity of your tapes.

> If you want to reuse it immediately you can use the 'prune' command to 
> prune the volume's files/jobs from the catalog - this should allow you 
> to re-use it immediately. (But will result in loss of whatever data was 
> successfully backed up to it!)

The volume needs to be out of "Error" state before pruning, IIRC.

> If you want to continue using the tape you could try updating your 
> database to set the number of files to the value Bacula finds on the 
> tape. As I understand it this can have very mixed results, depending on 
> the state of the data on the tape and should only be done if you know 
> what you are doing.

Yes. It *is* possible to overwrite perfectly good data by setting the 
number of files incorrectly. But that also depends on your drive setup 
- in most cases, the risk should be very low.

> Someone else will need to provide on specifics on how to do this however 
> as I've never performed this particular task myself.

The simplest way is to run 'update volume=' in bconsole and work 
through the menu; one of the options sets the number of files on the 
volume.

Arno

> Cheers,
> 
> 
> Troy Daniels.
> 
> 

-- 
Arno Lehmann
IT-Service Lehmann
www.its-lehmann.de



Re: [Bacula-users] Bacula-usersFedora 7 rpms

2007-08-07 Thread Alan Brown
On Mon, 6 Aug 2007, Alan Brown wrote:

>> Actually, it's now in both 7 and fc6 extras too!  W00t!
>
> There's an RFE (Feature request) in with Redhat for inclusion of Bacula in
> RHEL4 and 5. I filed that in January.

Oops, July 2006, last updated in January.





Re: [Bacula-users] fileset

2007-08-07 Thread Arno Lehmann
Hi,

07.08.2007 11:48, Alexandre Chapellon wrote:
> Hello,
> 
> I am running bacula on mysite (about 100 servers, with differents OS),
> and am quite happy about it.
> Recently I changed my filesets to speedup the backup and avoid
> compression problem with some files.
> What I want is:
> 
>  - not to compress already compressed files (eg: jpg...)
>  - not to save at all "useless" files (eg: .ldb access lock files...)
> 
> So I wrote the following fileset:

Oh, filesets with exclusions and wildcards... my favourite waste of 
time in Bacula ;-)

Just one hint for now: the Options section that matches *first* is 
the one that gets applied, so list the most specific Options blocks 
first.

In your case, the first option block could be the one defining the 
excludes, then the one defining the files not to get compressed - 
simply omit the compression keyword here - and lastly, as a catch-all 
for the remaining files, set your defaults including compression.

Also note that a fileset change will trigger a new full backup, so 
before playing around with the options, set "Ignore Fileset Changes", 
or use the "estimate listing" command to verify your fileset works as 
it should.
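Applying that ordering to the fileset quoted in the original mail, the skeleton could look roughly like this (wildcard lists abbreviated; a sketch of the ordering only, not a tested configuration):

```
FileSet {
  Name = "DataToSave"
  Include {
    Options {                 ## most specific first: complete excludes
      exclude = yes
      wildfile = "*.LDF"
      wildfile = "*.ldb"
    }
    Options {                 ## already-compressed files:
      signature = MD5         ## no compression keyword at all
      wildfile = "*.jpg"
      wildfile = "*.zip"
    }
    Options {                 ## catch-all defaults, compression on
      signature = MD5
      compression = GZIP
    }
    File = "d:/PathToSave"
  }
}
```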

Arno

> 
> FileSet {
>   Name = "DataToSave"
>   Include {
> Options {  ## Normal compression
>   signature = MD5  ## for
>   compression = GZIP   ## normals files
> }
> Options {
>   signature = MD5  ## list of
>   compression = GZIP1  ## extensions
>   wildfile = "*.mpg"   ## I would
>   wildfile = "*.mpeg"  ## like not to
>   wildfile = "*.pdf"   ## be compressed
>   wildfile = "*.gz"## ...
>   wildfile = "*.tgz"   ## In fact
>   wildfile = "*.zip"   ## I use the
>   wildfile = "*.rar"   ## lower compression
>   wildfile = "*.mdb"   ## level, because
>   wildfile = "*.avi"   ## I couldn't
>   wildfile = "*.flv"   ## find out
>   wildfile = "*.swf"   ## how to
>   wildfile = "*.gif"   ## completely disable
>   wildfile = "*.png"   ## software compression.
>   wildfile = "*.jpg"   ##
>   wildfile = "*.jpeg"  ##
> }
> Options {
>   exclude = yes
>   wildfile = "*.LDF"   ## list of
>   wildfile = "*.MDF"   ## extensions I want to
>   wildfile = "*.ldb"   ## exclude completely
> }
>   File = "d:/PathToSave"   ## The place I want to backup
>   }
> }
> 
> 
> Can anyone take a look at it and tell me if it will behave as
> expected?
> 
> thanks guys! (and girls :p)
> 
> 
> 
> 

-- 
Arno Lehmann
IT-Service Lehmann
www.its-lehmann.de



Re: [Bacula-users] Getting Concurrent Jobs Running

2007-08-07 Thread Arno Lehmann
Hi,

07.08.2007 11:54, Andreas Kopecki wrote:
> Hi John!
> 
> On Tuesday 07 August 2007, John Drescher wrote:
>> On 8/7/07, Andreas Kopecki <[EMAIL PROTECTED]> wrote:
>>> I tried to get concurrent jobs running, but somehow I couldn't get it
>>> going,
>>> although I double checked every concurrency option available.
>> Do you have the Maximum Concurrent Jobs in at least 3 places in the
>> bacula-dir.conf? It goes in the main bacula-dir.conf settings as well as
>> the Storage and each client in that bacula-dir.conf file as well as the
>> bacula-fd.conf on the clients and bacula-sd.conf too.
> 
> The Maximum Concurrent Jobs in the sd and the client conf is set to 20... so 
> there shouldn't be a limitation there either. Do you have any other clues?

In the DIR configuration there is not only the overall concurrency 
limit; you also have to set "Maximum Concurrent Jobs" in the storage 
resource. So you need at least:
DIR configuration, global setting
DIR configuration, storage resource
SD configuration, global setting
Optionally, and depending on your needs: the FD's global setting, the 
client-specific setting in the DIR, and the job-specific setting.
Note that there is *no* device-specific concurrency setting in the SD.
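A minimal sketch of where those settings live (resource names and values here are placeholders, not a complete configuration):

```
# bacula-dir.conf
Director {
  Name = mydir-dir              # placeholder name
  Maximum Concurrent Jobs = 20  # global DIR limit
}
Storage {
  Name = mystorage
  Maximum Concurrent Jobs = 5   # per-storage limit, set in the DIR
}

# bacula-sd.conf
Storage {
  Name = mystorage-sd
  Maximum Concurrent Jobs = 20  # global SD limit
}
```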

Arno

> Thank you,
>Andreas
> 
> 

-- 
Arno Lehmann
IT-Service Lehmann
www.its-lehmann.de



Re: [Bacula-users] bacula error - poller_output.MYI

2007-08-07 Thread Arno Lehmann
Hi,

07.08.2007 11:43, Monstad wrote:
> 
> Hi there,
> 
> For the past day bacula has been unresponsive eg: not mounting tapes, not
> purging records etc.
> 
> Today I got this mail: 
> 
> 
> 
> 
> *cacti.poller_output
> error: Can't find file: 'poller_output.MYI' (errno: 2)
> cacti.poller_time
> warning  : Table is marked as crashed
> error: Found key at page 1024 that points to record outside datafile
> error: Corrupt
> syslog.logs
> warning  : Table is marked as crashed
> error: Size of indexfile is: 5218078720  Should be: 5218079744
> error: Corrupt
> 
>  Improperly closed tables are also reported if clients are accessing
>  the tables *now*. A list of current connections is below.
> 
> +----+------------------+-----------+----+---------+------+-------+------------------+
> | Id | User             | Host      | db | Command | Time | State | Info             |
> +----+------------------+-----------+----+---------+------+-------+------------------+
> | 52 | debian-sys-maint | localhost |    | Query   | 0    |       | show processlist |
> +----+------------------+-----------+----+---------+------+-------+------------------+
> Uptime: 4  Threads: 1  Questions: 379  Slow queries: 0  Opens: 255  Flush
> tables: 1  Open tables: 64  Queries per second avg: 94.750*

Your MySQL databases are broken. MySQL writes a log file where it 
reports errors, and the system log might also have valuable 
information. Shut down MySQL and run the related tools to check and 
repair the databases; there is information in the MySQL manual. One of 
the more common causes, in my experience, is the infamous "disk full" 
problem, by the way :-)

As the problem is not directly related to Bacula, I can only recommend 
asking for more detailed help in the MySQL-related mailing lists, 
forums, or newsgroups.

Once you've got the database running again and have checked the 
Bacula-related databases with dbcheck, I'm sure we can help you 
restore your data or get Bacula running again.

Arno

> Can anyone suggest how to repair this? I'm afraid I'm relatively new to
> bacula (and I apologise if this is a documented problem).
> 
> I would be extremely grateful for any advice,
> 
> Kris
> 
> 

-- 
Arno Lehmann
IT-Service Lehmann
www.its-lehmann.de



Re: [Bacula-users] Long term backup with bacula

2007-08-07 Thread Arno Lehmann
Hi,

07.08.2007 11:39, Mike Follwerk - T²BF wrote:
> Radek Hladik schrieb:
>> Till now I came up with this ideas:
>> * Backup catalog and bootstrap files with the data
>> * Disable jobs/files/volumes autopruning
>> * maybe modify some livecd to contain current version of bacula or at
>> least bscan (do not know, maybe such a CD exists)
>> * Create SQL query to list which jobs is on which tape and print it on
>> the paper with the tapes
>>
>> Do you think this is enough, or am I overlooking something?

I didn't actually read your original mail, but going from the subject, 
this sounds good already. Having the job lists on paper is definitely 
helpful sometimes, but I'd mainly make sure you know where the current 
catalog dump is stored. The catalog has all the information you will 
need, readily accessible by Bacula. If you only have your printout, 
you'll probably need considerable time for bscanning; bextracting the 
catalog dump, loading it, and starting Bacula itself might be faster.

> since no one has answered this yet, I feel free to voice a blind guess:
> doesn't bacula have some kind of "archive" flag for volumes for
> precisely this reason? I seem to remember something like that from the
> documentation.

Well, that something includes a "not implemented" remark, unfortunately :-)

But you don't really need that anyway: just make sure your long-term 
volumes are not automatically pruned and you're already halfway there. 
The other half is usually deciding whether you need complete file 
lists for your archives (in which case you probably want separate job 
and client entries for them, so you can keep different retention times 
for normal production backups), or whether it is enough to know that 
the data exists and on which volumes it's stored (then just make sure 
the jobs are not pruned from the catalog).
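A sketch of what such a separate long-term pool could look like (the pool name and durations are invented for illustration):

```
Pool {
  Name = Archive
  Pool Type = Backup
  AutoPrune = no               # never prune these volumes automatically
  Volume Retention = 10 years  # keep catalog records around long-term
  Recycle = no                 # never overwrite archive volumes
}
```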

For real long-term storage, you'll have to find ways to move data to 
new tapes from time to time, keeping the catalog up to date, and so 
on, so that you can still restore when the original tapes can no 
longer be read. Migration might be helpful for this, but that's a 
different story...

Arno


> Regards
> Mike Follwerk
> 

-- 
Arno Lehmann
IT-Service Lehmann
www.its-lehmann.de



Re: [Bacula-users] bacula error - poller_output.MYI

2007-08-07 Thread Frank Sweetser
Monstad wrote:
> 
> Hi there,
> 
> For the past day bacula has been unresponsive eg: not mounting tapes, not
> purging records etc.
> 
> Today I got this mail: 
> 
> 
> 
> 
> *cacti.poller_output
> error: Can't find file: 'poller_output.MYI' (errno: 2)
> cacti.poller_time
> warning  : Table is marked as crashed

This page should get you started on repairing mysql:

http://dev.mysql.com/doc/refman/5.0/en/repair.html

Once that's done, make sure to run a dbcheck to look for any higher level 
inconsistencies as well.
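For MyISAM tables, the linked page boils down to something like the following (the table and path come from the error message above; check the manual for the right options for your MySQL version before running anything):

```
# with the server stopped, check/repair the table files directly:
myisamchk --recover /var/lib/mysql/cacti/poller_output.MYI

# or, with the server running, from the mysql client:
REPAIR TABLE cacti.poller_output;
```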

  - Frank




Re: [Bacula-users] Getting Concurrent Jobs Running

2007-08-07 Thread Andreas Kopecki
Hi Arno!

On Tuesday 07 August 2007, Arno Lehmann wrote:
> 07.08.2007 11:54, Andreas Kopecki wrote:
> > On Tuesday 07 August 2007, John Drescher wrote:
> >> On 8/7/07, Andreas Kopecki <[EMAIL PROTECTED]> wrote:
> >>> I tried to get concurrent jobs running, but somehow I couldn't get it
> >>> going,
> >>> although I double checked every concurrency option available.
> >>
> >> Do you have the Maximum Concurrent Jobs in at least 3 places in the
> >> bacula-dir.conf? It goes in the main bacula-dir.conf settings as well as
> >> the Storage and each client in that bacula-dir.conf file as well as the
> >> bacula-fd.conf on the clients and bacula-sd.conf too.
> >
> > The Maximum Concurrent Jobs in the sd and the client conf is set to 20...
> > so there shouldn't be a limitation there, also. So, do you have any other
> > clues?
>
> In the DIR configuration, there is not only the overall concurrency
> limit. In the storage resource there you have to set the "Maximum
> Concurrent Jobs", too. So you need at least
> DIR configuration, global setting
> DIR configuration, storage resource

I set everything in the configuration files, and the director seems to have 
picked them up (show xxx), as can be seen from my first mail:

Storage: name=GRAU MaxJobs=5
Director: name=hadrian-dir MaxJobs=5
client: name=visper-fd MaxJobs=20

And several jobs derived from a single JobDef
Job: name=VISPER-XXX.Backup-Full JobType=66 level=Full MaxJobs=4 Spool=1

> SD configuration, global setting

is set to 20, dir and sd restarted

> Optionally and depending on your needs: Client global setting,

is set to 20 at the server and client side, fd restarted

> cliant-specific setting in DIR, job-specific setting.
> Note that there is *no* setting for the storage device specific
> setting in the SD.

Is it possible I hit a bug here? Could something else have an impact on 
concurrency?

Best regards,
   Andreas




Re: [Bacula-users] Long term backup with bacula

2007-08-07 Thread Radek Hladik
Hi,

Arno Lehmann napsal(a):
> Hi,
> 
> 07.08.2007 11:39, Mike Follwerk - T²BF wrote:
>> Radek Hladik schrieb:
>>> Till now I came up with these ideas:
>>> * Backup catalog and bootstrap files with the data
>>> * Disable jobs/files/volumes autopruning
>>> * maybe modify some livecd to contain current version of bacula or at
>>> least bscan (do not know, maybe such a CD exists)
>>> * Create SQL query to list which jobs is on which tape and print it on
>>> the paper with the tapes
>>>
>>> Do you think this is enough, or am I overlooking something?
> 
> I didn't actually read your original mail, but going from the subject, 
> this sounds good already. Having the job lists on paper is definitely 
> helpful sometimes, but I'd mainly make sure to know where the current 
> catalog dump is stored, though. The catalog has all the information 
> you will need, readily accessible by Bacula. If you only have your 
> printout, you'll probably need some time for bscanning, but 
> bextracting the catalog dump, loading it and starting Bacula itself 
> might be faster.

I'm planning to keep the catalog backup on the last tape or on some other
medium (flash disk, CD?) with the backups, so at worst only a bscan of the
last tape would be needed. But what I am a little afraid of is: I restore
the catalog and Bacula says: "Wow, such an old catalog, you forgot to
disable XY-type pruning, I am going to delete old files from the catalog" :-)

> 
>> since no one has answered this yet, I feel free to voice a blind guess:
>> doesn't bacula have some kind of "archive" flag for volumes for
>> precisely this reason? I seem to remember something like that from the
>> documentation.
> 
> Well, the something includes a "not implemented" remark, unfortunately :-)

I know, and thus I didn't mention it in my previous email. It still
is not what I'm looking for, as it would delete files from the catalog.

> 
> But you don't really need that, anyway: Just make sure your long term 
> volumes are not automatically pruned and you're already half the way 
> where you want to go... the other half of the way is usually deciding 
> if you need complete file lists for your archives (then you probably 
> want to set up separate job and client entries for these, so you can 
> have different retention times for normal production backups) or if 
> the fact that the data exists and on which volumes it's stored is 
> enough (Then just make sure the jobs are not pruned from the catalog).
> 

> For real long-term storage, you'll have to find ways to move data to 
> new tapes from time to time, probably keeping the catalog up to date, 
> and so on, so that you can restore when the original tapes can't any 
> longer be read. Migration might be helpful for this, but that's a 
> different story...

I am thinking of a year or two, three years at most. After this period
the backups will be redone completely. There may be a need to recycle
really old backups, but that should be done manually.

> 
> Arno
> 
> 
>> Regards
>> Mike Follwerk
>>
> 
Radek




Re: [Bacula-users] Schedule fullbackup every 4 weeks on saturday

2007-08-07 Thread mikee
On Tue, 07 Aug 2007, Benjamin E. Zeller might have said:

> Hi, 
> 
> this is rather urgent, can't figure it out via the docs.
> 
> How can I schedule a fullbackup, that runs every 4 weeks on saturday?
> 
> Schedule {
>   Name = "red"
>   Run = Level=Incremental mon-sat at 22:00
>   Run = Level=Full sun at 22:00
> }
> 
> 
> The 2nd Run= should be the schedule mentioned above.

I have mine scheduled on the first Saturday of each month.

Mike

# When to do the backups: full backup on the first saturday of the month,
#  differential (i.e. incremental since full) every other saturday,
#  and incremental backups other days. The on-line backups run every
#  day. On Thursdays (21 Jul 07) JP uses the tape drive to make his
#  own (Basis Software) backups. On that day only are physical tapes
#  not made.
Schedule {
  Name = Daily-Backup
  Run = Full 1st sat at 20:00
  Run = Differential 2nd-5th sat at 20:00
  Run = Incremental sun-fri at 22:00
}

# This schedule does the catalog. It starts after the Daily-Backup.
Schedule {
  Name = After-Daily-Backup
  Run = Full sun-sat at 22:30
}




Re: [Bacula-users] Schedule fullbackup every 4 weeks on saturday

2007-08-07 Thread Paul Muster
Benjamin E. Zeller wrote:

> this is rather urgent, can't figure it out via the docs.

Sure you could. ;-)

> How can I schedule a fullbackup, that runs every 4 weeks on saturday?
> 
> Schedule {
>   Name = "red"
>   Run = Level=Incremental mon-sat at 22:00
>   Run = Level=Full sun at 22:00
> }
> 
> 
> The 2nd Run= should be the schedule mentioned above.

I think you do not really want every 4 weeks, but once a month. Take a 
look at the examples at 
http://bacula.org/rel-manual/Configuring_Director.html#SECTION00145
where you find:

Schedule {
   Name = "MonthlyCycle"
   Run = Level=Full Pool=Monthly 1st sun at 1:05
 ^^^
   Run = Level=Differential 2nd-5th sun at 1:05
   Run = Level=Incremental Pool=Daily mon-sat at 1:05
}


Greetings,

Paul



[Bacula-users] RESOLVED: volumes in other magazines always marked as "ERROR" state when using vchanger

2007-08-07 Thread Mike Follwerk - T²BF
Salute,

thought this might be interesting to you, Carsten, and maybe others who
use vchanger from the Bacula Removable Disk Howto.

The issue described in my previous mails has been resolved. Granted, I
am an idiot.

The Howto states the following device section in the storage daemon
example configuration:

Device {
  Name = 
  DriveIndex = 
  Autochanger = yes;
  DeviceType = File
  MediaType = File
  ArchiveDevice = 
  RemovableMedia = no;
  RandomAccess = yes;
}


Of course, it must say "RemovableMedia = yes;" instead; otherwise Bacula
cannot know that the magazine is removable. Doh.
Works like a charm now.
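For clarity, the corrected section, with only the changed line marked (the blank values are placeholders, as in the Howto):

```
Device {
  Name = 
  DriveIndex = 
  Autochanger = yes;
  DeviceType = File
  MediaType = File
  ArchiveDevice = 
  RemovableMedia = yes;   # <- was "no" in the Howto example
  RandomAccess = yes;
}
```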

Best Regards
Mike Follwerk

-- 

T²BF IT Services GbR
Daniel Blömer, Mike Follwerk, Marcus Teske
Marie-Curie-Str. 1
D-53359 Rheinbach

Umsatzsteuer-Identifikationsnummer
gem. § 27 a Umsatzsteuergesetz:DE 238268154

Tel. +492226 / 87 21 40
Fax: +492226 / 87 21 49
http://www.tbf-it.de/



Re: [Bacula-users] bacula error - poller_output.MYI

2007-08-07 Thread Monstad



Your disk space suggestion was right on the money - thanks! 

Turns out the bacula directory on this server is sym-linked to another on a
partition that was full (I'm not 100% familiar with this server, having
recently inherited it).

The old .db file is working as normal after clearing some space. 

Many thanks,
Kris




Arno Lehmann wrote:
> 
> Hi,
> 
> 07.08.2007 11:43, Monstad wrote:
>> 
>> Hi there,
>> 
>> For the past day bacula has been unresponsive eg: not mounting tapes, not
>> purging records etc.
>> 
>> Today I got this mail: 
>> 
>> 
>> 
>> 
>> *cacti.poller_output
>> error: Can't find file: 'poller_output.MYI' (errno: 2)
>> cacti.poller_time
>> warning  : Table is marked as crashed
>> error: Found key at page 1024 that points to record outside datafile
>> error: Corrupt
>> syslog.logs
>> warning  : Table is marked as crashed
>> error: Size of indexfile is: 5218078720  Should be: 5218079744
>> error: Corrupt
>> 
>>  Improperly closed tables are also reported if clients are accessing
>>  the tables *now*. A list of current connections is below.
>> 
>> +----+------------------+-----------+----+---------+------+-------+------------------+
>> | Id | User             | Host      | db | Command | Time | State | Info             |
>> +----+------------------+-----------+----+---------+------+-------+------------------+
>> | 52 | debian-sys-maint | localhost |    | Query   | 0    |       | show processlist |
>> +----+------------------+-----------+----+---------+------+-------+------------------+
>> Uptime: 4  Threads: 1  Questions: 379  Slow queries: 0  Opens: 255  Flush
>> tables: 1  Open tables: 64  Queries per second avg: 94.750*
> 
> Your MySQL is databases are broken. MySQL does write a log file where 
> it reports errors, and the system log might also have valuable 
> information. Shut down MySQL and run the related tools to check and 
> repair the databases. There is information in the MySQL manual. One of 
> the more common problems, in my experience, is the infamous "disk 
> full" one, by the way :-)
> 
> As the problem is not directly related to Bacula, I can only recommend 
> to ask for more detailed help in the MySQL-related mailing lists, 
> forums, or newsgroups.
> 
> Once you've got the database running again and checked the 
> Bacula-related ones with dbcheck, I'm sure we can help you restoring 
> your data or getting Bacula running again.
> 
> Arno
> 
> 
>> Can anyone suggest how to repair this? I'm afraid I'm relatively new to
>> bacula (and I apologise if this is a documented problem).
>> 
>> I would be extremely grateful for any advice,
>> 
>> Kris
>> 
>> 
> 
> -- 
> Arno Lehmann
> IT-Service Lehmann
> www.its-lehmann.de
> 
> 
> 





[Bacula-users] Schedule fullbackup every 4 weeks on saturday

2007-08-07 Thread Benjamin E. Zeller
Hi, 

this is rather urgent, can't figure it out via the docs.

How can I schedule a fullbackup, that runs every 4 weeks on saturday?

Schedule {
  Name = "red"
  Run = Level=Incremental mon-sat at 22:00
  Run = Level=Full sun at 22:00
}


The 2nd Run= should be the schedule mentioned above.

Greetings,

Benni
-- 
Benjamin E. Zeller
Ing.-Büro Hohmann
Bahnhofstr. 34
D-82515 Wolfratshausen

Tel.:  +49 (0)8171 347 88 12
Mobil: +49 (0)160 99 11 55 23
Fax:   +49 (0)8171 910 778
mailto: [EMAIL PROTECTED]

www.ibh-wor.de




Re: [Bacula-users] Schedule fullbackup every 4 weeks on saturday

2007-08-07 Thread Falk Sauer
Hi Benjamin,

On Tuesday 07 August 2007 writes Benjamin E. Zeller:

> this is rather urgent, can't figure it out via the docs.

urgent is a really bad word on any mailing list ...

> How can I schedule a fullbackup, that runs every 4 weeks on saturday?
>
> Schedule {
>   Name = "red"
>   Run = Level=Incremental mon-sat at 22:00
>   Run = Level=Full sun at 22:00
> }
>
>
> The 2nd Run= should be the schedule mentioned above.

Have a look at Message-Id: <[EMAIL PROTECTED]> in this list,
or read the manual of the newest beta releases to see whether it has
been implemented in any version after the last release.

regards
   Falk




[Bacula-users] Error while using Rescue CD

2007-08-07 Thread Mostafa Itani
Dear All,

 

I was able to compile the rescue CD and was able to burn it.

When booting from this CD everything works fine and I am able to bring the
network up, but when using the partition script I get errors from fdisk and
am not able to partition my hard disk.

The error I am getting is: "I don't know how to handle files with mode 81a4".

 

I am running Linux EL5 and I have a hardware RAID controller. I am not able
to format, partition, or mount my hard disks, so I cannot continue my
restore.

I have read through the code of the partition function, but did not
find anything obvious to edit.

 

Any suggestions?

 

Best regards,

Mostafa Itani

System Administrator

American University of Beirut
College Hall B2

P.O. Box 11-0236 Beirut, Lebanon

Tel:  +961 1 35 - Ext: 3518

Cell: +961 3 811972

Web: www.aub.edu.lb

 

 



[Bacula-users] wan backup

2007-08-07 Thread Alexandre Chapellon
Hi

I want to use bacula to back up data spread all over the internet...
This works great when I can manage both sides of the job (client and
server), but I run into problems with firewalls at some client sites
(because it is normally the director's job to contact the fd, which is
behind a firewall).
I read a while ago that launching backups from the fd/client side was on
Bacula's roadmap!
Is it already an available functionality?
Can I use it to make regular backups (automatic scheduled backups)?
Can I use it to simply make a one-time backup?
What would the restrictions be?
And, overall, is it Bacula's job to do such a thing?


Re: [Bacula-users] wan backup

2007-08-07 Thread Dan Langille
On 7 Aug 2007 at 14:08, Alexandre Chapellon wrote:

> Hy
> 
> I want to use bacula to make backup of data spread all over the internet...
> This works great when I can manage the two sides of the jobs (client and
> server). But I encounter problems involving firewalls with some clients
> (because it's normally the director's job to contact the fd, which is behind
> a firewall).
> I read few times ago that launching backup with the fd-client was on the
> roadmap of bacula!
> Is it already an available functionality?

No

> Can I use it to make regular backup (automatic scheduled backup)?
> Can I use to simply make one time backup?
> What would be the restriction?
> And overall is it bacula's job to do such thing?

I think all the other questions are moot given the answer above.

What you need to do is punch holes through the firewall for your 
Director.
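
To make the connection directions concrete, here is a sketch of what the
client-side firewall has to permit, using Bacula's default ports (the resource
name and address below are hypothetical):

```
# bacula-dir.conf on the server side. The Director dials out to the FD:
#   Director -> Client   TCP 9102  (FDPort below)
# and during the job the FD connects to the Storage Daemon:
#   Client   -> Storage  TCP 9103  (SDPort in the Storage resource)
Client {
  Name = remote-fd                # hypothetical name
  Address = client.example.com    # hypothetical address
  FDPort = 9102                   # must be reachable through the client firewall
  Password = "secret"
  (...)
}
```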

-- 
Dan Langille - http://www.langille.org/
Available for hire: http://www.freebsddiary.org/dan_langille.php





Re: [Bacula-users] wan backup

2007-08-07 Thread Dan Langille
On 7 Aug 2007 at 14:22, Alexandre Chapellon wrote:

> 2007/8/7, Dan Langille <[EMAIL PROTECTED]>:
> >
> > On 7 Aug 2007 at 14:08, Alexandre Chapellon wrote:
> >
> > > Hy
> > >
> > > I want to use bacula to make backup of data spread all over the
> > internet...
> > > This works great when I can manage the two sides of the jobs (client and
> > > server). But I encounter problems involving firewalls with some clients
> > > (because it's normally the director's job to contact the fd, which is
> > behind
> > > a firewall).
> > > I read few times ago that launching backup with the fd-client was on the
> > > roadmap of bacula!
> > > Is it already an available functionality?
> >
> > No
> 
> 
> 
> Is it still on the roadmap?
> If yes, does anyone know when it will be available? (May I help, even though
> I don't know anything about C or C++? :p)

It is Item 3 listed here: http://www.bacula.org/?page=projects

AFAIK, it is not being worked on at the moment.

> 
> > Can I use it to make regular backup (automatic scheduled backup)?
> > > Can I use to simply make one time backup?
> > > What would be the restriction?
> > > And overall is it bacula's job to do such thing?
> >
> > I think all the other questions are moot given the answer above.
> >
> > What you need to do is punch holes through the firewall for your
> > Director.
> 
> 
> I can't; I don't manage the firewall... I thought about setting up a VPN, but
> it's painful for end users, and it means the VPN connection has to be up when
> the Director tries to connect!

Well, sounds like you're out of options.

-- 
Dan Langille - http://www.langille.org/
Available for hire: http://www.freebsddiary.org/dan_langille.php





[Bacula-users] Concurrent restores - strange behaviour

2007-08-07 Thread Ruben Lopez
Hi,

Before submitting a bug, I want to discuss here on the list, to be sure 
that this is a bug, and not a misunderstanding of how bacula works.
I have Bacula (2.0.1) configured to do concurrent jobs, and it works 
very well. The only thing that isn't perfect is the restore job. I have 
a default restore job with a random pool, random storage, etc. The 
problem is that when Bacula is about to restore something, it uses the 
concurrency configuration of this random storage (which in my case was a 
storage with a limit of 1 concurrent job). Shouldn't Bacula use the 
concurrency configuration of the storage that it will actually use for 
the restore?

I have a tape drive with a configuration of 1 concurrent job, and a hard 
disk with a configuration of 4 concurrent jobs. Should I configure 
bacula with two restore jobs to be able to maintain this concurrency 
policy also when restoring?

Thanks in advance.




[Bacula-users] Jobs Simultaneous

2007-08-07 Thread Marcos de Jesus Faria
Good morning

I'm new to the Bacula list, and I have a server running FreeBSD+Bacula; it
works very well.

My question is:

My server doesn't do simultaneous backups to the same disk. For example:

The Bacula SD has jobs for two servers (server1 and server2) at the same
hour. In this case my SD waits for server1 to finish
before server2 starts.

How can I make them run at the same time?

Thanks a lot.


Sds, 
Marcos de Jesus Faria 
Tecnologia da Informação 
TEL.11-6482-8200 R:300 
[EMAIL PROTECTED] 
www.pompom.com.br





Re: [Bacula-users] Jobs Simultaneous

2007-08-07 Thread Junior Cunha
> Bacula Server SD has a job for 2 more servers (server1 and server2) on
> the same hour. In this case my SD wait for server1 end
> service to start server2 start.
>
> How can I do it work together on the same time ?
>
> Thanks a lot.
Marcos, are you using the same "device" for these jobs? If you create a 
device for each job, you can perform simultaneous tasks on the same disk. 
Of course, don't forget to check the "Maximum Concurrent Jobs" directive in 
the DIR, FD and SD.

[]s

Junior Cunha



Re: [Bacula-users] Fwd: bacula-fd

2007-08-07 Thread Peter Buschman


Hi John / Tanveer:

You need to add /usr/ccs/bin to your PATH.  Even if you have gcc 
installed elsewhere, it is best to use Sun's native linking tools.


--PLB

At 10:47 7.8.2007, John Drescher wrote:



-- Forwarded message --
From: John Drescher <[EMAIL PROTECTED]>
Date: Aug 7, 2007 4:46 AM
Subject: Re: [Bacula-users] bacula-fd
To: tanveer haider <[EMAIL PROTECTED]>



On 8/7/07, tanveer haider <[EMAIL PROTECTED]> wrote:
I am receiving error while make, the file is attached that contain 
the make result.

thanks
tanveer


The problem is:

make[1]: ar: Command not found

Although I am no Solaris expert, I believe
http://www.mail-archive.com/[EMAIL PROTECTED]/msg09581.html
may help.


John





--
John M. Drescher


Re: [Bacula-users] Getting Concurrent Jobs Running

2007-08-07 Thread Attila Fülöp
Andreas Kopecki wrote:
> Hi Arno!
> 
> On Tuesday 07 August 2007, Arno Lehmann wrote:
>> 07.08.2007 11:54,, Andreas Kopecki wrote::
>>> On Tuesday 07 August 2007, John Drescher wrote:
 On 8/7/07, Andreas Kopecki <[EMAIL PROTECTED]> wrote:
> I tried to get concurrent jobs running, but somehow I couldn't get it
> going,
> although I double checked every concurrency option available.
 Do you have the Maximum Concurrent Jobs in at least 3 places in the
 bacula-dir.conf? It goes in the main bacula-dir.conf settings as well as
 the Storage and each client in that bacula-dir.conf file as well as the
 bacula-fd.conf on the clients and bacula-sd.conf too.
>>> The Maximum Concurrent Jobs in the sd and the client conf is set to 20...
>>> so there shouldn't be a limitation there, also. So, do you have any other
>>> clues?
>> In the DIR configuration, there is not only the overall concurrency
>> limit. In the storage resource there you have to set the "Maximum
>> Concurrent Jobs", too. So you need at least
>> DIR configuration, global setting
>> DIR configuration, storage resource
> 
> I set everything in the configuration files, and the director seems to have 
> them picked up (show xxx) as can be seen from my first mail:
> 
> Storage: name=GRAU MaxJobs=5
> Director: name=hadrian-dir MaxJobs=5
> client: name=visper-fd MaxJobs=20
> 
> And several jobs derived from a single JobDef
> Job: name=VISPER-XXX.Backup-Full JobType=66 level=Full MaxJobs=4 Spool=1
> 
>> SD configuration, global setting
> 
> is set to 20, dir and sd restarted
> 
>> Optionally and depending on your needs: Client global setting,
> 
> is set to 20 at the server and client side, fd restarted
> 
>> cliant-specific setting in DIR, job-specific setting.
>> Note that there is *no* setting for the storage device specific
>> setting in the SD.
> 
> Is it possible I hit a bug here? Could something else have an impact on 
> concurrency?

Yes, the  "Priority = xx" settings in the job resources must be the same for
all jobs to run concurrently. Otherwise they will wait for higher priority
jobs.
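
As a sketch of where the pieces go (resource names are taken from this thread;
treat the exact values as assumptions, not a complete configuration):

```
# bacula-dir.conf
Director {
  Name = hadrian-dir
  Maximum Concurrent Jobs = 20
  (...)
}
Storage {
  Name = GRAU
  Maximum Concurrent Jobs = 5
  (...)
}
Client {
  Name = visper-fd
  Maximum Concurrent Jobs = 20
  (...)
}

# bacula-sd.conf (global Storage resource)
Storage {
  Name = hadrian-sd
  Maximum Concurrent Jobs = 20
  (...)
}

# bacula-fd.conf on the client
FileDaemon {
  Name = visper-fd
  Maximum Concurrent Jobs = 20
  (...)
}

# And every job that should run in parallel needs the same priority:
Job {
  Name = job-a          # hypothetical
  Priority = 10         # must match across concurrent jobs
  (...)
}
```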

> Best regards,
>Andreas
> 
> 




Re: [Bacula-users] Schedule fullbackup every 4 weeks on saturday

2007-08-07 Thread Arno Lehmann
Hi,

07.08.2007 13:27,, Falk Sauer wrote::
> Hi Benjamin,
> 
> On Tuesday 07 August 2007 writes Benjamin E. Zeller:
> 
>> this is rather urgent, can't figure it out via the docs.
> 
> urgent is a really bad word on any mailing lists ...

Indeed :-)

But anyway...

>> How can I schedule a fullbackup, that runs every 4 weeks on saturday?
>>
>> Schedule {
>>   Name = "red"
>>   Run = Level=Incremental mon-sat at 22:00
>>   Run = Level=Full sun at 22:00
>> }
>>
>>
>> The 2nd Run= should be the schedule mentioned above.
> 
> have a look on Message-Id: <[EMAIL PROTECTED]> in this List.

Good advice, but there are at least two options for cases where you 
*really* need the four-weekly schedule. Not that I believe you do; 
usually, you do *not* need such a schedule.

Option one:
Run= ... w01, w05, w09, w13, ...
This is a nightmare to manage for many clients when the year changes, 
though...

Option two:
Use a RunBeforeJob script that is fed the client name, job name, 
and backup level, does some math to decide whether it's a "4th week", and 
decides whether the backup should run. Return an exit code of 1 to 
indicate the job should not run. Note that you probably don't want this 
with "Rerun Failed Levels" set to yes ;-)

Option 2.5:
Make the above decisions in a Python event triggered before job start 
and set the backup level from the event handler. I always wanted to 
try this, but never got around to actually working on it...

Arno


> or read the manual from the newest beta releases if its realized in any 
> version after the last release. 
> 
> regards
>Falk
> 
> 

-- 
Arno Lehmann
IT-Service Lehmann
www.its-lehmann.de



Re: [Bacula-users] Getting Concurrent Jobs Running

2007-08-07 Thread Arno Lehmann
Hi,

07.08.2007 13:17,, Andreas Kopecki wrote::
...
 Do you have the Maximum Concurrent Jobs in at least 3 places in the
 bacula-dir.conf? It goes in the main bacula-dir.conf settings as well as
 the Storage and each client in that bacula-dir.conf file as well as the
 bacula-fd.conf on the clients and bacula-sd.conf too.
>>> The Maximum Concurrent Jobs in the sd and the client conf is set to 20...
>>> so there shouldn't be a limitation there, also. So, do you have any other
>>> clues?
>> In the DIR configuration, there is not only the overall concurrency
>> limit. In the storage resource there you have to set the "Maximum
>> Concurrent Jobs", too. So you need at least
>> DIR configuration, global setting
>> DIR configuration, storage resource
> 
> I set everything in the configuration files, and the director seems to have 
> them picked up (show xxx) as can be seen from my first mail:
> 
> Storage: name=GRAU MaxJobs=5
> Director: name=hadrian-dir MaxJobs=5
> client: name=visper-fd MaxJobs=20
> 
> And several jobs derived from a single JobDef
> Job: name=VISPER-XXX.Backup-Full JobType=66 level=Full MaxJobs=4 Spool=1
> 
>> SD configuration, global setting
> 
> is set to 20, dir and sd restarted
> 
>> Optionally and depending on your needs: Client global setting,
> 
> is set to 20 at the server and client side, fd restarted
> 
>> cliant-specific setting in DIR, job-specific setting.
>> Note that there is *no* setting for the storage device specific
>> setting in the SD.
> 
> Is it possible I hit a bug here? Could something else have an impact on 
> concurrency?

I doubt there is a bug, at least similar setups run quite well in many 
locations.

Perhaps you have different priorities.

Anyway, we will need more detailed information for further help... 
either a complete job output from 'show job=...' for two jobs that 
should run in parallel, but don't, or the complete job setup from the 
configuration file.

Arno

> Best regards,
>Andreas
> 
> 

-- 
Arno Lehmann
IT-Service Lehmann
www.its-lehmann.de



Re: [Bacula-users] Long term backup with bacula

2007-08-07 Thread Arno Lehmann
Hi,

07.08.2007 13:27,, Radek Hladik wrote::
> Hi,
> 
> Arno Lehmann napsal(a):
>> Hi,
>>
>> 07.08.2007 11:39,, Mike Follwerk - T²BF wrote::
>>> Radek Hladik schrieb:
 Till now I came up with this ideas:
 * Backup catalog and bootstrap files with the data
 * Disable jobs/files/volumes autopruning
 * maybe modify some livecd to contain current version of bacula or at
 least bscan (do not know, maybe such a CD exists)
 * Create SQL query to list which jobs is on which tape and print it on
 the paper with the tapes

 Do you think this is enough or am I overseeing something?
>> I didn't actually read your original mail, but going from the subject, 
>> this sounds good already. Having the job lists on paper is definitely 
>> helpful sometimes, but I'd mainly make sure to know where the current 
>> catalog dump is stored, though. The catalog has all the information 
>> you will need, readily accessible by Bacula. If you only have your 
>> printout, you'll probably need some time for bscanning, but 
>> bextracting the catalog dump, loading it and starting Bacula itself 
>> might be faster.
> 
> I'm planning to have catalog backup on last tape or on some other media
> (flashdisk, cd?)

If your complete catalog dump fits onto one of these (perhaps 
compressed), that would make recreating the catalog simpler.

I typically recommend sending the catalog dump to another file server 
on the network, which greatly reduces the risk of losing all catalog 
dumps, and is really easy to do.

> with the backups, so only bscan of last tape would be
> needed at worst.

Ok, you seem to know the major points already :-)

> But what I am a little affraid of is: I restore catalog
> and bacula says: "Wow, such an old catalog, you forget to disable XY
> type pruning, I am going to delete old files from catalog" :-)

Obviously, that can happen, and has happened... There are two ways to 
avoid that problem. Either make sure no pruning takes place until your 
restores are completed (this is more or less impossible, though), or, 
and that is the better approach, make sure the necessary data will 
never be pruned.

Thus my recommendations to set up extra job and client definitions for 
the archival backups. These would include longer retention times for 
jobs and files, but use the regular client addresses and file sets.
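
A sketch of what such an archival setup could look like (the names and
retention periods below are illustrative assumptions, not from this thread):

```
# bacula-dir.conf -- a second Client entry for the same machine,
# with long retention, plus a dedicated pool that is never auto-pruned:
Client {
  Name = fileserver-archive-fd       # hypothetical
  Address = fileserver.example.com   # same host as the normal client entry
  File Retention = 3 years
  Job Retention = 3 years
  AutoPrune = no
  (...)
}

Pool {
  Name = ArchivePool                 # hypothetical
  Pool Type = Backup
  Volume Retention = 3 years
  AutoPrune = no
  Recycle = no                       # never overwrite archive volumes automatically
  (...)
}

Job {
  Name = fileserver-archive          # hypothetical
  Client = fileserver-archive-fd
  Pool = ArchivePool
  (...)
}
```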

>>> since no one has answered this yet, I feel free to voice a blind guess:
>>> doesn't bacula have some kind of "archive" flag for volumes for
>>> precisely this reason? I seem to remember something like that from the
>>> documentation.
>> Well, the something includes a "not implemented" remark, unfortunately :-)
> 
> I know and thus I didn't mention it in my previous email. And it still
> is not what I'm looking for as it should delete files from catlaog.
> 
>> But you don't really need that, anyway: Just make sure your long term 
>> volumes are not automatically pruned and you're already half the way 
>> where you want to go... the other half of the way is usually deciding 
>> if you need complete file lists for your archives (then you probably 
>> want to set up separate job and client entries for these, so you can 
>> have different retention times for normal production backups) or if 
>> the fact that the data exists and on which volumes it's stored is 
>> enough (Then just make sure the jobs are not pruned from the catalog).
>>
> 
>> For real long-term storage, you'll have to find ways to move data to 
>> new tapes from time to time, probably keeping the catalog up to date, 
>> and so on, so that you can restore when the original tapes can't any 
>> longer be read. Migration might be helpful for this, but that's a 
>> different story...
> 
> I am thinking of year or two, maximum three years. And after this period
> the backups will be done again completely. There will meybe be the need
> to recycle really old backups but it should be done manually.

Tapes should be good after that time, if stored properly. And manually 
recycling them would always be possible.

Arno

>> Arno
>>
>>
>>> Regards
>>> Mike Follwerk
>>>
> Radek
> 
> 

-- 
Arno Lehmann
IT-Service Lehmann
www.its-lehmann.de


[Bacula-users] RES: Jobs Simultaneous

2007-08-07 Thread Marcos de Jesus Faria
Thanks for answer.

My FD, SD and DIR all have "Maximum Concurrent Jobs = 20". Do I have to put it 
in Server1's FD and Server2's FD as well?

My backups on disk are in /backup for all clients. Do you think that I have to 
use /backup/server1 and /backup/server2?

Thanks

> -Mensagem original-
> De: Junior Cunha [mailto:[EMAIL PROTECTED]
> Enviada em: terça-feira, 7 de agosto de 2007 09:51
> Para: Marcos de Jesus Faria
> Cc: bacula-users@lists.sourceforge.net
> Assunto: Re: [Bacula-users] Jobs Simultaneous
> 
> Bacula Server SD has a job for 2 more servers (server1 and server2) on
> the same hour. In this case my SD wait for server1 end
> > service to start server2 start.
> >
> > How can I do it work together on the same time ?
> >
> > Thanks a lot.
> Marcos, are you using the same "device" for these jobs? If you create a
> device for each job you can perform simultaneous tasks in the same disk.
> Of course, don't forget to check "Maximum Concurrent Jobs" directive in
> DIR, FD and SD.
> 
> []s
> 
> Junior Cunha




Re: [Bacula-users] Getting Concurrent Jobs Running

2007-08-07 Thread Andreas Kopecki
Hi Arno!

On Tuesday 07 August 2007, Arno Lehmann wrote:
> 07.08.2007 13:17,, Andreas Kopecki wrote::
> ...
>
>  Do you have the Maximum Concurrent Jobs in at least 3 places in the
>  bacula-dir.conf? It goes in the main bacula-dir.conf settings as well
>  as the Storage and each client in that bacula-dir.conf file as well as
>  the bacula-fd.conf on the clients and bacula-sd.conf too.
> >>>
> >>> The Maximum Concurrent Jobs in the sd and the client conf is set to
> >>> 20... so there shouldn't be a limitation there, also. So, do you have
> >>> any other clues?
> >>
> >> In the DIR configuration, there is not only the overall concurrency
> >> limit. In the storage resource there you have to set the "Maximum
> >> Concurrent Jobs", too. So you need at least
> >> DIR configuration, global setting
> >> DIR configuration, storage resource
> >
> > I set everything in the configuration files, and the director seems to
> > have them picked up (show xxx) as can be seen from my first mail:
> >
> > Storage: name=GRAU MaxJobs=5
> > Director: name=hadrian-dir MaxJobs=5
> > client: name=visper-fd MaxJobs=20
> >
> > And several jobs derived from a single JobDef
> > Job: name=VISPER-XXX.Backup-Full JobType=66 level=Full MaxJobs=4 Spool=1
> >
> >> SD configuration, global setting
> >
> > is set to 20, dir and sd restarted
> >
> >> Optionally and depending on your needs: Client global setting,
> >
> > is set to 20 at the server and client side, fd restarted
> >
> >> cliant-specific setting in DIR, job-specific setting.
> >> Note that there is *no* setting for the storage device specific
> >> setting in the SD.
> >
> > Is it possible I hit a bug here? Could something else have an impact on
> > concurrency?
>
> I doubt there is a bug, at least similar setups run quite well in many
> locations.
>
> Perhaps you have different priorities.
>
> Anyway, we will need more detailed information for further help...
> either a complete job output from 'show job=...' for two jobs that
> should run in parallel, but don't, or the complete job setup from the
> configuration file.

Here is the show job output for two jobs. Both share the same pool. I have 
jobs that use different pools, also, but that doesn't change anything.

Job: name=VISPER-RAID-home6.Backup-Full JobType=66 level=Full Priority=10 
Enabled=1
 MaxJobs=4 Resched=0 Times=0 Interval=1,800 Spool=1 WritePartAfterJob=1
  --> Client: name=visper-fd address=visper.hlrs.de FDport=9102 MaxJobs=20
  JobRetention=1 month 19 days  FileRetention=2 months  AutoPrune=1
  --> Catalog: name=HLRS.Catalog address=*None* DBport=0 db_name=bacula
  db_user=bacula MutliDBConn=0
  --> FileSet: name=VISPER-RAID-home6.Set
  O M
  N
  I /mnt/raid/home/architekten
  I /mnt/raid/home/hpcdiche
  I /mnt/raid/home/hpczink
  I /mnt/raid/home/rusma
  N
  --> Schedule: name=WeeklyFull
  --> Run Level=Incremental
  hour=21 
  mday=0 1 2 3 4 5 6 7 8 9 10 11 12 13 14 15 16 17 18 19 20 21 22 23 24 25 
26 27 28 29 30 
  month=0 1 2 3 4 5 6 7 8 9 10 11 
  wday=1 2 3 4 
  wom=0 1 2 3 4 
  woy=0 1 2 3 4 5 6 7 8 9 10 11 12 13 14 15 16 17 18 19 20 21 22 23 24 25 
26 27 28 29 30 31 32 33 34 35 36 37 38 39 40 41 42 43 44 45 46 47 48 49 50 51 
52 53 
  mins=55
  --> Run Level=Full
  hour=21 
  mday=0 1 2 3 4 5 6 7 8 9 10 11 12 13 14 15 16 17 18 19 20 21 22 23 24 25 
26 27 28 29 30 
  month=0 1 2 3 4 5 6 7 8 9 10 11 
  wday=5 
  wom=0 1 2 3 4 
  woy=0 1 2 3 4 5 6 7 8 9 10 11 12 13 14 15 16 17 18 19 20 21 22 23 24 25 
26 27 28 29 30 31 32 33 34 35 36 37 38 39 40 41 42 43 44 45 46 47 48 49 50 51 
52 53 
  mins=55
  --> WriteBootstrap=/var/bacula/VISPER-RAID-home6.bsr
  --> Storage: name=GRAU address=hadrian.hlrs.de SDport=9103 MaxJobs=5
  DeviceName=GRAU MediaType=SDZ-130 StorageId=1
  --> Pool: name=GRAU-WF PoolType=Backup
  use_cat=1 use_once=0 cat_files=1
  max_vols=0 auto_prune=1 VolRetention=14 days 
  VolUse=0 secs recycle=1 LabelFormat=*None*
  CleaningPrefix=*None* LabelType=0
  RecyleOldest=0 PurgeOldest=0
  MaxVolJobs=0 MaxVolFiles=0 MaxVolBytes=0
  MigTime=0 secs MigHiBytes=0 MigLoBytes=0
  --> Messages: name=Standard
  mailcmd=/usr/sbin/bsmtp  -f "(Bacula) %r" -s "Bacula: %t %e of %c %l" %r
  opcmd=/usr/sbin/bsmtp  -f "(Bacula) %r" -s "Bacula: Intervention needed 
for %j" %r
Job: name=VISPER-RAID-home7.Backup-Full JobType=66 level=Full Priority=10 
Enabled=1
 MaxJobs=4 Resched=0 Times=0 Interval=1,800 Spool=1 WritePartAfterJob=1
  --> Client: name=visper-fd address=visper.hlrs.de FDport=9102 MaxJobs=20
  JobRetention=1 month 19 days  FileRetention=2 months  AutoPrune=1
  --> Catalog: name=HLRS.Catalog address=*None* DBport=0 db_name=bacula
  db_user=bacula MutliDBConn=0
  --> FileSet: name=VISPER-RAID-home7.Set
  O M
  N
  I /mnt/raid/home/hpcmtt
  N
  --> Schedule: name=WeeklyFull
  --> Run Level=Incremental
  hour=21 
  mday=0

Re: [Bacula-users] RES: Jobs Simultaneous

2007-08-07 Thread Junior Cunha
Marcos,
> Thanks for answer.
>
> In my FD, SD and DIR has "Maximum Concurrent Jobs = 20 ". I have to put it in 
> Server1 FD and Server2 FD ? 
You must set this directive in every FD that will handle 
simultaneous connections at the same time. The default value is 2 
connections.


http://www.bacula.org/rel-manual/Client_Fi_daemon_Configura.html#ClientResource
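
For reference, a sketch of the FD-side setting (the daemon name is a
placeholder):

```
# bacula-fd.conf on server1 (and likewise on server2)
FileDaemon {
  Name = server1-fd                # hypothetical
  Maximum Concurrent Jobs = 20     # default is 2
  (...)
}
```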
> My backup on disk are in /backup all clients. Do you think that I have to do 
> a /backup/server1 and /backup/server2 ?
No. You can define two devices/storages pointing to the same disk. 
Below is an example:

== SD ==
Device{
  Name = device_Server1
  Archive Device = "/backup"
  (...)
}

Storage{
  Name = storage_Server1
  Device = device_Server1
  (...)
}

Device{
  Name = device_Server2
  Archive Device = "/backup"
  (...)
}

Storage{
  Name = storage_Server2
  Device = device_Server2
  (...)
}

== DIR ===

Job {
  Name = job_Server1
  Client = Server1
  Storage = storage_Server1
  (...)
}

Job {
  Name = job_Server2
  Client = Server2
  Storage = storage_Server2
  (...)
}


[]s

Junior Cunha
>
> Thanks
>
>> -Mensagem original-
>> De: Junior Cunha [mailto:[EMAIL PROTECTED]
>> Enviada em: terça-feira, 7 de agosto de 2007 09:51
>> Para: Marcos de Jesus Faria
>> Cc: bacula-users@lists.sourceforge.net
>> Assunto: Re: [Bacula-users] Jobs Simultaneous
>>
>> Bacula Server SD has a job for 2 more servers (server1 and server2) on
>> the same hour. In this case my SD wait for server1 end
>>> service to start server2 start.
>>>
>>> How can I do it work together on the same time ?
>>>
>>> Thanks a lot.
>> Marcos, are you using the same "device" for these jobs? If you create a
>> device for each job you can perform simultaneous tasks in the same disk.
>> Of course, don't forget to check "Maximum Concurrent Jobs" directive in
>> DIR, FD and SD.
>>
>> []s
>>
>> Junior Cunha



Re: [Bacula-users] Getting Concurrent Jobs Running

2007-08-07 Thread Arno Lehmann
Hi,

07.08.2007 16:11,, Andreas Kopecki wrote::
...
>> Perhaps you have different priorities.
>>
>> Anyway, we will need more detailed information for further help...
>> either a complete job output from 'show job=...' for two jobs that
>> should run in parallel, but don't, or the complete job setup from the
>> configuration file.
> 
> Here is the show job output for two jobs. Both share the same pool. I have 
> jobs that use different pools, also, but that doesn't change anything.
> 
> Job: name=VISPER-RAID-home6.Backup-Full JobType=66 level=Full Priority=10 

Job priority 10

> Enabled=1
>  MaxJobs=4 Resched=0 Times=0 Interval=1,800 Spool=1 WritePartAfterJob=1
>   --> Client: name=visper-fd address=visper.hlrs.de FDport=9102 MaxJobs=20

Client max jobs 20 (the DIR's perspective)

...
>   --> Storage: name=GRAU address=hadrian.hlrs.de SDport=9103 MaxJobs=5

Storage allows 5 simultaneous jobs

...
> Job: name=VISPER-RAID-home7.Backup-Full JobType=66 level=Full Priority=10 

Identical priorities. Good.

> Enabled=1
>  MaxJobs=4 Resched=0 Times=0 Interval=1,800 Spool=1 WritePartAfterJob=1
>   --> Client: name=visper-fd address=visper.hlrs.de FDport=9102 MaxJobs=20

The same client... the FD configuration allows more than one 
simultaneous job?

What does 'sta client=visper-fd' in bconsole report?

...
>   --> Storage: name=GRAU address=hadrian.hlrs.de SDport=9103 MaxJobs=5

Ok, 5 again, same storage device... what's the output of 'sta sd=GRAU'?


I didn't notice anything wrong, unless you forgot an FD or SD configuration 
setting, or the necessary restart after these changes.

> Those are the corresponding Jobs/JobDefs:
> JobDefs {
>   Name = "FullBackup-Fast"
>   Type = Backup
>   Level = Full
>   Storage = GRAU
>   Messages = Standard
>   Pool = GRAU-WF
>   Priority = 10
>   Maximum Concurrent Jobs = 4
>   Spool Data = yes
>   Spool Attributes = no
>   Rerun Failed Levels = yes
> }
> 
> Job {
>   Name = "VISPER-RAID-home6.Backup-Full"
>   JobDefs = "FullBackup-Fast"
>   FileSet = "VISPER-RAID-home6.Set"
>   Write Bootstrap = "/var/bacula/VISPER-RAID-home6.bsr"
>   Client = visper-fd
>   Schedule = "WeeklyFull"
> }
> 
> Job {
>   Name = "VISPER-RAID-home7.Backup-Full"
>   JobDefs = "FullBackup-Fast"
>   FileSet = "VISPER-RAID-home7.Set"
>   Write Bootstrap = "/var/bacula/VISPER-RAID-home7.bsr"
>   Client = visper-fd
>   Schedule = "WeeklyFull"
> }

Ok, the job definitions look good. I suspect it's either the SD or the FD 
stalling the jobs. The status outputs from above should have some 
details... hopefully.

Arno

> 
> Andreas
> 
> 
> -
> This SF.net email is sponsored by: Splunk Inc.
> Still grepping through log files to find problems?  Stop.
> Now Search log events and configuration files using AJAX and a browser.
> Download your FREE copy of Splunk now >>  http://get.splunk.com/
> ___
> Bacula-users mailing list
> Bacula-users@lists.sourceforge.net
> https://lists.sourceforge.net/lists/listinfo/bacula-users

-- 
Arno Lehmann
IT-Service Lehmann
www.its-lehmann.de



Re: [Bacula-users] Slowdown after OS upgrade

2007-08-07 Thread Tod Hagan
On Thu, 2007-08-02 at 20:31 -0400, John Drescher wrote:
> > Below I've included some output from tapeinfo as the scsi tape driver is
> > the most likely source of the slowdown. I've tried 'mt -f /dev/nst0
> > stsetoptions buffer-writes async-writes' with no effect.
> >
> > I'd really appreciate some ideas on increasing the backup speeds to the
> > levels we used to get.
> >
> I doubt that the tape drive is the cause. Did you make any database changes?

Sort of. A more recent postgresql package was installed from scratch,
and the database restored via 'psql bacula < bacula.sql'.

Does the above method of restoring a postgresql database create indexes?

Thanks.

Tod

-- 
Tod Hagan
Information Technologist
AIRMAP/Climate Change Research Center
Institute for the Study of Earth, Oceans, and Space
University of New Hampshire
Durham, NH 03824
Phone: 603-862-3116





Re: [Bacula-users] Slowdown after OS upgrade

2007-08-07 Thread Dan Langille
On 7 Aug 2007 at 11:57, Tod Hagan wrote:

> On Thu, 2007-08-02 at 20:31 -0400, John Drescher wrote:
> > > Below I've included some output from tapeinfo as the scsi tape driver is
> > > the most likely source of the slowdown. I've tried 'mt -f /dev/nst0
> > > stsetoptions buffer-writes async-writes' with no effect.
> > >
> > > I'd really appreciate some ideas on increasing the backup speeds to the
> > > levels we used to get.
> > >
> > I doubt that the tape drive is the cause. Did you make any database changes?
> 
> Sort of. A more recent postgresql package was installed from scratch,
> and the database restored via 'psql bacula < bacula.sql'.
> 
> Does the above method of restoring a postgresql database create indexes?

It will recreate any indexes that were present in the original 
database.  Running "VACUUM ANALYZE" from within psql will also help by 
refreshing the table statistics that are used during query planning.
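A quick way to check both points from the shell, assuming the catalog 
database is named "bacula" (the default; adjust if yours differs):

```
$ psql bacula
-- show the "file" table's definition, including any indexes on it:
bacula=# \d file
-- refresh the planner statistics for all tables:
bacula=# VACUUM ANALYZE;
```

If \d shows no indexes, they can be recreated from the index definitions 
in Bacula's make_postgresql_tables script.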

-- 
Dan Langille - http://www.langille.org/
Available for hire: http://www.freebsddiary.org/dan_langille.php





Re: [Bacula-users] wan backup

2007-08-07 Thread Joshua J. Kugler
On Tuesday 07 August 2007 04:08, Alexandre Chapellon wrote:
> Hi
>
> I want to use Bacula to back up data spread all over the internet...
> This works great when I can manage both sides of the job (client and
> server), but I run into problems involving firewalls with some clients
> (because it is normally the Director's job to contact the FD, which is
> behind a firewall).
> I read a while ago that having the FD client initiate backups was on
> Bacula's roadmap.
> Is that functionality already available?
> Can I use it for regular (automatically scheduled) backups?
> Can I use it for one-time backups?
> What would be the restrictions?
> And overall, is this the kind of thing Bacula should be doing at all?

Take a look at the SSH tunnel scripts in the docs.  You can initiate an SSH 
tunnel before the job starts and then connect over that tunnel to the 
client, or have the client initiate a tunnel back to the storage daemon and 
then connect over that tunnel.  I've done it; it works great.  And my 
situation was similar to yours: no control over the firewall.
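A minimal sketch of the client-initiated (reverse) tunnel, since that is 
the case where you don't control the firewall. Host names and the account 
are illustrative; 9102 is the FD's default port:

```
# Run on the client, behind the firewall: expose the local FD on the
# Director host's localhost:9102 via a reverse SSH tunnel.
ssh -f -N -R 9102:localhost:9102 backup@director.example.com
```

With that in place, the Client resource on the Director can use 
Address = localhost. Note that the FD also opens a data connection back to 
the SD (port 9103 by default), so that direction may need a forward (-L) 
tunnel as well.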

j

-- 
Joshua Kugler   
Lead System Admin -- Senior Programmer
http://www.eeinternet.com
PGP Key: http://pgp.mit.edu/  ID 0xDB26D7CE
PO Box 80086 -- Fairbanks, AK 99708 -- Ph: 907-456-5581 Fax: 907-456-3111



[Bacula-users] Dealing with failed/missed jobs

2007-08-07 Thread Charles Sprickman
Hi all,

I'm having some trouble figuring out how to "catch up" when someone has 
forgotten to put a tape in or if I manually schedule a job that requires a 
different pool than what is in the tape.

I think a real-world example is in order.  My fulls are on the first 
weekend of the month, diffs each subsequent weekend, then incrementals on 
weekdays.  No one is in the office sat/sun to change tapes.

This past weekend I mistakenly asked for a tape from the weekly pool to be 
inserted.  Unfortunately, I had forgotten this was a new month.  So on 
Sunday afternoon when bacula was going to do a run, it wanted to do a Full 
and it wanted a tape from the Monthly pool.  No one was around, so the 
jobs did not start.

Monday I asked for someone to put in the next Monthly tape, but then that 
night bacula wanted a Daily.

This is where I get confused.  If a job fails simply due to the wrong 
tape, how do I make bacula re-run the job and run it to the appropriate 
pool?  If I let this slide, is bacula simply going to wait until the first 
weekend in September to do a full run?  I'd really like to get one in 
ASAP.

This sort of mishandling of tapes will likely not be a one-time occurrence, 
plus there are issues of people going on vacation and the like, where there 
will be no operator on site to swap tapes.  How do other people deal with 
this?  What happens to these failed jobs in the catalog?  Should they be 
deleted?  Is there a way to reschedule them all?

Another thing that I have not figured out is how to see what bacula thinks 
its next run will be (what hosts, what level, what pool).  I'd like to 
know this for troubleshooting purposes as well as to try and script 
something to give people advance warning about what tape should be in 
the drive each night.

And lastly, any plans to have the spool act like it does in Amanda? 
Meaning that if you have the space and you don't have the right tape in, 
bacula will spool all the jobs until the right tape ends up in the drive. 
Or perhaps it is possible in some way that I'm not seeing.

Any help is appreciated, we're very happy so far with bacula but for this 
little issue of our sneakernet changer not being 100% reliable. :)

Thanks,

Charles



Re: [Bacula-users] Dealing with failed/missed jobs

2007-08-07 Thread Robert LeBlanc



On 8/7/07 5:32 PM, "Charles Sprickman" <[EMAIL PROTECTED]> wrote:

> Hi all,
> 
> I'm having some trouble figuring out how to "catch up" when someone has
> forgotten to put a tape in or if I manually schedule a job that requires a
> different pool than what is in the tape.
> 
> I think a real-world example is in order.  My fulls are on the first
> weekend of the month, diffs each subsequent weekend, then incrementals on
> weekdays.  No one is in the office sat/sun to change tapes.
> 
> This past weekend I mistakenly asked for a tape from the weekly pool to be
> inserted.  Unfortunately, I had forgotten this was a new month.  So on
> Sunday afternoon when bacula was going to do a run, it wanted to do a Full
> and it wanted a tape from the Monthly pool.  No one was around, so the
> jobs did not start.
> 
> Monday I asked for someone to put in the next Monthly tape, but then that
> night bacula wanted a Daily.
> 
> This is where I get confused.  If a job fails simply due to the wrong
> tape, how do I make bacula re-run the job and run it to the appropriate
> pool?  If I let this slide, is bacula simply going to wait until the first
> weekend in September to do a full run?  I'd really like to get one in
> ASAP.

I've just run the jobs manually, modifying each job to the right level
and the right pool. Kind of a pain sometimes when a lot of jobs fail (we
have almost 30 clients). I would be interested in a batch restart too.
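In bconsole this can be done non-interactively by overriding the level and 
pool on the run command; a sketch with hypothetical job and pool names:

```
* run job=client1-backup level=Full pool=Monthly yes
```

The trailing "yes" skips the confirmation prompt, which makes it easy to 
script a loop over many clients.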

> This sort of mishandling of tapes will likely not be a one-time occurence,
> plus there's issues of people going on vacation and similar where there
> will be no operator on site to swap tapes.  How do other people deal with
> this?  What happens to these failed jobs in the catalog?  Should they be
> deleted?  Is there a way to reschedule them all?
> 
> Another thing that I have not figured out is how to see what bacula thinks
> it's next run will be (what hosts, what level, what pool).  I'd like to
> know this for troubleshooting purposes as well as to try and script
> something to give people an advance warning about what tape should be in
> the drive each night.

You can do a status on the Director, and it will tell you all that info in
the top portion of the screen except the pool (it does tell you the tape it
thinks it will use, which can change if the tape fills up).
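A trimmed example of what the Scheduled Jobs section of 'status dir' looks 
like (the job and volume names here are made up):

```
* status dir
...
Scheduled Jobs:
Level        Type    Pri  Scheduled        Name            Volume
===================================================================
Incremental  Backup   10  07-Aug-07 23:05  client1-backup  Daily-0007
Full         Backup   10  07-Aug-07 23:05  client2-backup  Monthly-0002
```

The Volume column shows which tape the Director expects to use, which is 
the piece you'd want for an advance-warning script.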

> And lastly, any plans to have the spool act like it does in Amanda?
> Meaning that if you have the space and you don't have the right tape in,
> bacula will spool all the jobs until the right tape ends up in the drive.
> Or perhaps it is possible in some way that I'm not seeing.

That would be a pretty nifty feature. We have a changer, so it's not as
big a deal for us, but it sounds like a great addition.

> Any help is appreciated, we're very happy so far with bacula but for this
> little issue of our sneakernet changer not being 100% reliable. :)
> 
> Thanks,
> 
> Charles
> 
> 

 
Robert LeBlanc
College of Life Sciences Computer Support
Brigham Young University
[EMAIL PROTECTED]
(801)422-1882





Re: [Bacula-users] clarification: VolumeToCatalog and DiskToCatalog verify jobs

2007-08-07 Thread Troy Daniels
Hi,

>  
>> I run my VolumeToCatalog verify jobs with my backup server specified as 
>> the client to minimise Network impact for this reason.
> 
> So the Client option doesn't have to be the same as the client name
> that was used for the backup? I've never thought about this. 
>  

Neither had I at first :-)
> 
> Now I've to find out what was going wrong with the two verify jobs of
> my full backups from last weekend (mail from yesterday). 
> 
> The inc/diff verify jobs before were ok and the verify job of the inc
> backup from this night too. Is there a way to rerun the verify job of
> the full backup (sunday) after the next verify job (last night)?
> 

Unfortunately not - a verify job will only verify the most recently run 
job, and you can't specify a previous one.

Troy.
