Re: [Bacula-users] Chained copy job

2012-04-05 Thread Uwe Schuerkamp
On Mon, Apr 02, 2012 at 08:20:17PM +0200, Dennis Hoppe wrote:
> Hello,
> 
> is it possible to use chained copy jobs? For example I would like to
> copy my full backups from local disk to USB disk and after that to a
> NAS storage.
> 

Hi Dennis, 

I see your definition below is lacking "SQLQuery" as the
"Selection Type"; might this be part of the problem? 

All the best, 

Uwe 

> Job {
>   Name = "backup-all"
>   JobDefs = "DefaultBackup"
>   Client = backup-fd
>   FileSet = "backup-all"
>   Storage = backup
>   Full Backup Pool = backup-monthly
>   Incremental Backup Pool = backup-daily
>   Differential Backup Pool = backup-weekly
> }
> 
> Job {
>   Name = "backup-copy-monthly-usb"
>   JobDefs = "DefaultCopy"
>   Client = backup-fd
>   FileSet = "backup-all"
>   Schedule = "MonthlyCopy"
>   Storage = backup
>   Pool = backup-monthly
>   Selection Pattern = "
> SELECT max(jobid)
> FROM job
> WHERE name = 'backup-all'
> AND type = 'B'
> AND level = 'F'
> AND jobstatus = 'T';"
> }
> 
> Job {
>   Name = "backup-copy-monthly-nas"
>   JobDefs = "DefaultCopy"
>   Client = backup-fd
>   FileSet = "backup-all"
>   Schedule = "MonthlyCopy2"
>   Storage = backup
>   Pool = backup-monthly
>   Selection Pattern = "
> SELECT max(jobid)
> FROM job
> WHERE name = 'backup-copy-monthly-usb'
> AND type = 'c'
> AND level = 'F'
> AND jobstatus = 'T';"
> }
> 
> Pool {
>   Name = backup-monthly
>   Pool Type = Backup
>   Recycle = yes
>   RecyclePool = scratch
>   AutoPrune = yes
>   ActionOnPurge = Truncate
>   Volume Retention = 2 months
>   Volume Use Duration = 23 hours
>   LabelFormat =
> "backup-full_${Year}-${Month:p/2/0/r}-${Day:p/2/0/r}-${Hour:p/2/0/r}:${Minute:p/2/0/r}"
>   Next Pool = backup-monthly-usb
> }
> 
> Pool {
>   Name = backup-monthly-usb
>   Storage = backup-usb
>   Pool Type = Backup
>   Recycle = yes
>   RecyclePool = scratch
>   AutoPrune = yes
>   ActionOnPurge = Truncate
>   Volume Retention = 2 months
>   Volume Use Duration = 23 hours
>   LabelFormat =
> "backup-full_${Year}-${Month:p/2/0/r}-${Day:p/2/0/r}-${Hour:p/2/0/r}:${Minute:p/2/0/r}"
>   Next Pool = backup-monthly-nas
> }
> 
> Pool {
>   Name = backup-daily-nas
>   Storage = backup-nas
>   Pool Type = Backup
>   Recycle = yes
>   RecyclePool = scratch
>   AutoPrune = yes
>   ActionOnPurge = Truncate
>   Volume Retention = 7 days
>   Volume Use Duration = 23 hours
>   LabelFormat =
> "backup-incr_${Year}-${Month:p/2/0/r}-${Day:p/2/0/r}-${Hour:p/2/0/r}:${Minute:p/2/0/r}"
> }
> 
> If I run the SQL statement from "backup-copy-monthly-nas" by hand, the
> correct jobid is selected, which should get the "read storage", "write
> storage" and "next pool" from the job "backup-copy-monthly-usb".
> Unfortunately, Bacula ignores the SQL statement and takes the jobid
> from "backup-all", which ends in a duplicate copy at the storage
> "backup-usb". :(
> 
> Did I do something wrong, or is Bacula not able to use two different
> "next pools" / "storages"?
> 
> Regards, Dennis
> 





-- 
uwe.schuerk...@nionex.net fon: [+49] 5242.91 - 4740, fax:-69 72
Hauptsitz: Avenwedder Str. 55, D-33311 Gütersloh, Germany
Registergericht Gütersloh HRB 4196, Geschäftsführer: H. Gosewehr, D. Suda
NIONEX --- Ein Unternehmen der Bertelsmann AG





Re: [Bacula-users] Chained copy job

2012-04-05 Thread Dennis Hoppe
Hello Uwe,

Am 05.04.2012 10:47, schrieb Uwe Schuerkamp:
> I see your definition below is lacking the "SQLQuery" for the
> "Selection Type", might this be part of the problem? 

the "Selection Type" is defined at the following "JobDefs". I read
somwhere that i have to use a "Selection Type" instead of
"PoolUncopiedJobs", because it does not set a "priorjobid".

JobDefs {
  Name = "DefaultCopy"
  Type = Copy
  Selection Type = SQL Query
  Level = Incremental
  Client = backup-fd
  FileSet = "backup-all"
  Schedule = "DailyCopy"
  Storage = backup-usb
  Messages = Standard
  Pool = backup-daily
  Allow Duplicate Jobs = yes
  Allow Higher Duplicates = no
  Cancel Queued Duplicates = no
  Priority = 20
  Write Bootstrap = "/var/lib/bacula/%c.bsr"
}

Maybe I should give you a more verbose example.

The last backup of my Job "backup-all" looks like:

 jobid |    name    | type | level | priorjobid
-------+------------+------+-------+------------
  1123 | backup-all | B    | F     |          0

The last copy of my Job "backup-copy-monthly-usb" looks like:

 jobid |          name           | type | level | priorjobid
-------+-------------------------+------+-------+------------
  1139 | backup-copy-monthly-usb | c    | F     |          0
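
For reference, listings like these come from a catalog query along the
lines of the following (just a sketch against a PostgreSQL catalog, not
my exact session):

SELECT jobid, name, type, level, priorjobid
FROM job
WHERE name IN ('backup-all', 'backup-copy-monthly-usb')
ORDER BY jobid DESC;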

The current values of my Job "backup-copy-monthly-nas" would look like:
JobName:   backup-copy-monthly-nas
Bootstrap: *None*
Client:backup-fd
FileSet:   backup-all
Pool:  backup-monthly (From Job resource)
Read Storage:  backup (From Job resource)
Write Storage: backup-usb (From Storage from Pool's NextPool resource)
JobId: *None*
When:  2012-04-05 11:54:16
Catalog:   MyCatalog
Priority:  20

At this point the "Pool" and "Read Storage" should be taken from "JobId"
"1139" and point via "NextPool" to "backup-monthly-nas". The correct
"JobId" is selected by Bacula in this case, but the wrong pools /
storages are used.

05-Apr 12:02 backup-dir JobId 1324: The following 1 JobId was chosen to
be copied: 1139
05-Apr 12:02 backup-dir JobId 1324: Copying using JobId=1139
Job=backup-copy-monthly-usb.2012-04-01_06.05.00_51
05-Apr 12:02 backup-dir JobId 1324: No files found to read. No bootstrap
file written.
05-Apr 12:02 backup-dir JobId 1324: Previous Job has no data to copy.
05-Apr 12:02 backup-dir JobId 1324: Bacula backup-dir 5.0.2 (28Apr10):
05-Apr-2012 12:02:44
  Build OS:   i486-pc-linux-gnu debian squeeze/sid
  Prev Backup JobId:  1139
  Prev Backup Job:backup-copy-monthly-usb.2012-04-01_06.05.00_51
  New Backup JobId:   0
  Current JobId:  1324
  Current Job:backup-copy-monthly-nas.2012-04-05_12.02.42_18
  Backup Level:   Full
  Client: backup-fd
  FileSet:"backup-all" 2012-02-22 23:05:00
  Read Pool:  "backup-monthly" (From Job resource)
  Read Storage:   "backup" (From Job resource)
  Write Pool: "backup-monthly-usb" (From Job Pool's NextPool
resource)
  Write Storage:  "backup-usb" (From Storage from Pool's
NextPool resource)
  Catalog:"MyCatalog" (From Client resource)
  Start time: 05-Apr-2012 12:02:44
  End time:   05-Apr-2012 12:02:44
  Elapsed time:   0 secs
  Priority:   20
  SD Files Written:   0
  SD Bytes Written:   0 (0 B)
  Rate:   0.0 KB/s
  Volume name(s):
  Volume Session Id:  0
  Volume Session Time:0
  Last Volume Bytes:  0 (0 B)
  SD Errors:  0
  SD termination status:
  Termination:Copying -- no files to copy

I am wondering how Bacula is getting the "NextPool", because the
database shows nothing, and why Bacula thinks there is nothing to copy.

 poolid |        name        | nextpoolid
--------+--------------------+------------
      4 | backup-monthly     |          0
      5 | backup-daily       |          0
      6 | backup-weekly      |          0
     50 | backup-monthly-usb |          0
     51 | backup-daily-usb   |          0
     52 | backup-weekly-usb  |          0
     89 | backup-monthly-nas |          0
     90 | backup-daily-nas   |          0
     91 | backup-weekly-nas  |          0
(9 rows)

Regards, Dennis





Re: [Bacula-users] Chained copy job

2012-04-05 Thread Uwe Schuerkamp
On Thu, Apr 05, 2012 at 12:30:43PM +0200, Dennis Hoppe wrote:
> Hello Uwe,

> the "Selection Type" is defined at the following "JobDefs". I read
> somwhere that i have to use a "Selection Type" instead of
> "PoolUncopiedJobs", because it does not set a "priorjobid".
> 

Hi Dennis, 

sorry, I must have overlooked that bit. Thanks to your previous example
I have now been able to define a fine-grained copy job for one of my
pools, as I wasn't even aware of the SQLQuery Selection type ;-)

I'm happy to report that in my case everything worked as expected, so I
checked your config for any obvious boo-boos (I think you can tell by
now that my Bacula expertise is still very superficial, even after
five years of admin'ing multiple deployments). 


> I am wondering how Bacula is getting the "NextPool", because the
> Database shows nothing and why Bacula thinks there is nothing to copy.
> 
>  poolid |name| nextpoolid
> ++
>   4 | backup-monthly |  0
>   5 | backup-daily   |  0
>   6 | backup-weekly  |  0
>  50 | backup-monthly-usb |  0
>  51 | backup-daily-usb   |  0
>  52 | backup-weekly-usb  |  0
>  89 | backup-monthly-nas |  0
>  90 | backup-daily-nas   |  0
>  91 | backup-weekly-nas  |  0
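
Just a thought, and I have not verified this: I believe the director
takes "Next Pool" from the Pool resource in bacula-dir.conf at run time,
and the nextpoolid column in the catalog only gets refreshed when the
pool record is updated from the resource, so the zeros above may be a
red herring. Roughly (from memory, the exact prompts may be off):

*update
  (choose "Pool from resource", then pick backup-monthly-usb)
*sql
select poolid, name, nextpoolid from pool;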

Just out of curiosity, are all your online / disk based pools living
in the same directory? 

All the best, Uwe 




-- 
NIONEX --- Ein Unternehmen der Bertelsmann AG





Re: [Bacula-users] Chained copy job

2012-04-05 Thread Dennis Hoppe
Hello Uwe,

Am 05.04.2012 12:38, schrieb Uwe Schuerkamp:
>> the "Selection Type" is defined at the following "JobDefs". I read
>> somwhere that i have to use a "Selection Type" instead of
>> "PoolUncopiedJobs", because it does not set a "priorjobid".
> 
> sorry I must have overlooked that bit. Thanks to your previous example
> I have now been able to define a fine-grained copy job for one of my
> pools as I wasn't even aware of the SQLQuery Selection type ;-) 
> 
> I'm happy to report that in my case, everything worked as expected so
> checked your config for any obvious boo-boos (I think you can tell by
> now that my bacula expertise is still very superficial, even after
> five years of admin'ing multiple deployments). 

Maybe you could send me your config files? Which distribution and
version are you using?

>> I am wondering how Bacula is getting the "NextPool", because the
>> Database shows nothing and why Bacula thinks there is nothing to copy.
>>
>>  poolid |name| nextpoolid
>> ++
>>   4 | backup-monthly |  0
>>   5 | backup-daily   |  0
>>   6 | backup-weekly  |  0
>>  50 | backup-monthly-usb |  0
>>  51 | backup-daily-usb   |  0
>>  52 | backup-weekly-usb  |  0
>>  89 | backup-monthly-nas |  0
>>  90 | backup-daily-nas   |  0
>>  91 | backup-weekly-nas  |  0
> 
> Just out of curiousity, are all your online / disk based pools living
> in the same directory? 

I have one directory for my storages (disk, USB disk, NAS) and beneath
that one directory per host, which contains the full, differential and
incremental volumes.

Regards, Dennis





Re: [Bacula-users] Chained copy job

2012-04-05 Thread Uwe Schuerkamp
On Thu, Apr 05, 2012 at 12:47:02PM +0200, Dennis Hoppe wrote:

> maybe you could send me your config files? Which distribution and
> version are you using?
> 

Hello Dennis, 

I'm running Bacula 5.2.6 compiled from source on CentOS 6.2 (64bit)
with a MySQL backend. 

Here's the relevant config for the pool I'm copying to tape: 

Pool {
  Name = Online_full
  Pool Type = Backup
  Storage = FileStorage_full
  Recycle = yes
  AutoPrune = yes   
  Volume Retention = 180 days
  Purge Oldest Volume = yes
  Recycle  Oldest Volume = yes
  Maximum Volumes = 28
  Maximum Volume Bytes = 195G
  Label Format ="full-"
  Next  Pool = "Offline"
}

Pool {
  Storage = lto4
  Name = Offline
  Pool Type = Backup
  Recycle = yes
  AutoPrune = yes
  Volume Retention = 365 days
  Purge Oldest Volume = yes
  Recycle  Oldest Volume = yes
  Maximum Volumes = 21
  Label Format ="Offline-"
}

and here's my copy job definition for the full backups completed
within the last 7 days: 

Job {
  Name = "copy-full"
  Type = Copy
  Level = Full
  Priority = 15  
  Pool = Online_full
  Messages = Standard
  Maximum Concurrent Jobs = 2
  Schedule = Copy-full # runs once a week for now 
  Selection Type = SQLQuery
  Selection Pattern = "
SELECT jobid
FROM Job
WHERE PoolId = 4
AND type = 'B'  
AND level = 'F' 
AND jobstatus = 'T' 
AND EndTime >= SUBDATE(NOW(), INTERVAL 7 DAY)   
GROUP BY Name;"

}

I'm restricting the copy jobs artificially for now because we have run
the full pool for quite a while and I've only recently started messing
around with copy jobs, so for this pool I only want jobs completed in
the last week to be copied over to tape. I bet there's a more clever
way to implement this, but your SQLQuery thingy turned out to be a
godsend for the task at hand, although it could probably be done with
AllowDuplicateSomething or other in a more supported and elegant
fashion.
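
If hard-coding "PoolId = 4" ever becomes a nuisance, the same selection
could probably be written against the pool name instead. An untested
sketch against the stock MySQL catalog schema:

SELECT Job.JobId
FROM Job
JOIN Pool ON Pool.PoolId = Job.PoolId
WHERE Pool.Name = 'Online_full'
AND Job.Type = 'B'
AND Job.Level = 'F'
AND Job.JobStatus = 'T'
AND Job.EndTime >= SUBDATE(NOW(), INTERVAL 7 DAY)
GROUP BY Job.Name;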

All the best, Uwe 



-- 
NIONEX --- Ein Unternehmen der Bertelsmann AG





Re: [Bacula-users] Chained copy job

2012-04-05 Thread Dennis Hoppe
Hello Uwe,

Am 05.04.2012 13:08, schrieb Uwe Schuerkamp:
> On Thu, Apr 05, 2012 at 12:47:02PM +0200, Dennis Hoppe wrote:
>> maybe you could send me your config files? Which distribution and
>> version are you using?
> 
> I'm running Bacula 5.2.6 compiled from source on CentOS 6.2 (64bit)
> with a MySQL backend. 
> 
> Here's the relevant config for the pool I'm copying to tape: 
> 
> Pool {
>   Name = Online_full
>   Pool Type = Backup
>   Storage = FileStorage_full
>   Recycle = yes
>   AutoPrune = yes   
>   Volume Retention = 180 days
>   Purge Oldest Volume = yes
>   Recycle  Oldest Volume = yes
>   Maximum Volumes = 28
>   Maximum Volume Bytes = 195G
>   Label Format ="full-"
>   Next  Pool = "Offline"
> }
> 
> Pool {
>   Storage = lto4
>   Name = Offline
>   Pool Type = Backup
>   Recycle = yes
>   AutoPrune = yes
>   Volume Retention = 365 days
>   Purge Oldest Volume = yes
>   Recycle  Oldest Volume = yes
>   Maximum Volumes = 21
>   Label Format ="Offline-"
> }
> 
> and here's my copy job definition for the full backups completed
> within the last 7 days: 
> 
> Job {
>   Name = "copy-full"
>   Type = Copy
>   Level = Full
>   Priority = 15  
>   Pool = Online_full
>   Messages = Standard
>   Maximum Concurrent Jobs = 2
>   Schedule = Copy-full # runs once a week for now 
>   Selection Type = SQLQuery
>   Selection Pattern = "
> SELECT jobid
> FROM Job
> WHERE PoolId = 4
> AND type = 'B'
> AND level = 'F'
> AND jobstatus = 'T'
> AND EndTime >= SUBDATE(NOW(), INTERVAL 7 DAY)
> GROUP BY Name;"
> 
> }
> 
> I'm restricting the copy jobs artificially for now because we have run
> the full pool for quite a while and I've only recently started messing
> around with copy jobs, so for this pool I only want jobs completed in
> the last week to be copied over to tape. I bet there's a more clever
> way to implement this, but your SQLQuery thingy turned out to be a
> godsend for the task at hand, although it could probably done with
> AllowDuplicateSomething or other in a more supported and elegant
> fashion.

I am also able to copy the volume from "backup-monthly" to
"backup-monthly-usb", but I also want to copy the volume from
"backup-monthly-usb" to "backup-monthly-nas".

backup-monthly  ->  backup-monthly-usb  ->  backup-monthly-nas
(primary)           (fallback)              (different building)

I think Bacula is not able to recognize the second "NextPool" at
"backup-monthly-usb", so it says something like "Hey, I have already
copied your volume from 'backup-monthly' to 'backup-monthly-usb'. There
is nothing else to do".
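
Maybe the second hop has to read from the USB pool directly, so that its
"Next Pool" (backup-monthly-nas) is the one that applies? Just a sketch
of what I mean, untested:

Job {
  Name = "backup-copy-monthly-nas"
  JobDefs = "DefaultCopy"
  Pool = backup-monthly-usb   # read pool; its Next Pool is backup-monthly-nas
  ...
}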

Regards, Dennis





[Bacula-users] parallelizing jobs

2012-04-05 Thread Wolfgang Denk

Hi,

I wonder why I see situations where a client is waiting for another job
to complete that is only despooling, i.e. one that does not block any
resources on the client:

 53100 Increme   SD despooling Data
 53101 Increme   is waiting on max Client jobs

This is with bacula 5.0.3 as distributed with Fedora 15.

Settings:

- in FD and SD: Maximum Concurrent Jobs = 20
- in Job:   Maximum Concurrent Jobs = 6
- in Client:Maximum Concurrent Jobs = 1
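
For reference, a rough sketch of where these directives sit (resource
names are made up, other required directives omitted):

# bacula-fd.conf
FileDaemon {
  Name = myhost-fd
  Maximum Concurrent Jobs = 20
}

# bacula-dir.conf
Client {
  Name = myhost-fd
  Maximum Concurrent Jobs = 1
}
Job {
  Name = myhost-job
  Maximum Concurrent Jobs = 6
}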

I am aware that I'm limiting the number of jobs on the client to 1,
and this is intentional.  But the "SD despooling Data" is something
that involves the DIR and the SD only, so it should not block the
client from starting the next backup job.

Seems I'm missing something here.  Any ideas are highly welcome.

Thanks in advance.


Best regards,

Wolfgang Denk

-- 
DENX Software Engineering GmbH, MD: Wolfgang Denk & Detlev Zundel
HRB 165235 Munich, Office: Kirchenstr.5, D-82194 Groebenzell, Germany
Phone: (+49)-8142-66989-10 Fax: (+49)-8142-66989-80 Email: w...@denx.de
Severe culture shock results when experts from another protocol suite
[...] try to read OSI documents. The term "osified" is used to  refer
to  such  documents. [...] Any relationship to the word "ossified" is
purely intentional.- Marshall T. Rose



Re: [Bacula-users] parallelizing jobs

2012-04-05 Thread John Drescher
On Thu, Apr 5, 2012 at 8:18 AM, Wolfgang Denk  wrote:
>
> Hi,
>
> I wonder why I see situations that a client is waiting for another job
> to complete, that is only despooling, i. e. that does not block any
> resources on the client:
>
>  53100 Increme   SD despooling Data
>  53101 Increme   is waiting on max Client jobs
>
> This is with bacula 5.0.3 as distributed with Fedora 15.
>
> Settings:
>
> - in FD and SD: Maximum Concurrent Jobs = 20
> - in Job:       Maximum Concurrent Jobs = 6
> - in Client:    Maximum Concurrent Jobs = 1
>
> I am aware that I'm limiting the number of jobs on the client to 1,
> and this is intentional.  But the "SD despooling Data" is something
> that involves the DIR and the SD only, so it should not block the
> client from starting the next backup job.
>
> Seems I'm missing something here.  Any ideas are highly welcome.
>
> Thanks in advance.
>
Do you have any restrictions on how many jobs go per volume?

John



[Bacula-users] Virtual Full Backups Schedule Considerations

2012-04-05 Thread David Palmer
Hello,

I am looking at moving towards virtual full backups as a way to cut down
on our backup times. The situation is that our full backups on several
of our file servers are running over their window. My question is: what
could the drawbacks be of moving towards virtual full backups?

I currently back up to disk and copy to tape (full backups only) for
archival purposes. My idea would be to do a virtual full backup to the
disk and still copy that to tape. Are there any risks of the backups
becoming corrupt? Should I still do periodic full backups, or is anyone
using this technique to do infinite incremental backups?

Thanks for your help,

David




[Bacula-users] Backups increased to 500GB after adding to IPA domain

2012-04-05 Thread Abdullah Sofizada
Hi guys, this is a very weird one. I have been trying to tackle this for
the past two weeks or so to no avail...

My director runs on Red Hat RHEL 5.5 with Bacula 5.0.2. My clients are
Red Hat RHEL 5.5 with Bacula 5.0.2.

Each of the Bacula clients holds less than 15 GB of data. Backups of each
client were fine. But two weeks ago the backups for each of these
clients ballooned to 550GB each!!

When I do a df -h... the servers only show 15GB of space used. The one
difference I noticed in the past two weeks is that I added these servers
to our new IPA domain, which in essence is an LDAP server using Kerberos
authentication for identity management. This server runs on RHEL 6.2.

I have many other clients which are not part of the IPA domain that are
backing up just fine. So I'm sure it has something to do with this. I
have even tried to remove my Bacula clients from the IPA domain, then
ran a backup. But it still reports 550GB of data being backed up.

I appreciate the help...



-- 
-Abdullah




Re: [Bacula-users] Backups increased to 500GB after adding to IPA domain

2012-04-05 Thread Steve Ellis
On 4/5/12 8:21 AM, Abdullah Sofizada wrote:
> Hi guys, this is a very weird one. I been trying to tackle this for the
> past two weeks or so to no avail...
>
> My director runs on Redhat Rhel 5.5 running bacula 5.0.2. My clients are
> Redhat Rhel 5.5 running bacula 5.0.2.
>
> Each of the bacula clients are less than 15 GB of data. Backups of each
> client were fine. But two weeks ago the backups for each of these
> clients ballooned to 550GB each!!
>
> When I do a df -h... the servers only show 15GB of space used. The one
> difference I noticed in the past two weeks is...I added these servers to
> our new IPA domain. Which in essence is an ldap server using kerberos
> authentication for identity management. This server runs on Rhel 6.2.
>
> I have many other clients which are not part of the IPA domain that are
> backing up just fine. So I'm sure it has something to do with this. I
> have even tried to remove my bacula clients out of the IPA domain, than
> ran a backup. But it still reports at 550GB of data being backed up.
>
> I appreciate the help...
>
>
>
Complete guess: Does something that you added use one or more 'large' 
sparse files?  If so, either exclude those files from the backup, if 
they are say, log-type files, or turn on sparse file detection in the 
fileset (add sparse=yes to the fileset Options resource, as I recall).  
The only other thing I could think of is if you are using 'onefs=yes' 
and something introduced links such that most of your data is now being 
backed up multiple times.
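
Something along these lines, as a sketch (the FileSet name and layout are
made up; only the sparse option matters here):

FileSet {
  Name = "example-fs"
  Include {
    Options {
      signature = MD5
      sparse = yes        # detect sparse files and skip the holes
    }
    File = /
  }
}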

-se



Re: [Bacula-users] Backups increased to 500GB after adding to IPA domain

2012-04-05 Thread Pablo Marques
Abdullah:

Make sure you have this in your fileset definition

Sparse = yes

Also you can do this in bconsole:

estimate job=client_job_whatever listing
 
it will print the list of files to be backed up.
Look for big files in the list.


Pablo
- Original Message -
From: "Abdullah Sofizada" 
To: bacula-users@lists.sourceforge.net
Sent: Thursday, April 5, 2012 11:21:37 AM
Subject: [Bacula-users] Backups increased to 500GB after adding to IPA domain

Hi guys, this is a very weird one. I been trying to tackle this for the 
past two weeks or so to no avail...

My director runs on Redhat Rhel 5.5 running bacula 5.0.2. My clients are 
Redhat Rhel 5.5 running bacula 5.0.2.

Each of the bacula clients are less than 15 GB of data. Backups of each 
client were fine. But two weeks ago the backups for each of these 
clients ballooned to 550GB each!!

When I do a df -h... the servers only show 15GB of space used. The one 
difference I noticed in the past two weeks is...I added these servers to 
our new IPA domain. Which in essence is an ldap server using kerberos 
authentication for identity management. This server runs on Rhel 6.2.

I have many other clients which are not part of the IPA domain that are 
backing up just fine. So I'm sure it has something to do with this. I 
have even tried to remove my bacula clients out of the IPA domain, than 
ran a backup. But it still reports at 550GB of data being backed up.

I appreciate the help...



-- 
-Abdullah




Re: [Bacula-users] parallelizing jobs

2012-04-05 Thread Wolfgang Denk
Dear John Drescher,

In message  
you wrote:
>
> Do you have any restrictions on how many jobs go per volume?

No.

Best regards,

Wolfgang Denk

-- 
DENX Software Engineering GmbH, MD: Wolfgang Denk & Detlev Zundel
HRB 165235 Munich, Office: Kirchenstr.5, D-82194 Groebenzell, Germany
Phone: (+49)-8142-66989-10 Fax: (+49)-8142-66989-80 Email: w...@denx.de
What is research but a blind date with knowledge?  -- Will Harvey



Re: [Bacula-users] parallelizing jobs

2012-04-05 Thread John Drescher
On Thu, Apr 5, 2012 at 2:19 PM, Wolfgang Denk  wrote:
> Dear John Drescher,
>
> In message 
>  you 
> wrote:
>>
>> Do you have any restrictions on how many jobs go per volume?
>
> No.

Is the same volume used by both clients? I mean you are not using a
different pool per client or something like that?

John



Re: [Bacula-users] Catalog backup while job running?

2012-04-05 Thread Stephen Thompson
On 04/02/2012 03:33 PM, Phil Stracchino wrote:
> On 04/02/2012 06:06 PM, Stephen Thompson wrote:
>>
>>
>> First off, thanks for the response Phil.
>>
>>
>> On 04/02/2012 01:11 PM, Phil Stracchino wrote:
>>> On 04/02/2012 01:49 PM, Stephen Thompson wrote:
 Well, we've made the leap from MyISAM to InnoDB, seems like we win on
 transactions, but lose on read speed.
>>>
>>> If you're finding InnoDB slower than MyISAM on reads, your InnoDB buffer
>>> pool is probably too small.
>>
>> This is probably true, but I have limited system resources and my File
>> table is almost 300Gb large.
>
> Ah, well, sometimes there's only so much you can allocate.
>
>>> --skip-lock-tables is referred to in the mysqldump documentation, but
>>> isn't actually a valid option.  This is actually an increasingly
>>> horrible problem with mysqldump.  It has been very poorly maintained,
>>> and has barely developed at all in ten or fifteen years.
>>>
>>
>> This has me confused.  I have jobs that can run, and insert records into
>> the File table, while I am dumping the Catalog.  It's only at the
>> tail-end that a few jobs get the error above.  Wouldn't a locked File
>> table cause all concurrent jobs to fail?
>
> Hmm.  I stand corrected.  I've never seen it listed as an option in the
> man page, despite there being one reference to it, but I see that
> mysqldump --help does explain it even though the man page doesn't.
>
> In that case, the only thing I can think of is that you have multiple
> jobs trying to insert attributes at the same time and the last ones in
> line are timing out.
>


This appears to be the root cause.  After running a few more nights, the 
coincidence with the Catalog dump was not maintained.  It happens for a 
few jobs each night, at different times, different jobs, and sometimes 
when no Catalog dump is occurring.

I think it's simply that a bunch of batch inserts wind up running at the 
same time and the last in line run out of time.  Rather than setting my 
timeout arbitrarily large (10 minutes did not solve the problem), I am 
curious about what you say below.

> (Locking the table for batch attribute insertion actually isn't
> necessary; MySQL can be configured to interleave auto_increment inserts.
>   However, that's the way Bacula does it.)

Are you saying that if I turn on auto_increment inserts in MySQL, it 
won't matter whether or not bacula is asking for locks during batch 
inserts?  Or does bacula also need to be configured (patched) not to use 
locks during batch inserts?

And lastly, why does the bacula documentation claim that locks are 
'essential' for batch inserts and you claim they are not?

I'm surprised more folks running mysql InnoDB and bacula aren't having 
this problem since I stumbled upon it so easily.  :)  Perhaps the trend 
is MySQL MyISAM --> Postgres.


>
> Don't know that I have any helpful suggestions there, then...  sorry.
>
>
>

thanks!
Stephen
-- 
Stephen Thompson   Berkeley Seismological Laboratory
step...@seismo.berkeley.edu215 McCone Hall # 4760
404.538.7077 (phone)   University of California, Berkeley
510.643.5811 (fax) Berkeley, CA 94720-4760



Re: [Bacula-users] parallelizing jobs

2012-04-05 Thread mark . bergman
In the message dated: Thu, 05 Apr 2012 14:18:22 +0200,
The pithy ruminations from Wolfgang Denk on 
<[Bacula-users] parallelizing jobs> were:
=> 
=> Hi,
=> 
=> I wonder why I see situations that a client is waiting for another job
=> to complete, that is only despooling, i. e. that does not block any
=> resources on the client:

This has been discussed several times. Check the list archives for "concurrent
spooling":

https://www.google.com/search?q=bacula+mailing+list+concurrent+spooling

Unfortunately, there does not seem to be a solution.

Mark

=> 
=>  53100 Increme   SD despooling Data
=>  53101 Increme   is waiting on max Client jobs
=> 
=> This is with bacula 5.0.3 as distributed with Fedora 15.
=> 
=> Settings:
=> 
=> - in FD and SD:  Maximum Concurrent Jobs = 20
=> - in Job:   Maximum Concurrent Jobs = 6
=> - in Client:Maximum Concurrent Jobs = 1
=> 
=> I am aware that I'm limiting the number of jobs on the client to 1,
=> and this is intentional.  But the "SD despooling Data" is something
=> that involves the DIR and the SD only, so it should not block the
=> client from starting the next backup job.
=> 
=> Seems I'm missing something here.  Any ideas are highly welcome.
=> 
=> Thanks in advance.
=> 
=> 
=> Best regards,
=> 
=> Wolfgang Denk
=> 
=> -- 
=> DENX Software Engineering GmbH, MD: Wolfgang Denk & Detlev Zundel
=> HRB 165235 Munich, Office: Kirchenstr.5, D-82194 Groebenzell, Germany
=> Phone: (+49)-8142-66989-10 Fax: (+49)-8142-66989-80 Email: w...@denx.de
=> Severe culture shock results when experts from another protocol suite
=> [...] try to read OSI documents. The term "osified" is used to  refer
=> to  such  documents. [...] Any relationship to the word "ossified" is
=> purely intentional.- Marshall T. Rose
=> 





Re: [Bacula-users] Backups increased to 500GB after adding to IPA domain

2012-04-05 Thread Abdullah Sofizada
Thanks for the help guys. You are right, it was this file right here:

/var/log/lastlog ... I will exclude it from our backups.

You're the best, thanks again!
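
For anyone finding this later, a sketch of the exclusion (FileSet name
and layout are made up; alternatively "sparse = yes" in the Options would
keep lastlog but skip its holes):

FileSet {
  Name = "client-fs"
  Include {
    Options {
      signature = MD5
    }
    File = /
  }
  Exclude {
    File = /var/log/lastlog
  }
}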


On 4/5/2012 1:03 PM, Pablo Marques wrote:
> Abdullah:
>
> Make sure you have this in your fileset definition
>
> Sparse = yes
>
> Also you can do this in bconsole:
>
> estimate job=client_job_whatever listing
>
> it will print the list of files to be backed up.
> Look for big files in the list.
>
>
> Pablo
> - Original Message -
> From: "Abdullah Sofizada"
> To: bacula-users@lists.sourceforge.net
> Sent: Thursday, April 5, 2012 11:21:37 AM
> Subject: [Bacula-users] Backups increased to 500GB after adding to IPA domain
>
> Hi guys, this is a very weird one. I been trying to tackle this for the
> past two weeks or so to no avail...
>
> My director runs on Redhat Rhel 5.5 running bacula 5.0.2. My clients are
> Redhat Rhel 5.5 running bacula 5.0.2.
>
> Each of the bacula clients are less than 15 GB of data. Backups of each
> client were fine. But two weeks ago the backups for each of these
> clients ballooned to 550GB each!!
>
> When I do a df -h... the servers only show 15GB of space used. The one
> difference I noticed in the past two weeks is...I added these servers to
> our new IPA domain. Which in essence is an ldap server using kerberos
> authentication for identity management. This server runs on Rhel 6.2.
>
> I have many other clients which are not part of the IPA domain that are
> backing up just fine. So I'm sure it has something to do with this. I
> have even tried to remove my bacula clients out of the IPA domain, than
> ran a backup. But it still reports at 550GB of data being backed up.
>
> I appreciate the help...
>
>
>


-- 
-Abdullah




Re: [Bacula-users] Catalog backup while job running?

2012-04-05 Thread Phil Stracchino
On 04/05/2012 02:41 PM, Stephen Thompson wrote:
> On 04/02/2012 03:33 PM, Phil Stracchino wrote:
>> (Locking the table for batch attribute insertion actually isn't
>> necessary; MySQL can be configured to interleave auto_increment inserts.
>>   However, that's the way Bacula does it.)
> 
> Are you saying that if I turn on auto_increment inserts in MySQL, it 
> won't matter whether or not bacula is asking for locks during batch 
> inserts?  Or does bacula also need to be configured (patched) not to use 
> locks during batch inserts?

You would have to patch Bacula to not issue an explicit LOCK TABLE.  It
does not at present contain any option to not lock the table during
inserts.  This is something I've meant to experiment with myself for
some time, to compare performance, but haven't managed to get to it.  It
hasn't been a high priority for me since I don't ever have more than
about five or six jobs running, and they basically never finish up at
the same time anyway.

> And lastly, why does the bacula documentation claim that locks are 
> 'essential' for batch inserts and you claim they are not?

Basically, if interleaved mode is NOT enabled, multiple batch inserts
will contend for access to the table, since even if Bacula does not lock
the table, InnoDB will set a global AUTO_INC lock on the table any time
you attempt to insert an indeterminate number of rows into a table
containing an auto_increment field.

(In this context, actually, on InnoDB, having Bacula issue a LOCK TABLE
is redundant; InnoDB is going to lock the table anyway until the thread
is done inserting rows.)

Setting innodb_autoinc_lock_mode = 2 enables multiple threads to
interleave inserts into the same table and guarantees that they will
still get unique auto_increment row IDs; it simply does not guarantee
that the IDs allocated to any given thread will be consecutive.
However, I'm pretty sure Bacula is not written sufficiently incorrectly
to care whether it gets consecutive row IDs.  :)
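
For reference, a sketch of the my.cnf side of that (values are examples;
if binary logging or replication is in use, row-based or mixed binlog
format is recommended together with interleaved mode):

[mysqld]
innodb_autoinc_lock_mode = 2
binlog_format            = MIXED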


-- 
  Phil Stracchino, CDK#2 DoD#299792458 ICBM: 43.5607, -71.355
  ala...@caerllewys.net   ala...@metrocast.net   p...@co.ordinate.org
  Renaissance Man, Unix ronin, Perl hacker, SQL wrangler, Free Stater
 It's not the years, it's the mileage.



[Bacula-users] Bacula MySQL Catalog binlog restore

2012-04-05 Thread Joe Nyland
Hi,

I've been using Bacula for a while now and I have a backup procedure in place 
for my MySQL databases, where I perform a full (dump) backup nightly, then 
incremental (bin log) backups every hour through the day to capture changes.

I basically have a script, run as a 'RunBeforeJob' from the backup job, which 
either runs a mysqldump if the backup level is full, or flushes the bin logs 
if the level is incremental.
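
A rough sketch of such a wrapper (paths and names are made up; Bacula can
pass the level in via the %l substitution, e.g.
RunBeforeJob = "/etc/bacula/mysql-backup.sh %l"):

#!/bin/sh
LEVEL="$1"
DST=/var/backups/mysql

case "$LEVEL" in
  Full)
    # full: dump everything and start a fresh binary log
    mysqldump --all-databases --single-transaction --flush-logs \
              --master-data=2 > "$DST/full-$(date +%F).sql"
    ;;
  *)
    # incremental/differential: close the current binary log so the
    # finished ones can be picked up from the log directory
    mysqladmin flush-logs
    ;;
esac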

I'm in the process of performing some test restores from these backups, as I 
would like to know the procedure is working correctly.

I have no issue restoring the files from Bacula, however I'm having some issues 
restoring my catalog MySQL database from the binary logs created by MySQL. 
Specifically, I am getting messages like:

ERROR 1146 (42S02) at line 105: Table 'bacula.batch' doesn't exist

when I try to replay my log files against the database after it's been restore 
from the dump file. As far as I know the batch table is a temporary table 
created when inserting file attributes into the catalog during/after a backup 
job. I would have hoped, however, the creation of this table would have been 
included in either my database/earlier in my bin log.

I believe this may be related to another thread on the list at the moment 
titled "Catalog backup while job running?" as this is, in effect what I am 
doing - a full database dump whilst other jobs are running, but my reason for 
creating a new thread is that I am not getting any errors in my backup jobs, as 
the OP of the other thread is - I'm simply having issues rebuilding my database 
after restoring the said full dump.

I would like to know if anyone is currently backing up their catalog database 
in such a way, and if so how they are overcoming this issue when restoring. My 
reason for backing up my catalog using binary logging is so that I can perform 
a point-in-time recovery of the catalog, should I lose it.

Any input anyone can offer would be greatly appreciated.

Thanks,

Joe


Re: [Bacula-users] Bacula MySQL Catalog binlog restore

2012-04-05 Thread Stephen Thompson
On 04/05/2012 02:27 PM, Joe Nyland wrote:
> Hi,
>
> I've been using Bacula for a while now and I have a backup procedure in place 
> for my MySQL databases, where I perform a full (dump) backup nightly, then 
> incremental (bin log) backups every hour through the day to capture changes.
>
> I basically have a script which I have written which is run as a 
> 'RunBeforeJob' from backup and runs either a mysqldump if the backup level is 
> full, or flushes the bin logs if the level is incremental.
>
> I'm in the process of performing some test restores from these backups, as I 
> would like to know the procedure is working correctly.
>
> I have no issue restoring the files from Bacula, however I'm having some 
> issues restoring my catalog MySQL database from the binary logs created by 
> MySQL. Specifically, I am getting messages like:
>
>   ERROR 1146 (42S02) at line 105: Table 'bacula.batch' doesn't exist
>
> when I try to replay my log files against the database after it's been 
> restore from the dump file. As far as I know the batch table is a temporary 
> table created when inserting file attributes into the catalog during/after a 
> backup job. I would have hoped, however, the creation of this table would 
> have been included in either my database/earlier in my bin log.
>
> I believe this may be related to another thread on the list at the moment 
> titled "Catalog backup while job running?" as this is, in effect what I am 
> doing - a full database dump whilst other jobs are running, but my reason for 
> creating a new thread is that I am not getting any errors in my backup jobs, 
> as the OP of the other thread is - I'm simply having issues rebuilding my 
> database after restoring the said full dump.
>
> I would like to know if anyone is currently backing up their catalog database 
> in such a way, and if so how they are overcoming this issue when restoring. 
> My reason for backing up my catalog using binary logging is so that I can 
> perform a point-in-time recovery of the catalog, should I loose it.
>


I am not running a catalog backup in that way, but have thought about it.

You're correct that the batch tables are temporary tables created so 
that jobs can do batch inserts of the file attributes.

I did run into a similar problem to yours when I had a MySQL slave 
server out of sync with the master.  The slave (much like your restore) 
was reading through binlogs to catch up and ran into a line that 
referred to a batch table, which didn't exist.  In my case, it didn't 
exist because the slave never saw an earlier line that created the 
temporary batch table.

I would imagine something similar is going on with your restore, where 
you are not actually applying all the changes since the Full dump (or 
did not capture all the changes since the Full dump), because somewhere 
you should have a line in your binlogs that create the batch table 
before other lines refer to and try to use it.

Also, keep in mind that these temporary batch tables are owned by 
threads, so if you start looking through your binlogs, you'll see many 
references to bacula.batch, but they are not all referring to the same 
table.  Each thread is able to have its own bacula.batch table.
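
One quick way to check (a sketch; the binlog file names are examples) is
to decode the logs and look at where the CREATE TEMPORARY TABLE statements
sit relative to the inserts that reference the batch table:

mysqlbinlog mysql-bin.000101 mysql-bin.000102 | grep -n -i -B2 'batch'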


Stephen







> Any input anyone can offer would be greatly appreciated.
>
> Thanks,
>
> Joe


-- 
Stephen Thompson   Berkeley Seismological Laboratory
step...@seismo.berkeley.edu215 McCone Hall # 4760
404.538.7077 (phone)   University of California, Berkeley
510.643.5811 (fax) Berkeley, CA 94720-4760



Re: [Bacula-users] Bacula MySQL Catalog binlog restore

2012-04-05 Thread Joe Nyland
On 5 Apr 2012, at 22:37, Stephen Thompson wrote:

> On 04/05/2012 02:27 PM, Joe Nyland wrote:
>> Hi,
>> 
>> I've been using Bacula for a while now and I have a backup procedure in 
>> place for my MySQL databases, where I perform a full (dump) backup nightly, 
>> then incremental (bin log) backups every hour through the day to capture 
>> changes.
>> 
>> I basically have a script which I have written which is run as a 
>> 'RunBeforeJob' from backup and runs either a mysqldump if the backup level 
>> is full, or flushes the bin logs if the level is incremental.
>> 
>> I'm in the process of performing some test restores from these backups, as I 
>> would like to know the procedure is working correctly.
>> 
>> I have no issue restoring the files from Bacula, however I'm having some 
>> issues restoring my catalog MySQL database from the binary logs created by 
>> MySQL. Specifically, I am getting messages like:
>> 
>>  ERROR 1146 (42S02) at line 105: Table 'bacula.batch' doesn't exist
>> 
>> when I try to replay my log files against the database after it's been 
>> restore from the dump file. As far as I know the batch table is a temporary 
>> table created when inserting file attributes into the catalog during/after a 
>> backup job. I would have hoped, however, the creation of this table would 
>> have been included in either my database/earlier in my bin log.
>> 
>> I believe this may be related to another thread on the list at the moment 
>> titled "Catalog backup while job running?" as this is, in effect what I am 
>> doing - a full database dump whilst other jobs are running, but my reason 
>> for creating a new thread is that I am not getting any errors in my backup 
>> jobs, as the OP of the other thread is - I'm simply having issues rebuilding 
>> my database after restoring the said full dump.
>> 
>> I would like to know if anyone is currently backing up their catalog 
>> database in such a way, and if so how they are overcoming this issue when 
>> restoring. My reason for backing up my catalog using binary logging is so 
>> that I can perform a point-in-time recovery of the catalog, should I loose 
>> it.
>> 
> 
> 
> I am not running a catalog backup in that way, but have thought about it.
> 
> You're correct that the batch tables are temporary tables created so that 
> jobs can do batch inserts of the file attributes.
> 
> I did run into a similar problem to yours when I had a MySQL slave server out 
> of sync with the master.  The slave (much like your restore) was reading 
> through binlogs to catch up and ran into a line that referred to a batch 
> table, which didn't exist.  In my case, it didn't exist because the slave 
> never saw an earlier line that created the temporary batch table.
> 
> I would imagine something similar is going on with your restore, where you 
> are not actually applying all the changes since the Full dump (or did not 
> capture all the changes since the Full dump), because somewhere you should 
> have a line in your binlogs that create the batch table before other lines 
> refer to and try to use it.
> 
> Also, keep in mind that theses temporary batch tables are owned by threads, 
> so if you start looking through your binlogs, you'll see many references to 
> bacula.batch, but they are not all referring to the same table.  Each thread 
> is able to have it's own bacula.batch table.
> 
> 
> Stephen
> 
> 
>> Any input anyone can offer would be greatly appreciated.
>> 
>> Thanks,
>> 
>> Joe
> 
> 
> -- 
> Stephen Thompson   Berkeley Seismological Laboratory
> step...@seismo.berkeley.edu215 McCone Hall # 4760
> 404.538.7077 (phone)   University of California, Berkeley
> 510.643.5811 (fax) Berkeley, CA 94720-4760

Hi Stephen,

Thank you very much for your reply.

I agree that it seems the creation of the batch table is not being captured, 
for some reason.

As I think it may be useful, here's the line taken from my MySQL 'RunBeforeJob' 
script when the full backup is taken:

mysqldump --all-databases --single-transaction --delete-master-logs 
--flush-logs --master-data --opt -u ${DBUSER} -p${DBPASS} > 
${DST}/${HOST}_${DATE}_${TIME}.sql.dmp

Can you spot anything there which could cause the creation of this/these 
temporary tables to not be included in the bin log? I've spent a while getting 
this list of options right and I'm not 100% sure I've got the correct 
combination, but it's possible I've missed something here.
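
For reference, the kind of replay I am attempting looks roughly like this
(file names and the timestamp are examples, not my real ones):

mysql -u root -p < host_2012-04-05_0300.sql.dmp
mysqlbinlog mysql-bin.000101 mysql-bin.000102 \
    --stop-datetime="2012-04-05 12:00:00" | mysql -u root -p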

Thanks,

Joe


Re: [Bacula-users] parallelizing jobs

2012-04-05 Thread Wolfgang Denk
Dear John Drescher,

In message  
you wrote:
>
> >> Do you have any restrictions on how many jobs go per volume?
> >
> > No.
> 
> Is the same volume used by both clients? I mean you are not using a
> different pool per client or something like that?

All these jobs use the same pool, so both jobs and thus both clients
will use the same volume.  There are other jobs running in parallel,
too (but not as many as to run into the max job limit).

Best regards,

Wolfgang Denk

-- 
DENX Software Engineering GmbH, MD: Wolfgang Denk & Detlev Zundel
HRB 165235 Munich, Office: Kirchenstr.5, D-82194 Groebenzell, Germany
Phone: (+49)-8142-66989-10 Fax: (+49)-8142-66989-80 Email: w...@denx.de
The only perfect science is hind-sight.



Re: [Bacula-users] parallelizing jobs

2012-04-05 Thread Wolfgang Denk
Dear Mark,

In message <3158.1333653400@localhost> you wrote:
>
> => I wonder why I see situations that a client is waiting for another job
> => to complete, that is only despooling, i. e. that does not block any
> => resources on the client:
> 
> This has been discussed several times. Check the list archives for "concurrent
> spooling":
> 
>   https://www.google.com/search?q=bacula+mailing+list+concurrent+spooling

I'm not sure if this is the same problem.  In my case, there is
actually no concurrency. The job on the client has terminated, and the
client says it is running no jobs at this time.  Only the SD and DIR
are still processing this job.

Best regards,

Wolfgang Denk

-- 
DENX Software Engineering GmbH, MD: Wolfgang Denk & Detlev Zundel
HRB 165235 Munich, Office: Kirchenstr.5, D-82194 Groebenzell, Germany
Phone: (+49)-8142-66989-10 Fax: (+49)-8142-66989-80 Email: w...@denx.de
"An open mind has but one disadvantage: it collects dirt."
- a saying at RPI



Re: [Bacula-users] Bacula MySQL Catalog binlog restore

2012-04-05 Thread Stephen Thompson
On 04/05/2012 03:19 PM, Joe Nyland wrote:
> On 5 Apr 2012, at 22:37, Stephen Thompson wrote:
>
>> On 04/05/2012 02:27 PM, Joe Nyland wrote:
>>> Hi,
>>>
>>> I've been using Bacula for a while now and I have a backup procedure in 
>>> place for my MySQL databases, where I perform a full (dump) backup nightly, 
>>> then incremental (bin log) backups every hour through the day to capture 
>>> changes.
>>>
>>> I basically have a script which I have written which is run as a 
>>> 'RunBeforeJob' from backup and runs either a mysqldump if the backup level 
>>> is full, or flushes the bin logs if the level is incremental.
>>>
>>> I'm in the process of performing some test restores from these backups, as 
>>> I would like to know the procedure is working correctly.
>>>
>>> I have no issue restoring the files from Bacula, however I'm having some 
>>> issues restoring my catalog MySQL database from the binary logs created by 
>>> MySQL. Specifically, I am getting messages like:
>>>
>>> ERROR 1146 (42S02) at line 105: Table 'bacula.batch' doesn't exist
>>>
>>> when I try to replay my log files against the database after it's been 
>>> restore from the dump file. As far as I know the batch table is a temporary 
>>> table created when inserting file attributes into the catalog during/after 
>>> a backup job. I would have hoped, however, the creation of this table would 
>>> have been included in either my database/earlier in my bin log.
>>>
>>> I believe this may be related to another thread on the list at the moment 
>>> titled "Catalog backup while job running?" as this is, in effect what I am 
>>> doing - a full database dump whilst other jobs are running, but my reason 
>>> for creating a new thread is that I am not getting any errors in my backup 
>>> jobs, as the OP of the other thread is - I'm simply having issues 
>>> rebuilding my database after restoring the said full dump.
>>>
>>> I would like to know if anyone is currently backing up their catalog 
>>> database in such a way, and if so how they are overcoming this issue when 
>>> restoring. My reason for backing up my catalog using binary logging is so 
>>> that I can perform a point-in-time recovery of the catalog, should I loose 
>>> it.
>>>
>>
>>
>> I am not running a catalog backup in that way, but have thought about it.
>>
>> You're correct that the batch tables are temporary tables created so that 
>> jobs can do batch inserts of the file attributes.
>>
>> I did run into a similar problem to yours when I had a MySQL slave server 
>> out of sync with the master.  The slave (much like your restore) was reading 
>> through binlogs to catch up and ran into a line that referred to a batch 
>> table, which didn't exist.  In my case, it didn't exist because the slave 
>> never saw an earlier line that created the temporary batch table.
>>
>> I would imagine something similar is going on with your restore, where you 
>> are not actually applying all the changes since the Full dump (or did not 
>> capture all the changes since the Full dump), because somewhere you should 
>> have a line in your binlogs that create the batch table before other lines 
>> refer to and try to use it.
>>
>> Also, keep in mind that theses temporary batch tables are owned by threads, 
>> so if you start looking through your binlogs, you'll see many references to 
>> bacula.batch, but they are not all referring to the same table.  Each thread 
>> is able to have it's own bacula.batch table.
>>
>>
>> Stephen
>>
>>
>>> Any input anyone can offer would be greatly appreciated.
>>>
>>> Thanks,
>>>
>>> Joe
>>
>>
>> --
>> Stephen Thompson   Berkeley Seismological Laboratory
>> step...@seismo.berkeley.edu   215 McCone Hall # 4760
>> 404.538.7077 (phone)   University of California, Berkeley
>> 510.643.5811 (fax) Berkeley, CA 94720-4760
>
> Hi Stephen,
>
> Thank you very much for your reply.
>
> I agree that it seems the creation of the batch table is not being captured, 
> for some reason.
>
> As I think it may be useful, here's the line taken from my MySQL 
> 'RunBeforeJob' script when the full backup is taken:
>
>   mysqldump --all-databases --single-transaction --delete-master-logs 
> --flush-logs --master-data --opt -u ${DBUSER} -p${DBPASS}>  
> ${DST}/${HOST}_${DATE}_${TIME}.sql.dmp
>
> Can you spot anything there which could cause the creation of this/these 
> temporary tables to not be included in the bin log? I've spent a while 
> getting this list of options right and I'm not 100% sure I've got the correct 
> combination, but it's possible I've missed something here.
>

Sorry, I don't think I can be much help here.  I'm wrangling with 
mysqldump myself at the moment since I moved from MyISAM tables to 
InnoDB and the documentation is very poor.

Are you using InnoDB?  If not, I'm not sure why --single-transaction 
is there, and if so, I wonder if it shouldn't come after --opt.  The 
option order matters, and since --opt is the default, having it at the 
end of your line is only resetting anything you change earlier in the 
line back to the --opt defaults.
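
On the earlier point about the batch table: one quick sanity check is to
decode the binlogs you are replaying and look for the batch-table statements,
to see whether the CREATE ever made it in.  Roughly something like this
(just a sketch -- the binlog path and filenames below are placeholders for
wherever your server actually writes them):

  # decode the binlogs to SQL text and look for references to the batch table
  mysqlbinlog /var/lib/mysql/mysql-bin.0000* | grep -in 'bacula.batch'

If the first hit is an INSERT rather than a CREATE TEMPORARY TABLE, the
dump/binlog boundary has cut across a running job, which matches what I saw
with my out-of-sync slave.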

Re: [Bacula-users] parallelizing jobs

2012-04-05 Thread John Drescher
On Thu, Apr 5, 2012 at 6:24 PM, Wolfgang Denk  wrote:
> Dear John Drescher,
>
> In message  you wrote:
>>
>> >> Do you have any restrictions on how many jobs go per volume?
>> >
>> > No.
>>
>> Is the same volume used by both clients? I mean you are not using a
>> different pool per client or something like that?
>
> All these jobs use the same pool, so both jobs and thus both clients
> will use the same volume.  There are other jobs running in parallel,
> too (but not as many as to run into the max job limit).
>

I have not seen this, however I do not look that closely.  I know my
jobs do spool concurrently, at least once the jobs have started.  I use
a 5GB spool.
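
For reference, the spooling side of that is only a couple of directives.
Roughly (a sketch only -- the device name, job name and paths are made-up
examples, not my actual config):

  # bacula-sd.conf: on the Device that writes the volumes
  Device {
    Name = Drive-1
    # ... existing Archive Device, Media Type, etc. ...
    Spool Directory = /var/spool/bacula
    Maximum Spool Size = 5gb
  }

  # bacula-dir.conf: per Job, or in JobDefs
  Job {
    Name = "backup-client1"
    # ... existing directives ...
    Spool Data = yes
  }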

John



Re: [Bacula-users] Bacula MySQL Catalog binlog restore

2012-04-05 Thread Phil Stracchino
On 04/05/2012 06:46 PM, Stephen Thompson wrote:
> On 04/05/2012 03:19 PM, Joe Nyland wrote:
>> As I think it may be useful, here's the line taken from my MySQL
>> 'RunBeforeJob' script when the full backup is taken:
>> 
>> mysqldump --all-databases --single-transaction --delete-master-logs
>> --flush-logs --master-data --opt -u ${DBUSER} -p${DBPASS}>
>> ${DST}/${HOST}_${DATE}_${TIME}.sql.dmp
>> 
>> Can you spot anything there which could cause the creation of
>> this/these temporary tables to not be included in the bin log? I've
>> spent a while getting this list of options right and I'm not 100%
>> sure I've got the correct combination, but it's possible I've
>> missed something here.
>> 
> 
> Sorry, I don't think I can be much help here.  I'm wrangling with 
> mysqldump myself at the moment since I moved from MyISAM tables to 
> InnoDB and the documentation is very poor.
> 
> Are you using InnoDB?  If not, I'm not sure why
> --single-transaction is there, and if so, I wonder if it shouldn't
> come after --opt.  The option order matters and since --opt is the
> default, having it at the end of your line is only resetting anything
> you change earlier in the line back to the --opt defaults.

Since --opt is the default, there's no reason to ever explicitly specify
it at all in the first place.

And as we just discussed the other day, --single-transaction is
ineffective without either --skip-lock-tables, or --skip-opt and adding
back in the stuff from --opt that you want.
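
Something along these lines is probably closer to the intent (a sketch
only -- I have not tested this exact combination, so verify it against a
test restore before relying on it):

  mysqldump --all-databases --single-transaction --skip-lock-tables \
            --flush-logs --master-data --delete-master-logs \
            -u ${DBUSER} -p${DBPASS} > ${DST}/${HOST}_${DATE}_${TIME}.sql.dmp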


-- 
  Phil Stracchino, CDK#2 DoD#299792458 ICBM: 43.5607, -71.355
  ala...@caerllewys.net   ala...@metrocast.net   p...@co.ordinate.org
  Renaissance Man, Unix ronin, Perl hacker, SQL wrangler, Free Stater
 It's not the years, it's the mileage.



[Bacula-users] running script before job fails due to permission issue

2012-04-05 Thread Murray Davis
When logged on as root, I can run the following script to back up my MySQL
databases to a local folder:

#!/bin/bash

BACKUPLOCATION=/var/local/mysqlbackups
LOGFILE=/usr/local/sbin/backupdbs.log
GZIP="$(which gzip)"
NOW=$(date +"%d-%m-%Y")
RETENTION=30

# remove .gz files greater than 30 days old
find /var/local/mysqlbackups -mtime +$RETENTION -exec rm -fr {} \; &> /dev/null

# back up all the mysql databases into individual files so we can later
# restore them separately if needed.
mysql --defaults-extra-file=/root/.my.cnf -B -N -e "show databases" | while read db
do
   BACKUPFILE=$BACKUPLOCATION/mysql-${db}.${NOW}-$(date +"%T").gz
   echo "Backing up $db into $BACKUPFILE"
   /usr/bin/mysqldump --defaults-extra-file=/root/.my.cnf --single-transaction $db | $GZIP -9 > $BACKUPFILE
done >> $LOGFILE

However, when I run the backup from bacula, I get the following error
message...

*The backup job is now running. When complete, the results will be shown
below ..*

05-Apr 16:18 cablemon-dir JobId 27: shell command: run BeforeJob
"/usr/local/sbin/backupdbs"
05-Apr 16:18 cablemon-dir JobId 27: BeforeJob: Could not open required
defaults file: /root/.my.cnf
05-Apr 16:18 cablemon-dir JobId 27: BeforeJob: Fatal error in defaults
handling. Program aborted
05-Apr 16:18 cablemon-dir JobId 27: BeforeJob: ERROR 1045 (28000):
Access denied for user 'bacula'@'localhost' (using password: NO)
05-Apr 16:18 cablemon-dir JobId 27: Start Backup JobId 27,
Job=BackupClient1.2012-04-05_16.18.54_33
05-Apr 16:18 cablemon-dir JobId 27: Using Device "FileStorage"
05-Apr 16:18 cablemon-sd JobId 27: Volume "Inc-0002" previously
written, moving to end of data.
05-Apr 16:18 cablemon-sd JobId 27: Ready to append to end of Volume
"Inc-0002" size=67985900
05-Apr 16:18 cablemon-fd JobId 27:  Could not stat
"/usr/sbin/local": ERR=No such file or directory
05-Apr 16:18 cablemon-sd JobId 27: Job write elapsed time = 00:00:01,
Transfer rate = 0  Bytes/second
05-Apr 16:18 cablemon-dir JobId 27: Bacula cablemon-dir 5.0.1
(24Feb10): 05-Apr-2012 16:18:57


The backup finishes ok, just not the script component. I first gave
read access to /root/.my.cnf for the account "bacula", but I still got
the error. I even set the permissions as 777 for .my.cnf and still I
got the above error. I am using the .my.cnf file to hide the mysql
username and password.


Re: [Bacula-users] running script before job fails due to permission issue

2012-04-05 Thread Phil Stracchino
On 04/05/2012 07:27 PM, Murray Davis wrote:
> When logged on as root, I can run the following script to back up my
> MySQL databases to a local folder:

[...]

> 05-Apr 16:18 cablemon-dir JobId 27: shell command: run BeforeJob 
> "/usr/local/sbin/backupdbs"
> 05-Apr 16:18 cablemon-dir JobId 27: BeforeJob: Could not open required 
> defaults file: /root/.my.cnf

What user does bacula run as?
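
A couple of quick things to check (just a sketch, assuming a Linux box
with sudo available and a Director running as a 'bacula' user):

  # which user the director actually runs as
  ps -o user= -C bacula-dir

  # can that user read the file? note that the parent directory's
  # permissions matter too -- if /root is not traversable by the bacula
  # user, even a 777 .my.cnf inside it stays unreachable
  sudo -u bacula cat /root/.my.cnf
  ls -ld /root /root/.my.cnf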


-- 
  Phil Stracchino, CDK#2 DoD#299792458 ICBM: 43.5607, -71.355
  ala...@caerllewys.net   ala...@metrocast.net   p...@co.ordinate.org
  Renaissance Man, Unix ronin, Perl hacker, SQL wrangler, Free Stater
 It's not the years, it's the mileage.

--
For Developers, A Lot Can Happen In A Second.
Boundary is the first to Know...and Tell You.
Monitor Your Applications in Ultra-Fine Resolution. Try it FREE!
http://p.sf.net/sfu/Boundary-d2dvs2
___
Bacula-users mailing list
Bacula-users@lists.sourceforge.net
https://lists.sourceforge.net/lists/listinfo/bacula-users


[Bacula-users] Schedule : First day of month

2012-04-05 Thread Nicolas
 

Hi all, 

I'm trying to work out how to make a schedule for "the first day of
January", "the first day of Feb-Jun", "the first day of Jul", and "the
first day of Aug-Dec".

The syntax I used doesn't work:

"jan 1st at 00:00am"

Do you know what the right one is?

Thanks 
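
PS: to make the intent concrete, here is the kind of Schedule I am after.
The Run lines are only my best guess at the date-spec (bare day number and
month ranges), not something I have working -- that is exactly what I am
asking about:

  Schedule {
    Name = "FirstDayOfMonth"
    Run = jan 1 at 00:00
    Run = feb-jun 1 at 00:00
    Run = jul 1 at 00:00
    Run = aug-dec 1 at 00:00
  }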
--


Nicolas
http://www.shivaserv.fr