OK, I finally have it up and running, although I still have a few loose ends.

Running the script directly from the backup job is pretty much a chicken-and-egg 
situation: the backup job waits for the script to finish, and the script (which 
calls mysqldump) waits for something to read the fifo. I tried the "set -m" 
option, but it makes no difference; I think set -m is on by default anyway.
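
For anyone running into the same thing: the blocking itself is just normal fifo 
behaviour, since opening a fifo for writing does not return until something opens 
it for reading. A quick demo of the effect, with a made-up path:

    # Opening a fifo for writing blocks until a reader shows up, which is
    # why a foreground "mysqldump > $fifo" cannot finish before the backup
    # job starts reading the other end.
    mkfifo /tmp/demo.fifo
    echo hello > /tmp/demo.fifo    # hangs here until a reader appears
    # (in a second shell, "cat /tmp/demo.fifo" releases it)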

Kern wrote:
> Another possible problem is that perhaps Bacula waits for the script to 
> finish and in doing so also waits for all children of the script to 
> terminate.  In this case, using an Admin job that starts before the 
> backup would resolve the problem.

So I tried the "Admin" job as Kern suggested. It works, although I still have to 
make the script more bulletproof. What I noticed is that the Admin job does not 
support the "Client Run Before Job" directive, only "Run Before Job"; if I use 
"Client Run Before Job" the script does not run at all. This means I cannot have 
a setup where the database runs on the client. Is this by design?
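
For reference, the script behind the admin job amounts to roughly the "before" 
branch of my original script, with the dump backgrounded and the redirections 
suggested further down in this thread; a sketch (the error-log path is just an 
example):

    #!/bin/sh
    # Sketch of the "Run Before Job" script of the "MyDatabaseDump" admin job.
    database=mydb
    user=root
    password="***"
    fifo=/backup/fifo/MyDatabase
    rm -f $fifo
    mkfifo $fifo
    # stdout goes to the fifo only; stderr goes to a log file (example path)
    # rather than the fifo, stdin is closed, and the dump is backgrounded so
    # this script can return right away.
    mysqldump --user=$user --password=$password $database \
        > $fifo 2> /var/log/mysqldump-$database.err < /dev/null &
    echo "mysqldump started (pid $!), writing to $fifo"
    exit 0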

My first approach was to start the admin job from the actual backup job, by 
having the backup job's script invoke bconsole as follows:

    echo Initializing database backup for $database
    rm -f $fifo
    # Ask the Director to start the admin job that creates the fifo
    /usr/sbin/bconsole -c /etc/bacula/bconsole.conf <<END_OF_DATA
@output /dev/null
run job="MyDatabaseDump" yes
quit
END_OF_DATA
    # Wait up to 60 seconds for the admin job to create the fifo
    i=0
    while [ ! -p $fifo ]; do
      i=$((i+1))
      echo waiting for fifo $i
      sleep 1
      if [ $i -ge 60 ]; then
        echo "ERROR: fifo $fifo was not created"
        exit 1
      fi
    done


In the while loop the script waits for the fifo to be created by the admin job; 
if the fifo does not appear within 60 seconds, the script exits with an error. 
This works when I invoke the job manually and no other jobs are running. In my 
setup, however, I have a number of jobs scheduled that run based on priority: if 
a job with lower priority is scheduled after the backup job, the admin job is 
placed at the end of the queue and this approach fails.

So I have to schedule both jobs at exactly the same time without one invoking 
the other. I will do some more testing this week to see how this setup behaves 
in the regular backup schedule.
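
For a quick manual test of that combination, both jobs can also be kicked off 
back to back from bconsole; "MyBackup" below is just a placeholder for the real 
backup job name:

    # Start the admin job and the backup job together from bconsole
    # ("MyBackup" is a placeholder, not the actual job name).
    printf 'run job="MyDatabaseDump" yes\nrun job="MyBackup" yes\nquit\n' \
        | /usr/sbin/bconsole -c /etc/bacula/bconsole.conf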

Another option would be to add a parameter to the Job resource, something like 
"Wait For Before Script To Finish = Yes / No". That would allow a cleaner 
configuration, since I could call the dump script directly from the backup job 
without the admin job (my first approach). Would such a change be an option?

regards
Frank


-----Original Message-----
From: [EMAIL PROTECTED] [mailto:[EMAIL PROTECTED] On Behalf Of Kern Sibbald
Sent: Saturday, October 29, 2005 12:04 AM
To: bacula-users@lists.sourceforge.net
Cc: Martin Simmons; [EMAIL PROTECTED]
Subject: Re: [Bacula-users] mysqldump using fifo

On Friday 28 October 2005 18:38, Martin Simmons wrote:
> >>>>> On Fri, 28 Oct 2005 11:13:30 +0200, Kern Sibbald <[EMAIL PROTECTED]> said:
>
>   Kern> On Friday 28 October 2005 10:59, Martin Simmons wrote:
>   >> >>>>> On Fri, 28 Oct 2005 08:28:31 +0700, "frank" <[EMAIL PROTECTED]> said:
>
>   frank> I cannot get MySQL fifo backups to work. If I run the
>   frank> backup job it hangs forever waiting for the input. If I
>   frank> run the job manually from the shell it works without
>   frank> problems. What is going on?
>
>   frank> ...
>
>   frank> #!/bin/sh
>
>   frank> database=mydb
>   frank> user=root
>   frank> password="***"
>   frank> fifo=/backup/fifo/MyDatabase
>
>   frank> case "$1" in
>   frank> before)
>   frank> echo Before backup job processing...
>   frank> rm -f $fifo 2>&1 < /dev/null
>   frank> mkfifo $fifo 2>&1 < /dev/null
>   frank> echo Dumping $database to fifo: $fifo
>   frank> mysqldump --user=$user --password=$password $database & > $fifo 2>&1 < /dev/null
>   frank> echo Done.
>
>   frank> ;;
>   frank> after)
>   frank> echo After backup job processing...
>   frank> rm -f $fifo 2>&1 < /dev/null
>   frank> echo Done.
>   frank> ;;
>   frank> esac
>
>   >> Is the mysqldump process still there when the job is hanging?
>   >>
>   >> What happens if you remove 2>&1?  I don't see the point in redirecting
>   >> stderr to the fifo anyway, but also it will hide any error messages on
>   >> the tape :-)
>
>   Kern> In addition, I don't see any need to redirect /dev/null to the
>   Kern> input for rm, mkfifo, and mysqldump.
>
> It might be worthwhile for mysqldump, because that is backgrounded so 
> doesn't want to be connected to the stdin of the script?

Yes, this is a good point.  In fact, it might even be worthwhile to try:

mysqldump --user=$user --password=$password $database >$fifo 2>/dev/null </dev/null &

Another possible problem is that perhaps Bacula waits for the script to finish 
and in doing so also waits for all children of the script to terminate.  In 
this case, using an Admin job that starts before the backup would resolve 
the problem.


>
> __Martin
>
>

-- 
Best regards,

Kern

  (">
  /\
  V_V



