Kenny,

Again, it looks like you haven't changed the Where variable to a local server
directory and are still using /nas/users/admin/.
Could you please try another directory, e.g. /tmp?
NAS can be tricky, and it uses a whole different ACL system. Trying to restore
files directly onto a mount point can lead to exactly this error.
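Before re-running a 20-minute restore, it may be worth probing the target directory directly. A minimal sketch (the probe filename is made up; TARGET defaults to /tmp, so substitute your intended Where path, and run it as the same user the File daemon runs as):

```shell
# Probe whether a restore "Where" path is usable by the daemon's user.
# TARGET is an assumption here -- point it at the directory you plan
# to restore into before trusting the result.
TARGET="${TARGET:-/tmp}"

probe="$TARGET/.bacula_write_probe.$$"   # hypothetical scratch file name
if touch "$probe" 2>/dev/null; then
    rm -f "$probe"
    echo "OK: $TARGET accepts new files"
else
    echo "FAIL: cannot create files in $TARGET (check the mount and its ACLs)" >&2
fi
```

If /tmp passes and the NAS path fails, the problem is the mount's permissions or ACLs rather than Bacula itself.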

Regards and good luck,


On Mon, Sep 8, 2014 at 4:39 PM, Kenny Noe <knoe...@gmail.com> wrote:

> I've tried all day to restore this file.   Ran 12 restores today.   All
> have the same result.   I get an empty file.
>
> This worked once, 3 months ago, when I completed a restore of this file to
> the original server, but now that server is gone, I can't restore the file!
>
> Sorry to vent my frustration but I've no one else to bounce ideas off.
>
> --Kenny
>
> Error:
> 08-Sep 15:03 BS01-DIR1 JobId 12902: Start Restore Job
> Restore_mail_bluewhale.2014-09-08_15.03.32_04
> 08-Sep 15:03 BS01-DIR1 JobId 12902: Using Device "File_bluewhale"
> 08-Sep 15:03 BS01-SD1 JobId 12902: Ready to read from volume "mail-0386"
> on device "File_bluewhale" (/nas/bacula/bluewhale).
> 08-Sep 15:03 BS01-SD1 JobId 12902: Forward spacing Volume "mail-0386" to
> file:block 0:219.
> 08-Sep 15:25 BS01-SD1 JobId 12902: End of Volume at file 28 on device
> "File_bluewhale" (/nas/bacula/bluewhale), Volume "mail-0386"
> 08-Sep 15:25 BS01-SD1 JobId 12902: End of all volumes.
> 08-Sep 15:04 BS01-FD1 JobId 12902: Error: create_file.c:292 Could not open
> /nas/users/admin/backups/data/backups/mail/fifo/mail.tar: ERR=Interrupted
> system call
> 08-Sep 15:25 BS01-DIR1 JobId 12902: Bacula BS01-DIR1 5.2.2 (26Nov11):
>   Build OS:               x86_64-unknown-linux-gnu ubuntu 11.10
>   JobId:                  12902
>   Job:                    Restore_mail_bluewhale.2014-09-08_15.03.32_04
>   Restore Client:         besc-bs01
>   Start time:             08-Sep-2014 15:03:34
>   End time:               08-Sep-2014 15:25:46
>   Files Expected:         1
>   Files Restored:         0
>   Bytes Restored:         0
>   Rate:                   0.0 KB/s
>   FD Errors:              0
>   FD termination status:  OK
>   SD termination status:  OK
>   Termination:            Restore OK -- warning file count mismatch
>
> 08-Sep 15:25 BS01-DIR1 JobId 12902: Begin pruning Jobs older than 15 days .
> 08-Sep 15:25 BS01-DIR1 JobId 12902: No Jobs found to prune.
> 08-Sep 15:25 BS01-DIR1 JobId 12902: Begin pruning Files.
> 08-Sep 15:25 BS01-DIR1 JobId 12902: No Files found to prune.
> 08-Sep 15:25 BS01-DIR1 JobId 12902: End auto prune.
>
>
>
> On Mon, Sep 8, 2014 at 3:28 PM, Kenny Noe <knoe...@gmail.com> wrote:
>
>> Heitor,
>>
>> /nas/bacula/bluewhale is a path on besc-bs01.  This is where the Bacula
>> backups are put.  mail-0386 is the Bacula volume I'm trying to restore.
>>  I'll try a local location to restore.
>>
>> Thanks    --Kenny
>>
>>
>> On Mon, Sep 8, 2014 at 3:24 PM, Heitor Faria <hei...@bacula.com.br>
>> wrote:
>>
>>> Kenny,
>>>
>>> Is this path (/nas/bacula/bluewhale) a NAS mount point on the besc-bs01
>>> client? If yes, this might be the cause of the writing error.
>>> Did you try restoring to a local client directory, e.g. /tmp?
>>>
>>>
>>> On Mon, Sep 8, 2014 at 4:12 PM, Kenny Noe <knoe...@gmail.com> wrote:
>>>
>>>> Sorry, I failed to include the list...
>>>>
>>>> ---Kenny
>>>>
>>>> ---------- Forwarded message ----------
>>>> From: Kenny Noe <knoe...@gmail.com>
>>>> Date: Mon, Sep 8, 2014 at 1:25 PM
>>>> Subject: Re: [Bacula-users] Restore from dead client
>>>> To: Dan Langille <d...@langille.org>
>>>>
>>>>
>>>> Gents....  Following instructions.
>>>>
>>>> Appreciate your time...
>>>>
>>>> --Kenny
>>>>
>>>> *************************************
>>>> Complete screen capture of restore....
>>>>
>>>> *restore
>>>>
>>>> First you select one or more JobIds that contain files
>>>> to be restored. You will be presented several methods
>>>> of specifying the JobIds. Then you will be allowed to
>>>> select which files from those JobIds are to be restored.
>>>>
>>>> To select the JobIds, you have the following choices:
>>>>      1: List last 20 Jobs run
>>>>      2: List Jobs where a given File is saved
>>>>      3: Enter list of comma separated JobIds to select
>>>>      4: Enter SQL list command
>>>>      5: Select the most recent backup for a client
>>>>      6: Select backup for a client before a specified time
>>>>      7: Enter a list of files to restore
>>>>      8: Enter a list of files to restore before a specified time
>>>>      9: Find the JobIds of the most recent backup for a client
>>>>     10: Find the JobIds for a backup for a client before a specified
>>>> time
>>>>     11: Enter a list of directories to restore for found JobIds
>>>>     12: Select full restore to a specified Job date
>>>>     13: Cancel
>>>> Select item:  (1-13): 3
>>>> Enter JobId(s), comma separated, to restore: 12510
>>>> You have selected the following JobId: 12510
>>>>
>>>> Building directory tree for JobId(s) 12510 ...
>>>> 1 files inserted into the tree.
>>>>
>>>> You are now entering file selection mode where you add (mark) and
>>>> remove (unmark) files to be restored. No files are initially added,
>>>> unless
>>>> you used the "all" keyword on the command line.
>>>> Enter "done" to leave this mode.
>>>>
>>>> cwd is: /
>>>> $ mark *
>>>> 1 file marked.
>>>> $ done
>>>> Bootstrap records written to /etc/bacula/BS01-DIR1.restore.4.bsr
>>>>
>>>> The job will require the following
>>>>    Volume(s)                 Storage(s)                SD Device(s)
>>>>
>>>> ===========================================================================
>>>>
>>>>     mail-0386                 Storage_bluewhale         File_bluewhale
>>>>
>>>> Volumes marked with "*" are online.
>>>>
>>>>
>>>> 1 file selected to be restored.
>>>>
>>>> The defined Restore Job resources are:
>>>>      1: Restore_os_asterisk
>>>>      2: Restore_os_besc-4dvapp
>>>>      3: Restore_db_besc-bs01
>>>>      4: Restore_os_besc-bs01
>>>>      5: Restore_os_besc-unixmgr01
>>>>      6: Restore_mail_bluewhale
>>>>      7: Restore_os_demo
>>>>      8: Restore_app_demo
>>>>      9: Restore_os_dev
>>>>     10: Restore_app_dev
>>>>     11: Restore_os_mako
>>>>     12: Restore_app_mako
>>>> Select Restore Job (1-12): 6
>>>> Defined Clients:
>>>>      1: asterisk
>>>>      2: besc-4dvapp
>>>>      3: besc-bs01
>>>>      4: besc-unixmgr01
>>>>      5: bluewhale
>>>>      6: demo
>>>>      7: dev
>>>>      8: mako
>>>>      9: qa
>>>>     10: qa2
>>>>     11: smart
>>>> Select the Client (1-11): 5
>>>> Run Restore job
>>>> JobName:         Restore_mail_bluewhale
>>>> Bootstrap:       /etc/bacula/BS01-DIR1.restore.4.bsr
>>>> Where:           *None*
>>>> Replace:         always
>>>> FileSet:         Full_mail_bluewhale
>>>> Backup Client:   bluewhale
>>>> Restore Client:  bluewhale
>>>> Storage:         Storage_bluewhale
>>>> When:            2014-09-08 13:22:56
>>>> Catalog:         BS01-Catalog
>>>> Priority:        10
>>>> Plugin Options:  *None*
>>>> OK to run? (yes/mod/no): mod
>>>> Parameters to modify:
>>>>      1: Level
>>>>      2: Storage
>>>>      3: Job
>>>>      4: FileSet
>>>>      5: Restore Client
>>>>      6: When
>>>>      7: Priority
>>>>      8: Bootstrap
>>>>      9: Where
>>>>     10: File Relocation
>>>>     11: Replace
>>>>     12: JobId
>>>>     13: Plugin Options
>>>> Select parameter to modify (1-13): 5
>>>> The defined Client resources are:
>>>>      1: asterisk
>>>>      2: besc-4dvapp
>>>>      3: besc-bs01
>>>>      4: besc-unixmgr01
>>>>      5: bluewhale
>>>>      6: demo
>>>>      7: dev
>>>>      8: mako
>>>> Select Client (File daemon) resource (1-8): 3
>>>> Run Restore job
>>>> JobName:         Restore_mail_bluewhale
>>>> Bootstrap:       /etc/bacula/BS01-DIR1.restore.4.bsr
>>>> Where:           *None*
>>>> Replace:         always
>>>> FileSet:         Full_mail_bluewhale
>>>> Backup Client:   bluewhale
>>>> Restore Client:  besc-bs01
>>>> Storage:         Storage_bluewhale
>>>> When:            2014-09-08 13:22:56
>>>> Catalog:         BS01-Catalog
>>>> Priority:        10
>>>> Plugin Options:  *None*
>>>> OK to run? (yes/mod/no): mod
>>>> Parameters to modify:
>>>>      1: Level
>>>>      2: Storage
>>>>      3: Job
>>>>      4: FileSet
>>>>      5: Restore Client
>>>>      6: When
>>>>      7: Priority
>>>>      8: Bootstrap
>>>>      9: Where
>>>>     10: File Relocation
>>>>     11: Replace
>>>>     12: JobId
>>>>     13: Plugin Options
>>>> Select parameter to modify (1-13): 9
>>>> Please enter path prefix for restore (/ for none):
>>>> /nas/users/admin/backups
>>>> Run Restore job
>>>> JobName:         Restore_mail_bluewhale
>>>> Bootstrap:       /etc/bacula/BS01-DIR1.restore.4.bsr
>>>> Where:           /nas/users/admin/backups
>>>> Replace:         always
>>>> FileSet:         Full_mail_bluewhale
>>>> Backup Client:   bluewhale
>>>> Restore Client:  besc-bs01
>>>> Storage:         Storage_bluewhale
>>>> When:            2014-09-08 13:22:56
>>>> Catalog:         BS01-Catalog
>>>> Priority:        10
>>>> Plugin Options:  *None*
>>>> OK to run? (yes/mod/no): yes
>>>> Job queued. JobId=12899
>>>> *
>>>>
>>>>
>>>>
>>>>
>>>>
>>>> On Mon, Sep 8, 2014 at 1:16 PM, Dan Langille <d...@langille.org> wrote:
>>>>
>>>>> Kenny:
>>>>>
>>>>> Please run the restore job again and then copy/paste the entire
>>>>> session from bconsole.  Thank you.
>>>>>
>>>>>
>>>>> On Sep 8, 2014, at 1:15 PM, Heitor Faria <hei...@bacula.com.br> wrote:
>>>>>
>>>>> Kenny,
>>>>>
>>>>> Ok. Sorry for the confusion about the bluewhale client. This is why we
>>>>> need your full restore command: to see exactly what's happening.
>>>>> You should not change your Backup Client for the restore, even if that
>>>>> client is dead. You should change your Restore Client instead. It would
>>>>> also help to know the client's version.
>>>>>
>>>>> Regards,
>>>>>
>>>>> On Mon, Sep 8, 2014 at 2:11 PM, Kenny Noe <knoe...@gmail.com> wrote:
>>>>>
>>>>>> Heitor,
>>>>>>
>>>>>> Pardon my ignorance, but I don't follow your questions...
>>>>>>
>>>>>> My original client "bluewhale" experienced a terrible HD failure, thus
>>>>>> the need to recover.   I have been changing the "client" and "where"
>>>>>> parameters when walking through the restore procedure.  When the restore
>>>>>> process runs, it creates the requested file "mail.tar", but it is zero
>>>>>> bytes.
>>>>>>
>>>>>> If you give me a bit more to go on, I'll gladly share input / output
>>>>>> for the request.
>>>>>>
>>>>>> Thanks   --Kenny
>>>>>>
>>>>>>
>>>>>>
>>>>>>
>>>>>> On Mon, Sep 8, 2014 at 12:57 PM, Heitor Faria <hei...@bacula.com.br>
>>>>>> wrote:
>>>>>>
>>>>>>> Kenny,
>>>>>>>
>>>>>>> 1. Could you reproduce the input you used at the restore submission?
>>>>>>> 2. Can you tell the client bluewhale version?
>>>>>>>
>>>>>>> Regards,
>>>>>>>
>>>>>>> On Mon, Sep 8, 2014 at 1:48 PM, Kenny Noe <knoe...@gmail.com> wrote:
>>>>>>>
>>>>>>>> Heitor,
>>>>>>>>
>>>>>>>> Hi!  Thanks for the reply....   I'm using the Bacula console,
>>>>>>>> executing a "restore" command and then walking through the prompts
>>>>>>>> given.  What is this "run" command?  Yes, I believe I'm trying to
>>>>>>>> restore the same file, but it's not working.
>>>>>>>>
>>>>>>>> Thoughts?
>>>>>>>>
>>>>>>>> Thanks    --Kenny
>>>>>>>>
>>>>>>>>
>>>>>>>> On Mon, Sep 8, 2014 at 12:41 PM, Heitor Faria <hei...@bacula.com.br
>>>>>>>> > wrote:
>>>>>>>>
>>>>>>>>> Kenny,
>>>>>>>>>
>>>>>>>>> First of all, are you using the "run" command to submit a previously
>>>>>>>>> configured Restore Job? I think this is not advisable, since there are
>>>>>>>>> several restore variables that only the "restore" command can fetch.
>>>>>>>>> Did you try to restore the same file with the restore command?
>>>>>>>>>
>>>>>>>>> Regards,
>>>>>>>>>
>>>>>>>>>
>>>>>>>>> On Mon, Sep 8, 2014 at 1:28 PM, Kenny Noe <knoe...@gmail.com>
>>>>>>>>> wrote:
>>>>>>>>>
>>>>>>>>>> Dan,
>>>>>>>>>>
>>>>>>>>>> Thanks for the reply.   I tried this this morning and the restore
>>>>>>>>>> still failed.   I see during Status Storage - Storage_bluewhale
>>>>>>>>>> that the Running Jobs section shows Files=0, Bytes=0 Bytes/sec=0.
>>>>>>>>>> However, in the Device Status section, Device "File_bluewhale" is
>>>>>>>>>> mounted and the Total Bytes Read and Blocks Read go up....   Now,
>>>>>>>>>> with the simplified config, it seems to have lost its Pool.
>>>>>>>>>>
>>>>>>>>>> I'm confused...  I changed the Where to restore to
>>>>>>>>>> /nas/users/admin/backups and edited the client config to
>>>>>>>>>> remove the fifo headache...  But it is still trying to use it...
>>>>>>>>>>
>>>>>>>>>> Here is the error from the log :
>>>>>>>>>> 08-Sep 11:51 BS01-DIR1 JobId 12897: Start Restore Job
>>>>>>>>>> Restore_mail_bluewhale.2014-09-08_11.51.55_03
>>>>>>>>>> 08-Sep 11:51 BS01-DIR1 JobId 12897: Using Device "File_bluewhale"
>>>>>>>>>> 08-Sep 11:51 BS01-SD1 JobId 12897: Ready to read from volume
>>>>>>>>>> "mail-0386" on device "File_bluewhale" (/nas/bacula/bluewhale).
>>>>>>>>>> 08-Sep 11:51 BS01-SD1 JobId 12897: Forward spacing Volume
>>>>>>>>>> "mail-0386" to file:block 0:219.
>>>>>>>>>> 08-Sep 12:14 BS01-SD1 JobId 12897: End of Volume at file 28 on
>>>>>>>>>> device "File_bluewhale" (/nas/bacula/bluewhale), Volume "mail-0386"
>>>>>>>>>> 08-Sep 12:14 BS01-SD1 JobId 12897: End of all volumes.
>>>>>>>>>> 08-Sep 11:52 BS01-FD1 JobId 12897: Error: create_file.c:292 Could
>>>>>>>>>> not open /nas/users/admin/backups/data/backups/mail/fifo/mail.tar:
>>>>>>>>>> ERR=Interrupted system call
>>>>>>>>>> 08-Sep 12:14 BS01-DIR1 JobId 12897: Bacula BS01-DIR1 5.2.2
>>>>>>>>>> (26Nov11):
>>>>>>>>>>   Build OS:               x86_64-unknown-linux-gnu ubuntu 11.10
>>>>>>>>>>   JobId:                  12897
>>>>>>>>>>   Job:
>>>>>>>>>>  Restore_mail_bluewhale.2014-09-08_11.51.55_03
>>>>>>>>>>   Restore Client:         besc-bs01
>>>>>>>>>>   Start time:             08-Sep-2014 11:51:57
>>>>>>>>>>   End time:               08-Sep-2014 12:14:09
>>>>>>>>>>   Files Expected:         1
>>>>>>>>>>   Files Restored:         0
>>>>>>>>>>   Bytes Restored:         0
>>>>>>>>>>   Rate:                   0.0 KB/s
>>>>>>>>>>   FD Errors:              0
>>>>>>>>>>   FD termination status:  OK
>>>>>>>>>>   SD termination status:  OK
>>>>>>>>>>   Termination:            Restore OK -- warning file count
>>>>>>>>>> mismatch
>>>>>>>>>>
>>>>>>>>>> 08-Sep 12:14 BS01-DIR1 JobId 12897: Begin pruning Jobs older than
>>>>>>>>>> 15 days .
>>>>>>>>>> 08-Sep 12:14 BS01-DIR1 JobId 12897: No Jobs found to prune.
>>>>>>>>>> 08-Sep 12:14 BS01-DIR1 JobId 12897: Begin pruning Files.
>>>>>>>>>> 08-Sep 12:14 BS01-DIR1 JobId 12897: No Files found to prune.
>>>>>>>>>> 08-Sep 12:14 BS01-DIR1 JobId 12897: End auto prune.
>>>>>>>>>>
>>>>>>>>>>
>>>>>>>>>> What is a file count mismatch?
>>>>>>>>>>
>>>>>>>>>> Here is the status during a restore :
>>>>>>>>>> Connecting to Storage daemon Storage_bluewhale at
>>>>>>>>>> 10.10.10.199:9103
>>>>>>>>>>
>>>>>>>>>> BS01-SD1 Version: 5.2.2 (26 November 2011)
>>>>>>>>>> x86_64-unknown-linux-gnu ubuntu 11.10
>>>>>>>>>> Daemon started 08-Sep-14 11:48. Jobs: run=0, running=0.
>>>>>>>>>>  Heap: heap=598,016 smbytes=386,922 max_bytes=405,712 bufs=947
>>>>>>>>>> max_bufs=949
>>>>>>>>>>  Sizes: boffset_t=8 size_t=8 int32_t=4 int64_t=8 mode=0,0
>>>>>>>>>>
>>>>>>>>>> Running Jobs:
>>>>>>>>>> Reading: Full Restore job Restore_mail_bluewhale JobId=12897
>>>>>>>>>> Volume="mail-0386"
>>>>>>>>>>     pool="Pool_mail_bluewhale" device="File_bluewhale"
>>>>>>>>>> (/nas/bacula/bluewhale)
>>>>>>>>>>     Files=0 Bytes=0 Bytes/sec=0
>>>>>>>>>>     FDReadSeqNo=6 in_msg=6 out_msg=84529 fd=6
>>>>>>>>>> ====
>>>>>>>>>>
>>>>>>>>>> Jobs waiting to reserve a drive:
>>>>>>>>>> ====
>>>>>>>>>>
>>>>>>>>>> Terminated Jobs:
>>>>>>>>>>  JobId  Level    Files      Bytes   Status   Finished        Name
>>>>>>>>>>
>>>>>>>>>> ===================================================================
>>>>>>>>>>  12889  Incr         31    67.94 M  OK       08-Sep-14 00:01
>>>>>>>>>> Backup_os_besc-unixmgr01
>>>>>>>>>>  12891  Full          4    501.0 M  OK       08-Sep-14 00:05
>>>>>>>>>> Backup_app_dev
>>>>>>>>>>  12888  Incr        437    1.158 G  OK       08-Sep-14 00:06
>>>>>>>>>> Backup_os_besc-bs01
>>>>>>>>>>  12890  Incr          0         0   Other    08-Sep-14 00:30
>>>>>>>>>> Backup_os_bluewhale
>>>>>>>>>>  12893  Full          0         0   Other    08-Sep-14 01:30
>>>>>>>>>> Backup_mail_bluewhale
>>>>>>>>>>  12884  Full   2,361,101    154.6 G  OK       08-Sep-14 04:46
>>>>>>>>>> Backup_os_mako
>>>>>>>>>>  12892  Full          4    54.40 G  OK       08-Sep-14 05:56
>>>>>>>>>> Backup_app_mako
>>>>>>>>>>  12894                0         0   OK       08-Sep-14 08:53
>>>>>>>>>> Restore_mail_bluewhale
>>>>>>>>>>  12895                0         0   OK       08-Sep-14 09:28
>>>>>>>>>> Restore_mail_bluewhale
>>>>>>>>>>  12896                0         0   OK       08-Sep-14 10:10
>>>>>>>>>> Restore_mail_bluewhale
>>>>>>>>>> ====
>>>>>>>>>>
>>>>>>>>>> Device status:
>>>>>>>>>> Device "File_asterisk" (/nas/bacula/asterisk) is not open.
>>>>>>>>>> Device "File_besc-4dvapp" (/nas/bacula/besc-4dvapp) is not open.
>>>>>>>>>> Device "File_besc-bs01" (/nas/bacula/besc-bs01) is not open.
>>>>>>>>>> Device "File_besc-unixmgr01" (/nas/bacula/besc-unixmgr01) is not
>>>>>>>>>> open.
>>>>>>>>>> Device "File_bluewhale" (/nas/bacula/bluewhale) is mounted with:
>>>>>>>>>>     Volume:      mail-0386
>>>>>>>>>>     Pool:        *unknown*
>>>>>>>>>>     Media type:  NAS_bluewhale
>>>>>>>>>>     Total Bytes Read=1,121,412,096 Blocks Read=17,383
>>>>>>>>>> Bytes/block=64,512
>>>>>>>>>>     Positioned at File=0 Block=1,121,412,275
>>>>>>>>>> Device "File_demo" (/nas/bacula/demo) is not open.
>>>>>>>>>> Device "File_dev" (/nas/bacula/dev) is not open.
>>>>>>>>>> Device "File_mako" (/nas/bacula/mako) is not open.
>>>>>>>>>> Device "File_qa" (/nas/bacula/qa) is not open.
>>>>>>>>>> Device "File_qa2" (/nas/bacula/qa2) is not open.
>>>>>>>>>> Device "File_smart" (/nas/bacula/smart) is not open.
>>>>>>>>>> ====
>>>>>>>>>>
>>>>>>>>>> Used Volume status:
>>>>>>>>>> mail-0386 on device "File_bluewhale" (/nas/bacula/bluewhale)
>>>>>>>>>>     Reader=1 writers=0 devres=0 volinuse=1
>>>>>>>>>> mail-0386 read volume JobId=12897
>>>>>>>>>> ====
>>>>>>>>>>
>>>>>>>>>> ====
>>>>>>>>>>
>>>>>>>>>>
>>>>>>>>>>
>>>>>>>>>>
>>>>>>>>>> Here is my simplified client config:
>>>>>>>>>>
>>>>>>>>>>
>>>>>>>>>> #********************************************************************************
>>>>>>>>>> # bluewhale
>>>>>>>>>>
>>>>>>>>>> #********************************************************************************
>>>>>>>>>>    Client {
>>>>>>>>>>       Name                   = bluewhale
>>>>>>>>>>       Address                = bluewhale.bnesystems.com
>>>>>>>>>>       Catalog                = BS01-Catalog
>>>>>>>>>>       Password               = "xxxxxxxxx"
>>>>>>>>>>       FileRetention          = 365 days
>>>>>>>>>>       JobRetention           = 365 days
>>>>>>>>>>       AutoPrune              = yes
>>>>>>>>>>       MaximumConcurrentJobs  = 1
>>>>>>>>>>    }
>>>>>>>>>>    Job {
>>>>>>>>>>       Name                   = Restore_mail_bluewhale
>>>>>>>>>>       FileSet                = Full_mail_bluewhale
>>>>>>>>>>       Type                   = Restore
>>>>>>>>>>       Pool                   = Pool_mail_bluewhale
>>>>>>>>>>       Client                 = bluewhale
>>>>>>>>>>       Messages               = Standard
>>>>>>>>>>    }
>>>>>>>>>>    Pool {
>>>>>>>>>>       Name                   = Pool_mail_bluewhale
>>>>>>>>>>       PoolType               = Backup
>>>>>>>>>>       Storage                = Storage_bluewhale
>>>>>>>>>>       MaximumVolumeJobs      = 1
>>>>>>>>>>       CatalogFiles           = yes
>>>>>>>>>>       AutoPrune              = yes
>>>>>>>>>>       VolumeRetention        = 365 days
>>>>>>>>>>       Recycle                = yes
>>>>>>>>>>       LabelFormat            = "mail-"
>>>>>>>>>>    }
>>>>>>>>>>    Storage {
>>>>>>>>>>       Name                   = Storage_bluewhale
>>>>>>>>>>       Address                = 10.10.10.199
>>>>>>>>>>       SDPort                 = 9103
>>>>>>>>>>       Password               = "xxxxxxx"
>>>>>>>>>>       Device                 = File_bluewhale
>>>>>>>>>>       MediaType              = NAS_bluewhale
>>>>>>>>>>       MaximumConcurrentJobs  = 1
>>>>>>>>>>    }
>>>>>>>>>>    FileSet {
>>>>>>>>>>       Name = Full_mail_bluewhale
>>>>>>>>>>       Include {
>>>>>>>>>>          Options {
>>>>>>>>>>             signature=SHA1
>>>>>>>>>> #            readfifo=yes
>>>>>>>>>>          }
>>>>>>>>>>          File="/mail.tar"
>>>>>>>>>>       }
>>>>>>>>>>    }
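For context, the commented-out readfifo option above is what tells the FD to stream the backup data through the pipe at backup time and to write it back into the pipe at restore time; a restore of a FIFO-backed job therefore needs a reader attached to the pipe while the job runs. A sketch with the directive re-enabled (same names as the config above; illustrative, not a verified fix for this job):

```
FileSet {
   Name = Full_mail_bluewhale
   Include {
      Options {
         signature = SHA1
         readfifo  = yes   # stream data through the pipe, not the pipe inode
      }
      File = "/mail.tar"   # the FIFO path on the client
   }
}
```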
>>>>>>>>>>
>>>>>>>>>>
>>>>>>>>>>
>>>>>>>>>> Thanks for the help.   I appreciate all the input.
>>>>>>>>>>
>>>>>>>>>> --Kenny
>>>>>>>>>>
>>>>>>>>>>
>>>>>>>>>> On Sun, Sep 7, 2014 at 8:22 AM, Dan Langille <d...@langille.org>
>>>>>>>>>> wrote:
>>>>>>>>>>
>>>>>>>>>>> I suggest removing the before & after scripts.
>>>>>>>>>>>
>>>>>>>>>>> --
>>>>>>>>>>> Dan Langille
>>>>>>>>>>> http://langille.org/
>>>>>>>>>>>
>>>>>>>>>>>
>>>>>>>>>>> > On Sep 6, 2014, at 8:38 PM, Kenny Noe <knoe...@gmail.com>
>>>>>>>>>>> wrote:
>>>>>>>>>>> >
>>>>>>>>>>> > Dan,
>>>>>>>>>>> >
>>>>>>>>>>> > Appreciate the reply....   Yes, this is exactly what I want to
>>>>>>>>>>> > do.  However, when I try to just do a "simple" restore, the job
>>>>>>>>>>> > finishes with the error previously given.
>>>>>>>>>>> >
>>>>>>>>>>> > Any suggestions to do this would be appreciated.
>>>>>>>>>>> >
>>>>>>>>>>> > Thanks    --Kenny
>>>>>>>>>>> >
>>>>>>>>>>> >> On Sat, Sep 6, 2014 at 5:51 PM, Dan Langille <
>>>>>>>>>>> d...@langille.org> wrote:
>>>>>>>>>>> >>
>>>>>>>>>>> >> On Sep 5, 2014, at 5:48 PM, Kenny Noe <knoe...@gmail.com>
>>>>>>>>>>> wrote:
>>>>>>>>>>> >>
>>>>>>>>>>> >> Birre,
>>>>>>>>>>> >>
>>>>>>>>>>> >> Thanks for the reply.   I guess this is where I get lost...
>>>>>>>>>>> >>
>>>>>>>>>>> >>
>>>>>>>>>>> >>
>>>>>>>>>>> >> The fifo reads a file called mail.tar that was created by the
>>>>>>>>>>> >> pre-process.  The mail.tar is made from the following
>>>>>>>>>>> >> directories: /opt/zimbra and /var/mail/zimbra.  This is where
>>>>>>>>>>> >> the Zimbra files and mailstore were kept.
>>>>>>>>>>> >>
>>>>>>>>>>> >> This pre-process is a script that contains the following:
>>>>>>>>>>> >>
>>>>>>>>>>> >> MailBackup.bash
>>>>>>>>>>> >> #!/bin/bash
>>>>>>>>>>> >>
>>>>>>>>>>> >> exec >/dev/null
>>>>>>>>>>> >>
>>>>>>>>>>> >> MKDIR="/bin/mkdir"
>>>>>>>>>>> >> MKFIFO="/usr/bin/mkfifo"
>>>>>>>>>>> >> RM="/bin/rm"
>>>>>>>>>>> >> TAR="/bin/tar"
>>>>>>>>>>> >>
>>>>>>>>>>> >> DEFCODE=0
>>>>>>>>>>> >> DUMPBASE="/data/backups"
>>>>>>>>>>> >>
>>>>>>>>>>> >> errCode=${DEFCODE}
>>>>>>>>>>> >> mailDir="/var/mail/zimbra"
>>>>>>>>>>> >> zimbraDir="/opt/zimbra"
>>>>>>>>>>> >>
>>>>>>>>>>> >> Main()
>>>>>>>>>>> >>   {
>>>>>>>>>>> >>   service zimbra stop
>>>>>>>>>>> >>
>>>>>>>>>>> >>   RunMailRestore
>>>>>>>>>>> >>
>>>>>>>>>>> >>   service zimbra start
>>>>>>>>>>> >>
>>>>>>>>>>> >>   ExitScript ${errCode}
>>>>>>>>>>> >>   }
>>>>>>>>>>> >>
>>>>>>>>>>> >> RunMailRestore()
>>>>>>>>>>> >>   {
>>>>>>>>>>> >>   EXTENSION=".tar"
>>>>>>>>>>> >>
>>>>>>>>>>> >>   dumpDir="${DUMPBASE}/mail"
>>>>>>>>>>> >>   fifoDir="${dumpDir}/fifo"
>>>>>>>>>>> >>
>>>>>>>>>>> >>   RebuildFifoDir
>>>>>>>>>>> >>
>>>>>>>>>>> >>   ${MKFIFO} ${fifoDir}/mail${EXTENSION}
>>>>>>>>>>> >>
>>>>>>>>>>> >>   ${TAR} -xpf ${fifoDir}/mail${EXTENSION} 2>&1 </dev/null &
>>>>>>>>>>> >>   }
>>>>>>>>>>> >>
>>>>>>>>>>> >> RebuildFifoDir()
>>>>>>>>>>> >>   {
>>>>>>>>>>> >>   if [ -d ${fifoDir} ]
>>>>>>>>>>> >>   then
>>>>>>>>>>> >>      ${RM} -rf ${fifoDir}
>>>>>>>>>>> >>   fi
>>>>>>>>>>> >>
>>>>>>>>>>> >>   ${MKDIR} -p ${fifoDir}
>>>>>>>>>>> >>   }
>>>>>>>>>>> >>
>>>>>>>>>>> >> ExitScript()
>>>>>>>>>>> >>   {
>>>>>>>>>>> >>   exit ${1}
>>>>>>>>>>> >>   }
>>>>>>>>>>> >>
>>>>>>>>>>> >> Main
>>>>>>>>>>> >>
>>>>>>>>>>> >> The restore script simply does a tar xpf instead of a tar cpf.
>>>>>>>>>>> >>
>>>>>>>>>>> >>
>>>>>>>>>>> >> Perhaps instead of doing that, just restore the data, and then
>>>>>>>>>>> >> do the tar xpf later.
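The "ERR=Interrupted system call" on the fifo path earlier in the thread is consistent with this: opening a FIFO for writing blocks until a reader attaches, so the FD can never complete the open unless a tar (or other) consumer is already running. A minimal, self-contained demonstration using only temporary paths (the file name merely mirrors the thread's):

```shell
# Writing to a FIFO only completes once a reader is attached; start the
# consumer first, then the producer -- the same ordering MailBackup.bash
# relies on with its backgrounded "tar -xpf".
dir=$(mktemp -d)
fifo="$dir/mail.tar"          # illustrative name only
mkfifo "$fifo"

cat "$fifo" > "$dir/out" &    # reader attached first, in the background
reader=$!

echo "payload" > "$fifo"      # now the writer's open() can succeed
wait "$reader"

result=$(cat "$dir/out")
rm -rf "$dir"
echo "reader received: $result"
```

Without the backgrounded reader, the `echo` line would block indefinitely (or fail if the open is interrupted by a signal), which matches the behavior the restore job is showing.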
>>>>>>>>>>>
>>>>>>>>>>
>>>>>>>>>>
>>>>>>>>>>
>>>>>>>>>> ------------------------------------------------------------------------------
>>>>>>>>>> Want excitement?
>>>>>>>>>> Manually upgrade your production database.
>>>>>>>>>> When you want reliability, choose Perforce
>>>>>>>>>> Perforce version control. Predictably reliable.
>>>>>>>>>>
>>>>>>>>>> http://pubads.g.doubleclick.net/gampad/clk?id=157508191&iu=/4140/ostg.clktrk
>>>>>>>>>> _______________________________________________
>>>>>>>>>> Bacula-users mailing list
>>>>>>>>>> Bacula-users@lists.sourceforge.net
>>>>>>>>>> https://lists.sourceforge.net/lists/listinfo/bacula-users
>>>>>>>>>>
>>>>>>>>>>
>>>>>>>>>
>>>>>>>>>
>>>>>>>>>
>>>>>>>>
>>>>>>>>
>>>>>>>
>>>>>>>
>>>>>>>
>>>>>>
>>>>>>
>>>>>
>>>>>
>>>>>
>>>>>
>>>>>
>>>>
>>>>
>>>>
>>>>
>>>>
>>>
>>>
>>>
>>
>>
>


-- 
============================================
Heitor Medrado de Faria | Need Bacula training? 10% discount coupon code at
Udemy: bacula-users
<https://www.udemy.com/bacula-backup-software/?couponCode=bacula-users>
+55 61 2021-8260
+55 61 8268-4220
Site: www.bacula.com.br
Facebook: heitor.faria <http://www.facebook.com/heitor.faria>
Gtalk: heitorfa...@gmail.com
============================================
