Still struggling with why this feature isn't working for me. I have set the following in bacula-dir.conf for the job in question:
Job {
  Name = "TEST-001-BKP"
  Type = Backup
  JobDefs = DefaultJob
  Client = test-fd
  FileSet = "TEST-BackupFileSet"
  Schedule = SCH_TESTING
  Storage = BackupStorage
  Messages = Standard
  Pool = DefaultPool
  Full Backup Pool = FullPool
  Incremental Backup Pool = IncrementalPool
  Differential Backup Pool = DifferentialPool
  # Client Run After Job = "/bin/rm -rf /tmp/khushil/*"
  RunScript {
    RunsWhen = After
    RunsOnSuccess = Yes
    RunsOnFailure = No
    RunsOnClient = Yes
    Command = "/bin/rm -rf /tmp/khushil/*"
  }
}
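
One variant worth trying, on the assumption that the fd executes the command directly rather than through a shell (in which case the * glob would never be expanded, even though the log reports the command as run):

RunScript {
  RunsWhen = After
  RunsOnSuccess = Yes
  RunsOnFailure = No
  RunsOnClient = Yes
  # Assumes a normal /bin/sh exists on the client; the shell expands the * glob
  Command = "/bin/sh -c '/bin/rm -rf /tmp/khushil/*'"
}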
This is the output in the logs:
06-Nov 12:16 MigWebSSCbkp1-dir JobId 295: There are no more Jobs associated with Volume "BKP_FULL_0060". Marking it purged.
06-Nov 12:16 MigWebSSCbkp1-dir JobId 295: All records pruned from Volume "BKP_FULL_0060"; marking it "Purged"
06-Nov 12:16 MigWebSSCbkp1-dir JobId 295: Recycled volume "BKP_FULL_0060"
06-Nov 12:16 MigWebSSCbkp1-dir JobId 295: Using Device "BackupStorageLocation"
06-Nov 12:16 MigWebSSCbkp1-sd JobId 295: Recycled volume "BKP_FULL_0060" on device "BackupStorageLocation" (/webssc_bkp/bacula/backups), all previous data lost.
06-Nov 12:16 MigWebSSCbkp1-dir JobId 295: Max Volume jobs exceeded. Marking Volume "BKP_FULL_0060" as Used.
06-Nov 12:16 MigWebSSCbkp1-sd JobId 295: Job write elapsed time = 00:00:01, Transfer rate = 886 bytes/second
06-Nov 12:25 oracle-fd JobId 295: ClientAfterJob: run command "/bin/rm -rf /tmp/khushil/*"
06-Nov 12:16 MigWebSSCbkp1-dir JobId 295: Bacula MigWebSSCbkp1-dir 2.2.5 (09Oct07): 06-Nov-2007 12:16:49
  Build OS:               i686-pc-linux-gnu redhat Enterprise release
  JobId:                  295
  Job:                    TEST-001-BKP.2007-11-06_12.16.03
  Backup Level:           Full
  Client:                 "test-fd" 2.2.5 (09Oct07) i686-pc-linux-gnu,redhat,Enterprise release
  FileSet:                "TEST-BackupFileSet" 2007-11-06 11:58:48
  Pool:                   "FullPool" (From Job FullPool override)
  Storage:                "BackupStorage" (From Job resource)
  Scheduled time:         06-Nov-2007 12:16:45
  Start time:             06-Nov-2007 12:16:48
  End time:               06-Nov-2007 12:16:49
  Elapsed time:           1 sec
  Priority:               10
  FD Files Written:       2
  SD Files Written:       2
  FD Bytes Written:       698 (698 B)
  SD Bytes Written:       886 (886 B)
  Rate:                   0.7 KB/s
  Software Compression:   None
  VSS:                    no
  Encryption:             no
  Volume name(s):         BKP_FULL_0060
  Volume Session Id:      1
  Volume Session Time:    1194351392
  Last Volume Bytes:      1,728 (1.728 KB)
  Non-fatal FD errors:    0
  SD Errors:              0
  FD termination status:  OK
  SD termination status:  OK
  Termination:            Backup OK
06-Nov 12:16 MigWebSSCbkp1-dir JobId 295: Begin pruning Jobs.
06-Nov 12:16 MigWebSSCbkp1-dir JobId 295: No Jobs found to prune.
06-Nov 12:16 MigWebSSCbkp1-dir JobId 295: Begin pruning Files.
06-Nov 12:16 MigWebSSCbkp1-dir JobId 295: No Files found to prune.
06-Nov 12:16 MigWebSSCbkp1-dir JobId 295: End auto prune.
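
For what it's worth, the same command can be checked by hand on the client, as root like the fd runs, to rule out globbing or permissions problems (this assumes a normal /bin/sh on the client):

ls -la /tmp/khushil/
/bin/sh -c '/bin/rm -rf /tmp/khushil/*'
ls -la /tmp/khushil/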
Any ideas, chaps?
________________________________
From: [EMAIL PROTECTED] [mailto:[EMAIL PROTECTED]] On Behalf Of Dep, Khushil (GE Money)
Sent: 06 November 2007 10:28
To: bacula-users@lists.sourceforge.net
Subject: [Bacula-users] ClientRunAfterJob Problems
Hey all,
I've set up my jobs with the line ClientRunAfterJob = "rm -rf /path/to/source/*", which, as I understand it, should issue that command on the client after the job runs. Looking at the job log on the director, it is issuing the command, and yet it does not seem to be executing on the client, as the files are still present. Any ideas? The Bacula processes on both the dir and the fd are running as root.
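
As I read the manual, ClientRunAfterJob is shorthand for a client-side RunScript; a minimal sketch of what I take to be the equivalent long form, keeping the placeholder path from above:

RunScript {
  RunsWhen = After
  RunsOnSuccess = Yes
  RunsOnFailure = No
  RunsOnClient = Yes
  Command = "rm -rf /path/to/source/*"
}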
Khush.