I had actually responded to my own question after some learning and tinkering on Friday. Unfortunately, when I reply to list messages, they only get sent to the original author... *sigh*
This is the content of the message I (unknowingly) sent only to myself:

I decided that the best way would be to write a query that would do exactly that!  Only I didn't know diddly about SQL.  Note, that was past-tense.  Now I have a bit of knowledge about SQL, at least enough to write the following query, presented below as if it were added to the query.sql file bacula uses when you enter "query" in bconsole:

# 17
:List Elapsed Backup Times for all jobs in the last 48 hours.
SELECT DISTINCT Job.Name AS Job, Client.Name AS Client, StartTime,
  TIMEDIFF(EndTime,StartTime) AS Elapsed, EndTime
  FROM Client,Job
 WHERE TIMESTAMPDIFF(HOUR,EndTime,CURRENT_TIMESTAMP)<='48'
   and TIMESTAMPDIFF(HOUR,EndTime,CURRENT_TIMESTAMP)>'0'

Sure, there's some refinements that I'll probably add, but the basic concept is there.  As the description says, it only lists jobs for the last 48 hours.  Here it is again with the join between Job and Client that the first version was missing:

# 17
:List Elapsed Backup Times for all jobs in the last 48 hours.
SELECT DISTINCT Job.Name AS Job, Client.Name AS Client, StartTime,
  TIMEDIFF(EndTime,StartTime) AS Elapsed, EndTime
  FROM Client,Job
 WHERE Job.ClientId=Client.ClientId
   and TIMESTAMPDIFF(HOUR,EndTime,CURRENT_TIMESTAMP)<='48'
   and TIMESTAMPDIFF(HOUR,EndTime,CURRENT_TIMESTAMP)>'0'

On Saturday 19 May 2007 5:23:50 pm Arno Lehmann wrote:
> Hi,
>
> On 5/17/2007 4:04 PM, Flak Magnet wrote:
> > What is the best method for determining the total time it took an
> > already completed job to finish once it actually started?
>
> That's tricky...
>
> > Parsing the log file?
>
> At least, all the information you need is in there, I believe.
>
> > A series of commands from bconsole?
>
> At least I don't know which commands I'd use.
>
> > Bweb?
>
> I don't have enough experience with bweb to know.
>
> > I have multiple jobs "queuing" up at one time which wait for each
> > other to run based on "max concurrent jobs = 1" jobs on the SD's.
> > I'd like a quick way to figure out how long a job takes.
>
> Hmm... sounds like a feature request :-)
> (Not absolutely sure, though.)
>
> > I'm not worried about the "waiting on [whatever]" time, just the time
> > between when the job actually starts reading/writing data on the
> > FD/SD and when it completes.
>
> I'm not sure this information is readily available in the catalog.  The
> reports have the necessary information.
>
> Parsing the report may be difficult... the ones I look at now all use
> spooling, without spooling things look a little bit different.
>
> > 19-May 08:20 goblin-dir: Start Backup JobId 9971,
> > Job=BeowulfStd.2007-05-19_08.20.01
> ...
> > 19-May 08:26 goblin-sd: Spooling data ...
>
> The above would indicate when the actual backup starts.
>
> > 19-May 08:36 goblin-sd: Job write elapsed time = 00:09:31, Transfer
> > rate = 1.346 M bytes/second
>
> Spooling is finished now.
>
> > 19-May 08:36 goblin-sd: Committing spooled data to Volume
> > "DAT-120-0021". Despooling 769,858,302 bytes ...
> > 19-May 08:47 goblin-sd: End of Volume "DAT-120-0021" at 11:9070 on
> > device "HP DAT 0" (/dev/nst1). Write of 64512 bytes got -1.
> ...
>
> The lines above would be ignored when looking for the job's time, I
> suppose.
>
> > 19-May 08:49 goblin-sd: New volume "DAT-120-0032" mounted on device
> > "HP DAT 0" (/dev/nst1) at 19-May-2007 08:49.
> > 19-May 08:53 goblin-sd: Despooling elapsed time = 00:14:52, Transfer
> > rate = 863.0 K bytes/second
>
> Now the data is completely on tape.
>
> > 19-May 08:53 goblin-sd: Sending spooled attrs to the Director.
> > Despooling 226,543 bytes ...
>
> Despooling takes place.
> > 19-May 08:53 beowulf-fd: ClientRunAfterJob: Deleting temporary files
>
> The job itself is finished.
>
> So, I'd grep for lines with "Start Backup JobId" to find the actual
> start time, and "Sending spooled attrs" is the first report line after
> the data is on tape.  Using the time difference between those lines
> would be a reasonable job time, IMO.
>
> The catalog has this information:
>
> mysql> select StartTime,EndTime from Job where JobId=9971;
> +---------------------+---------------------+
> | StartTime           | EndTime             |
> +---------------------+---------------------+
> | 2007-05-19 08:26:01 | 2007-05-19 08:54:02 |
> +---------------------+---------------------+
>
> which is about identical to the job report times, it just includes the
> attribute despooling time.
>
> So, once again, I'd use the catalog :-)
>
> For example, with my rather old MySQL version, I can do:
>
> mysql> select Name,sec_to_time(EndTime-StartTime) AS Runtime from Job
> where JobId>9970;
> +---------------+----------+
> | Name          | Runtime  |
> +---------------+----------+
> | BeowulfStd    | 00:46:41 |
> | Goblin        | 00:05:51 |
> | ElfSys        | 02:20:13 |
> | ElfSrv        | 00:05:39 |
> | ElfHome       | 00:01:31 |
> | Balrog        | 00:03:50 |
> | GoblinDB      | 00:00:05 |
> | BackupMail    | 00:43:12 |
> | BackupCatalog | 01:48:33 |
> | Shutdown      | 00:00:00 |
> +---------------+----------+
> 10 rows in set (0.00 sec)
>
> which looks quite fine... more recent MySQL versions or PostgreSQL
> should have other functions to format the output.
>
> Note that the resulting times do include all the time the jobs wait for
> free resources, like storage or database access once they were started.
>
> Arno
>
> > TIA.

--
Flak Magnet (Tim)
www.flakmagnet.com
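
P.S.  One possible refinement along the lines mentioned above, sketched
here but not tested against a real catalog; it assumes the stock Bacula
MySQL catalog schema, where Job.Type = 'B' marks backup jobs and
Job.JobStatus = 'T' marks jobs that terminated normally:

-- Untested sketch: completed backup jobs from the last 48 hours,
-- slowest first.  The Type/JobStatus filters are assumptions about the
-- catalog contents, not something I've verified on my own setup.
SELECT DISTINCT Job.Name AS Job, Client.Name AS Client, StartTime,
  TIMEDIFF(EndTime,StartTime) AS Elapsed, EndTime
  FROM Client,Job
 WHERE Job.ClientId=Client.ClientId
   and Job.Type='B'
   and Job.JobStatus='T'
   and EndTime > DATE_SUB(NOW(), INTERVAL 48 HOUR)
 ORDER BY Elapsed DESC

On PostgreSQL, TIMEDIFF() and TIMESTAMPDIFF() don't exist, but simply
selecting (EndTime - StartTime) AS Elapsed should give an interval in
much the same format.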