Thanks to all who have been offering suggestions. I have been reading them and will try to incorporate as many of them as possible.
I have already reworked that little brain-dead Python script into something that uses a regular expression to pick off all of the data from each cost/timing line (including the first one) and tracks the hierarchy. I'll put all of these into the analysis database. Because of client operational problems I've been called in on, I haven't gotten much further than that so far. I'll try to firm up a proposed schema for the data soon, along with some idea of what a test case definition will look like. Then I'll probably have to set it aside for two or three weeks.

I'm attaching the current plan scanner for review, comment, and improvement. Someone may also want to look at the results from a few queries to get ideas going on how they want to use the data.

Regarding the idea of a site where results could be posted and loaded into a database available for public access: I agree that would be great; however, my client is not willing to take that on. If anyone wants to volunteer, that would be fantastic.

-Kevin


>>> "Jim C. Nasby" <[EMAIL PROTECTED]> >>>
On Fri, Oct 14, 2005 at 03:34:43PM -0500, Kevin Grittner wrote:
> of the two times as a reliability factor. Unfortunately, that
> means doubling the number of cache flushes, which is likely
> to be the most time-consuming part of running the tests. On
> the bright side, we would capture the top level runtimes you
> want.

Actually, if you shut down the database and run this bit of code with a
high enough number, you should have a nicely cleaned cache:

#include <stdio.h>
#include <stdlib.h>

/* Allocate (and zero) argv[1] megabytes so the OS pushes everything
 * else out of its page cache. */
int main(int argc, char *argv[])
{
    if (!calloc(atoi(argv[1]), 1024 * 1024))
        printf("Error allocating memory.\n");
    return 0;
}

Running that on a dual Opteron (842s, I think) gives:

[EMAIL PROTECTED]:35]~:10>time ./a.out 3300
3.142u 8.940s 0:40.62 29.7% 5+4302498k 0+0io 2pf+0w

That was on http://stats.distributed.net and resulted in about 100MB
being paged to disk. With 3000 it only took 20 seconds, but it might not
have cleared 100% of memory.
parse_plan.py
Description: Binary data
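In case the attachment doesn't come through the archives in usable form, here is
a rough sketch of the approach described above. This is a hypothetical
reconstruction, not the attached parse_plan.py: the regular expression, field
names, and output format are all assumptions made for illustration. It matches
the estimated and actual figures on every cost/timing line (including the
top-level one) and uses indentation depth to track each node's place in the
plan hierarchy.

#!/usr/bin/env python
# Hypothetical sketch of a regex-based EXPLAIN ANALYZE scanner -- not the
# attached parse_plan.py.  It pulls the estimated and actual figures off
# every cost/timing line (including the first one) and uses indentation
# to track each node's place in the plan hierarchy.

import re
import sys

# Matches lines such as:
#   Seq Scan on foo  (cost=0.00..155.00 rows=10000 width=4)
#                    (actual time=0.012..3.456 rows=10000 loops=1)
NODE_RE = re.compile(
    r'^(?P<indent>\s*)(?:->\s*)?(?P<node>.+?)\s+'
    r'\(cost=(?P<start_cost>[\d.]+)\.\.(?P<total_cost>[\d.]+)'
    r'\s+rows=(?P<est_rows>\d+)\s+width=(?P<width>\d+)\)'
    r'(?:\s+\(actual time=(?P<start_time>[\d.]+)\.\.(?P<total_time>[\d.]+)'
    r'\s+rows=(?P<act_rows>\d+)\s+loops=(?P<loops>\d+)\))?'
)

def scan(lines):
    """Yield (parent_id, node_id, fields) for each plan node found."""
    stack = []          # (indent, node_id) pairs for open ancestor nodes
    next_id = 0
    for line in lines:
        m = NODE_RE.match(line.rstrip('\n'))
        if not m:
            continue
        indent = len(m.group('indent'))
        # Pop siblings and finished subtrees; whatever remains is the parent.
        while stack and stack[-1][0] >= indent:
            stack.pop()
        parent = stack[-1][1] if stack else None
        next_id += 1
        stack.append((indent, next_id))
        yield parent, next_id, m.groupdict()

if __name__ == '__main__':
    for parent, node, fields in scan(sys.stdin):
        print(parent, node, fields['node'].strip(),
              fields['total_cost'], fields['total_time'])

A scanner along these lines can be fed EXPLAIN ANALYZE output directly, e.g.
psql -c 'EXPLAIN ANALYZE ...' | python parse_plan.py, and the
(parent, node, fields) tuples it yields map naturally onto rows in the
analysis database.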