[email protected] (Anne & Lynn Wheeler) writes:
> somebody in europe obtained the rights to a descendant of the
> "performance predictor" in the early 90s (in the period when the company
> had gone into the red and had been reorganized into the 13 "baby blues" in
> preparation for breaking up the company) and ran it through an APL to
> C-language translator. I ran into him last decade doing consulting work
> at large mainframe financial datacenters (operations with 40+ maxed-out
> mainframes, billion dollars+, machines constantly being upgraded, none
> older than 18 months ... these operations account for a major portion of
> annual mainframe revenue). I had found a 14% improvement in an application
> that ran every night on 40+ maxed-out (MVS) mainframes (the number of
> machines sized so the application finishes in the overnight batch window).
re: http://www.garlic.com/~lynn/2014b.html#81 CPU time trivia

... this particular application had a couple-decade history (at the time some 450k cobol statements) and a dedicated performance group of possibly 100 people. the issue was that they had gotten quite myopic in the techniques they used for looking at performance and throughput ... including lots of low-level hotspot analysis. the science center in the 70s used numerous techniques: system modeling, hot-spot analysis, multiple regression analysis, simulation, workload profiling, etc. it turns out the 14% was in a macro characteristic that wasn't evident at the low-level hotspot micro level (but represented a couple hundred million dollars in savings because of the large number of mainframes involved).

--
virtualization experience starting Jan1968, online at home since Mar1970

----------------------------------------------------------------------
For IBM-MAIN subscribe / signoff / archive access instructions,
send email to [email protected] with the message: INFO IBM-MAIN
