The text below was posted on MXG-L recently. It made me curious, so I tried to read
the APARs mentioned. Unfortunately, IBM's support site no longer has them (for
public access).
Can someone shed some light on this? What was the original problem? Why did it
increase CPU time for many STCs *over time*? Which path length was increased?
Just curious.
Peter



Posted on MXG-L
My 2003 Newsletter has this note:

29. APAR OW54622 introduced an SQA overflow into CSA condition that
    increased CPU time for many STCs over time; the new GETMAIN larger
    than FREEMAIN was corrected by APAR OW55360.  It has long been known
    that when SQA is too small and expands into the CSA area, path
    lengths are dramatically increased; you can detect this condition in
    MXG dataset TYPE78VS variables SQAEXPNx.
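The note says the overflow condition can be detected from the SQAEXPNx variables in the MXG dataset TYPE78VS. As a minimal sketch, assuming those observations have been exported (for instance to CSV or dicts) and that a nonzero SQAEXPNx counter indicates an SQA expansion into CSA during that RMF interval, the check could look like this; the variable names SQAEXPN0/SQAEXPN1 and their exact meaning are assumptions here, not taken from MXG documentation:

```python
# Hypothetical sketch: flag RMF intervals where SQA expanded into CSA.
# Assumes each TYPE78VS observation is a dict and that any nonzero
# SQAEXPNx counter marks an SQA-into-CSA expansion (an assumption).

def flag_sqa_overflow(observations, prefix="SQAEXPN"):
    """Return the observations with any nonzero SQA-expansion counter."""
    flagged = []
    for obs in observations:
        counters = {k: v for k, v in obs.items() if k.startswith(prefix)}
        if any(v > 0 for v in counters.values()):
            flagged.append(obs)
    return flagged

# Illustrative sample data, not real SMF records:
sample = [
    {"SYSTEM": "SYSA", "SQAEXPN0": 0, "SQAEXPN1": 0},
    {"SYSTEM": "SYSB", "SQAEXPN0": 3, "SQAEXPN1": 1},
]
print(flag_sqa_overflow(sample))  # only the SYSB interval is flagged
```

In real use you would of course run this over the full TYPE78VS extract per system and interval, rather than inline sample rows.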


Unfortunately, I do NOT know whether the claim that path lengths, and hence
CPU time, increase when SQA overflows into CSA still holds today, and I
cannot find the "long known" source.


--
Peter Hunkeler



----------------------------------------------------------------------
For IBM-MAIN subscribe / signoff / archive access instructions,
send email to [email protected] with the message: INFO IBM-MAIN