Many moons ago, in a galaxy far away, I learned to sort my input into
sequence, move to REGION=0M, and add a large BUFND and a small BUFNI to the
VSAM file's DD statement, then watch the SRB time drop to negligible.


Sent from my iPhone

No one said I could type with one thumb 

> On Jan 19, 2025, at 08:45, Joel Ewing 
> <0000070400eb8eab-dmarc-requ...@listserv.ua.edu> wrote:
> 
> It's incredible how much CPU (and clock time) can be expended if access to a
> file is doing "unnecessary" I/O operations. One would hope there aren't still
> any large, poorly-blocked sequential files around these days, but if default
> buffering is used on such a file, it can cost a lot of operating-system
> resources to initiate many reads per logical track, even if emulated DASD is
> caching tracks to minimize physical reads to the media.
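> 
> A sketch of that sort of buffering fix for a QSAM reader (the data set name
> and BUFNO value are invented for illustration):
> 
> //* Extra QSAM buffers mean fewer I/O operations started per unit of data;
> //* BUFNO=30 is an arbitrary example, not a tuned value.
> //SEQIN    DD DSN=OLD.POORBLK.FILE,DISP=SHR,DCB=BUFNO=30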
> 
> VSAM file performance can vary even more widely, because the number of blocks
> read can literally vary by three or more orders of magnitude depending on
> access pattern and buffer tuning. A badly tuned, randomly accessed VSAM file
> may have to re-read multiple index CI blocks and a data CI block for each
> record accessed. This can result in millions of index block reads, even if
> there are only 100 index CIs. I have seen cases where proper tuning of a
> high-usage VSAM file with better buffering or BLSR cut the clock run time and
> CPU usage of a batch job by a factor of 10 or more.
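> 
> For the archives, the no-source-change BLSR wrapper looked something like
> this (DD names and buffer counts invented for the example):
> 
> //* The program still opens CUSTMST; the BLSR subsystem fronts the real
> //* data set with a local shared resource pool. Counts are illustrative.
> //CUSTMST  DD SUBSYS=(BLSR,'DDNAME=CUSTMST1','BUFND=100','BUFNI=50')
> //CUSTMST1 DD DSN=PROD.CUSTOMER.KSDS,DISP=SHR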
> 
> So, yes, this makes sense. An older shop that hasn't taken the time to
> revisit old job streams and re-tune file access that was designed decades
> ago, when real memory was constrained and expensive, should definitely do so.
> It may now be practical to buffer all the index CIs in memory, or in some
> cases it even makes sense to change a heavily used VSAM file into an
> in-memory data structure.
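> 
> For the 100-index-CI case above, that could be as simple as the following
> (counts again illustrative, sized to the example):
> 
> //* BUFNI sized so every index CI stays resident after its first read;
> //* 101 covers the ~100 index CIs in the example with one to spare.
> //VSAMRND  DD DSN=PROD.CUSTOMER.KSDS,DISP=SHR,
> //            AMP=('BUFNI=101,BUFND=10')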
> 
>     JC Ewing
> 
>> On 1/19/25 9:06 AM, Robert Prins wrote:
>> From LinkedIn:
>> 
>> <quote>
>> 2 weeks ago I received the analysis data from a new client that wanted to
>> reduce their CPU consumption and improve their performance. They sent me
>> the statistical data from the 10 LPARs on their z16: information about
>> 89,000+ files. I analyzed their data and found 2,000+ files *that could be
>> improved* and would save CPU when improved. *I pulled out 1 file to
>> demonstrate a Proof of Concept (POC) for the client. I had the client run
>> the POC and it showed a 29% reduction in CPU every time that file is used.
>> The 29% did not include 3 other major adjustments that would save an
>> additional 14% CPU and cut the I/O by 75%.* This is just 1 file. The other
>> files can save 3% to 52% of their CPU every time they are used in BATCH or
>> ONLINE.
>> </quote>
>> 
>> I've been a programmer on IBM systems since 1985, and the above doesn't
>> make any sense to me: how can changing just one file result in a 43% (the
>> 29% plus the additional 14%) reduction in CPU usage?
>> 
>> I've only ever used PL/I, and with it I did manage to make some
>> improvements to code, including reducing the CPU usage of a CRC routine by
>> an even larger amount, 99.7% (yes, ninety-nine-point-seven percent), but
>> that was because the old V2.3.0 PL/I Optimizing compiler was absolute shite
>> at handling unaligned bit-strings. But WTH can you change about a *file* to
>> get the above reduction in CPU?
>> 
>> Robert
> 
> --
> Joel C Ewing
> 

----------------------------------------------------------------------
For IBM-MAIN subscribe / signoff / archive access instructions,
send email to lists...@listserv.ua.edu with the message: INFO IBM-MAIN
