I don't know what is being "assumed" or "understood", but most of this 
thread is somewhat technically inaccurate. Some of that is perhaps 
terminology, some not.

So let's start here:

>VLF does not do caching.
In my view, this is at least misleading (although perhaps it's simply 
being loose with terminology). VLF's only job is to manage a cache. I view 
that as doing "caching". What VLF doesn't do is to decide what to put into 
the cache. That is left to each exploiter. All VLF exploiters do this. 
They identify an "object" and VLF caches the object (for later retrieval 
by the exploiter). An exploiter may choose to tell VLF to remove an object 
from the cache. LLA does this removal only when it has figured out the 
deletion of a module in a data set it is managing. 
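To make the division of labor concrete, here is a minimal sketch of a passive object cache whose contents are chosen entirely by its exploiters. The class and method names are my own inventions for illustration, not the real VLF interface (the text mentions the actual services COFCREAT and COFRETRI later on):

```python
# Illustrative sketch only: the cache manages storage, but never decides
# what goes in. An exploiter (such as LLA) makes every add/remove decision.
class ObjectCache:
    def __init__(self):
        self._objects = {}            # object name -> cached data

    def create(self, name, data):
        # exploiter says "cache this object"
        self._objects[name] = data

    def retrieve(self, name):
        # exploiter asks "do you still have it?"; None means "sorry, no"
        return self._objects.get(name)

    def delete(self, name):
        # exploiter says "remove this object" (e.g. LLA saw the member
        # deleted from a data set it manages)
        self._objects.pop(name, None)

cache = ObjectCache()
cache.create("MYMOD", b"module text")
assert cache.retrieve("MYMOD") == b"module text"
cache.delete("MYMOD")
assert cache.retrieve("MYMOD") is None
```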

>VLF does not start doing caching until ...
Again terminology. VLF never "starts" caching, it is told to cache. Or, 
perhaps you'd say that it starts caching when it is told to cache 
something. The sentiment expressed here, correctly, would probably have 
been "LLA does not ask VLF to cache a module until".

>Correct, that is why you also see Deletes in the LLA VLF 
>cache. If LLA has a module that qualifies for the cache, 
>it can delete a module that does not qualify anymore to 
>make room for the new one. Therefore you will see deletes 
>and adds of modules caused by LLA management of the cache, 
>and hardly any trims caused by VLF management of the cache.
>The other exploiters push 'everything' into the cache and 
>will find out later if the object is still there. VLF 
>hitratio and trim statistics are indeed useful here.

It is dangerous to make very direct statements like this without detailed 
knowledge of the internals. Unfortunately, the statement is not true. LLA 
does not "delete a module that does not qualify anymore" (as I mentioned 
earlier, the only specific deletion is for a deleted member). Regardless, 
VLF does trim the LLA class, just as it trims any other class when that 
class gets (in VLF's view) too full. VLF has no code that does anything 
specific for LLA -- it doesn't know about LLA, although perhaps (I have no 
idea) LLA is the only exploiter that saves its VLF data in two pieces (for 
LLA, those two pieces are the module text and the relocation information).

>The exception being when you cross the 90% utilization mark... 
Close. The trimming threshold for VLF happens to be 95%.

>Then trim is forced, so that the now eligible requested module 
>can be put into cache. So those modules eligible for trim get 
>marked (NOTE, that is MARKED) for deletion. If one of those 
>modules gets requested before the "delete" cycle hits, the delete 
>flag is turned off.

This led me to think you were thinking that this processing is 
synchronous. It isn't. Once the trimming threshold has been reached, VLF 
marks objects for potential deletion.
If the trimming has not yet occurred, then a new request will still be 
rejected for "out of space". It is true that if, in between the "marking" 
and the actual "deleting", VLF gets a request to retrieve the marked 
object, it will change its mind and not delete it (because the object is 
no longer least-recently-used).
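The asynchronous mark/unmark behavior can be sketched as follows. Apart from the 95% figure, everything here (structure, names, the simplification of marking every object rather than just the least-recently-used ones) is illustrative, not VLF internals:

```python
# Hedged sketch of the trimming behavior described above: crossing the
# threshold marks objects for deletion, new requests are still rejected
# until the delete cycle actually runs, and a retrieve in between clears
# the mark.
class TrimmingCache:
    THRESHOLD = 0.95                      # VLF's trimming threshold

    def __init__(self, max_bytes):
        self.max_bytes = max_bytes
        self.used = 0
        self.objects = {}                 # name -> [size, marked_for_deletion]

    def create(self, name, size):
        if self.used + size > self.max_bytes:
            return "out of space"         # rejected until trimming happens
        self.objects[name] = [size, False]
        self.used += size
        if self.used > self.THRESHOLD * self.max_bytes:
            self.mark_candidates()        # mark, but don't delete yet
        return "ok"

    def mark_candidates(self):
        # real VLF marks least-recently-used objects; for the sketch,
        # mark everything
        for entry in self.objects.values():
            entry[1] = True

    def retrieve(self, name):
        entry = self.objects.get(name)
        if entry:
            entry[1] = False              # no longer LRU: unmark it
        return entry is not None

    def trim(self):
        # the later, asynchronous "delete" cycle
        for name in [n for n, e in self.objects.items() if e[1]]:
            self.used -= self.objects[name][0]
            del self.objects[name]

cache = TrimmingCache(max_bytes=100)
cache.create("A", 50)
cache.create("B", 46)                     # crosses 95%: objects get marked
assert cache.create("C", 10) == "out of space"   # trim hasn't run yet
cache.retrieve("A")                       # clears A's delete mark
cache.trim()                              # B is deleted, A survives
assert cache.retrieve("A") and not cache.retrieve("B")
```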

>The '5 fetches' algorithm
It happens to be "10" (unless the CSVLLIX1 exit indicates otherwise). But 
reaching that number for any module is an "event" that kicks off staging 
analysis for just about everything (including things that have not been 
fetched that many times). 
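The fetch-count "event" might look like this. The threshold of 10 comes from the text (and in reality CSVLLIX1 can change it); the function and variable names are illustrative:

```python
# Hedged sketch: reaching the fetch threshold for any ONE module is an
# event that kicks off staging analysis for ALL modules, including those
# fetched fewer times.
FETCH_THRESHOLD = 10

def record_fetch(module, fetch_counts, analysis_runs):
    fetch_counts[module] = fetch_counts.get(module, 0) + 1
    if fetch_counts[module] == FETCH_THRESHOLD:
        # staging analysis examines everything, not just this module
        analysis_runs.append(dict(fetch_counts))

counts, runs = {}, []
for _ in range(3):
    record_fetch("SELDOM", counts, runs)    # only 3 fetches, no event
for _ in range(10):
    record_fetch("POPULAR", counts, runs)   # 10th fetch triggers the event

assert len(runs) == 1                       # one event, caused by POPULAR
assert "SELDOM" in runs[0]                  # but SELDOM was analyzed too
```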

>This indicates to me that VLF is very much involved in the 
>control of cache. If the weight assigned to the module, as you 
>pass through CSVLLIX2, prohibits caching, I believe it is VLF 
>that doesn't bother. 
It is LLA that doesn't bother, not VLF.

>After all, the trim code apparently is a VLF 
>module (I'm sorry, I can't remember if it is COFTRIM or VLFTRIM, 
>I only remember that TRIM is part of the name that STROBE 
>captured when we saw a COBOL program spending an inordinate 
>amount of time in "LOAD/LINK" "functions").
The VLF trimming module is COFMTRIM. And in many cases where MAXVIRT for a 
VLF class is too small, that module may do a lot of work. 

>If my understanding is incorrect, I really would like to know -- 
>because it means that I have greatly misinterpreted the stuff 
>I've read in various published manuals and other information 
>passed to me in trying to diagnose what I believe to have been 
>caused by too small of MAXVIRT for CSVLLA.
I suspect that overall your understanding is correct, but the specifics 
behind it might not be.

OK, here we go. I won't swear to all of this, but it's pretty likely to be 
correct.

LLA manages PDS(E) directories and modules. 
The directories (interacting with BLDL and DESERV) are kept in LLA private 
storage, along with all the rest of LLA's control data.
When LLA determines that a module should be cached (and determines that 
LLA itself is capable of caching it), it asks VLF to do the caching 
(COFCREAT).
The determination of caching involves many factors, including how often 
the module is fetched, how big it is, how much contention there appears to 
be on the device where the data set lives, and weights and a value 
identified by the CSVLLIX1 exit, all contributing to LLA's view of the 
value to the system of caching the module.
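One could imagine those factors combining roughly like this. The weights and formula are entirely invented for the sketch; LLA's actual computation is not documented here:

```python
# Illustrative only: a weighted "value to the system" of caching a module,
# combining fetch frequency, module size, device contention, and an
# exit-supplied adjustment. None of these numbers reflect LLA internals.
def caching_value(fetch_count, size_bytes, device_contention, exit_weight=1.0):
    frequency_benefit = fetch_count * device_contention  # I/O waits avoided
    size_cost = size_bytes / 4096                        # cache pages consumed
    return exit_weight * frequency_benefit / max(size_cost, 1)

# A small, frequently fetched module on a busy device scores highest:
assert caching_value(50, 8192, 3.0) > caching_value(2, 8192, 3.0)
assert caching_value(50, 8192, 3.0) > caching_value(50, 1_000_000, 3.0)
```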

On a fetch request, LLA is queried "have you stashed it in VLF" and if the 
answer is "yes", the fetch uses COFRETRI to get the data. If VLF says 
"sorry, I don't have it", then the LLA control blocks are updated so that 
the next time it will answer "no". Trimming is the normal reason for this 
occurrence.

Upon asking VLF to cache a module, LLA remembers that it asked and if the 
request was successful.
If LLA finds that a module it had successfully gotten cached is no longer 
deemed worthwhile, it does not tell VLF. It simply marks its own control 
structures so that a subsequent fetch of that module will not try 
retrieving it from VLF. From that point on, there won't be any case for 
that module of "fetched but not found in VLF", and it is quite likely that 
eventually VLF will trim that module. So perhaps there won't be much "did 
not find it" but there should be some "trims". I will admit that I have 
more or less never looked at the SMF data for the LLA class.
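The two paragraphs above amount to some simple bookkeeping on LLA's side, which can be sketched like this (all names invented; COFRETRI is the real retrieval service the text mentions). The key point is that both a VLF miss and LLA's own "no longer worthwhile" decision only flip LLA's flag; VLF is never told to delete anything:

```python
# Hedged sketch of LLA's bookkeeping around VLF.
def load_from_dataset(name):
    return b"module text for " + name.encode()   # stand-in for real I/O

def fetch(name, lla_flags, vlf_cache):
    if lla_flags.get(name):                 # "have you stashed it in VLF?"
        data = vlf_cache.get(name)          # COFRETRI in the text
        if data is not None:
            return data
        lla_flags[name] = False             # trimmed: answer "no" next time
    return load_from_dataset(name)

def demote(name, lla_flags):
    # module no longer deemed worthwhile: mark LLA's structures only;
    # VLF will eventually trim the now-unreferenced object on its own
    lla_flags[name] = False

lla_flags = {"MYMOD": True}
vlf_cache = {}                              # VLF has already trimmed MYMOD
fetch("MYMOD", lla_flags, vlf_cache)        # miss falls back to the data set
assert lla_flags["MYMOD"] is False          # next fetch won't ask VLF
```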

When the cache "fills up", LLA does nothing to remedy that. It relies on 
VLF trimming. LLA continues trying to cache things, and pays attention to 
whether trimming is taking place or not. If LLA has encountered trimming, 
then LLA will be less aggressive in trying to cache (i.e., it takes a 
higher value to warrant the attempt). If LLA has not encountered trimming, 
LLA will be more aggressive in trying to cache. 
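That feedback loop can be sketched as a threshold that rises when trimming has been observed and falls when it has not. The adjustment factor is invented for illustration:

```python
# Hedged sketch: the bar a module's caching value must clear moves up
# when VLF trimming has been encountered, and back down when it hasn't.
def adjust_threshold(threshold, trimming_seen, step=1.1):
    if trimming_seen:
        return threshold * step   # less aggressive: demand a higher value
    return threshold / step       # more aggressive: lower the bar

t = 100.0
t = adjust_threshold(t, trimming_seen=True)    # trimming seen: bar rises
assert t > 100.0
t = adjust_threshold(t, trimming_seen=False)   # quiet again: bar falls back
assert abs(t - 100.0) < 1e-9
```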

I referred earlier to "LLA ... capable of caching it". Program objects with 
multiple immediate-load segments (RMODE=SPLIT) and program objects with a 
deferred segment (C writeable static) are not cached by LLA in VLF. They 
are cached by PDSE processing itself. Normal program object fetch 
processing, which uses DIV, results in getting the data from the PDSE 
cache (that still being a lot faster than getting it from the data set).

Peter Relson
z/OS Core Technology Design

----------------------------------------------------------------------
For IBM-MAIN subscribe / signoff / archive access instructions,
send email to [email protected] with the message: INFO IBM-MAIN