-----Original Message-----
From: IBM Mainframe Discussion List [mailto:[email protected]] On Behalf Of Steve Thompson
Sent: 26 November, 2014 21:08
To: [email protected]
Subject: Re: VLF caching
On 11/26/2014 11:19 AM, Bob Shannon wrote:
<snippage>
> I tend to agree with the OP. LLA will only check VLF for modules it
> previously cached. Granted there may be some trimming, but I don't ever
> recall seeing less than a 100% hit ratio for LLA. The other VLF exploiters
> behave differently and the SMF statistics for them tend to be helpful. The
> CSVLLIX1 exit is required for accurate LLA fetch statistics.
>
> Bob Shannon
> Rocket Software
<SNIPPAGE>

Wouldn't the LLA hit rate be based on its having the directory information, as opposed to having to go read it (something about FREEZE vs. NOFREEZE)? Then wouldn't the VLF data be based on an attempt to fetch a module it doesn't have, so that you get a "cache" miss?

After all, VLF does not start caching until a module meets the requirements (what, 5 fetches inside of x seconds?). Then there is a second cache trigger: some number of hits on a library that already has some number of modules cached, after which VLF starts caching any module requested... [Sorry, a bit hazy on the exact numbers -- did not commit them to memory.]

And I can see behavior that "backs this" when manually monitoring with MFM. The exception is when you cross the 90% utilization mark: then a trim is forced so that the newly eligible requested module can be put into cache. Modules eligible for trim get marked (NOTE, that is MARKED) for deletion. If one of those modules is requested before the "delete" cycle hits, the delete flag is turned off.

More to tuning this sucker than I really wanted to get into. Hence my prior comment about a certain ISV and their products that pre-date LLA (Library Lookaside; not sure about Link List Lookaside) and VLF.

Regards,
Steve Thompson

--------

No, here I read a common misconception about how LLA and VLF work. LLA module caching and directory freeze are separate functions. Directories are kept entirely in LLA's private storage; modules are cached in VLF.
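The "mark for deletion, then reprieve on reuse" trim behavior described above is essentially a second-chance eviction scheme. Here is a minimal sketch of that idea; all class and method names are invented for illustration, the "mark everything" trim policy is deliberately crude, and none of this is VLF's actual implementation:

```python
# Illustrative second-chance trim cache: when utilization crosses a
# high-water mark, current entries are flagged for deletion; a later
# access clears the flag, and only entries still flagged at the next
# delete cycle are actually evicted. Names and policy are hypothetical.

class TrimCache:
    def __init__(self, high_water=0.90, capacity=100):
        self.capacity = capacity
        self.high_water = high_water
        self.entries = {}              # module name -> marked-for-delete flag

    def access(self, name):
        """Fetch a module; a hit clears any pending delete mark (a 'reprieve')."""
        if name in self.entries:
            self.entries[name] = False     # reuse before the delete cycle: unmark
            return True
        if len(self.entries) / self.capacity >= self.high_water:
            self.mark_for_trim()           # force a trim so the new module fits
        self.entries[name] = False         # stage the newly eligible module
        return False

    def mark_for_trim(self):
        """Flag current entries as trim-eligible (crude illustrative policy)."""
        for name in self.entries:
            self.entries[name] = True

    def delete_cycle(self):
        """Evict only entries still marked since the last mark pass."""
        self.entries = {n: f for n, f in self.entries.items() if not f}
```

A module that is touched between the mark pass and the delete cycle survives; one that stays cold is evicted, which matches the MARKED-then-deleted sequence in the post.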
LLA fetches a module from the VLF cache only if it knows the module is still there, hence the 100% VLF hit ratio. VLF does not do the caching itself; VLF exploiters cache objects into the VLF cache (LLA, TSO CLIST, Catalog, etc.). The "5 fetches" algorithm, together with some complex calculations about memory use, cache efficiency, etc., is done by LLA to determine whether a module is going to be staged to VLF.

Kees.

----------------------------------------------------------------------
For IBM-MAIN subscribe / signoff / archive access instructions,
send email to [email protected] with the message: INFO IBM-MAIN
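Kees's point is that the staging decision lives in the exploiter, not in VLF. A minimal sketch of that exploiter-side decision might look like the following; the threshold and time window are made-up illustrative values, not LLA's real tuning, and the class is hypothetical:

```python
# Illustrative exploiter-side staging decision: count a module's recent
# fetches and stage it into the cache only once it looks "hot" (N fetches
# within a sliding time window). Threshold and window are assumptions.

from collections import defaultdict, deque

class StagingDecider:
    def __init__(self, threshold=5, window=60.0):
        self.threshold = threshold           # fetches needed before staging
        self.window = window                 # seconds the fetches must fall within
        self.fetch_times = defaultdict(deque)
        self.staged = set()

    def record_fetch(self, module, now):
        """Record a fetch; return True once the module is (or becomes) staged."""
        if module in self.staged:
            return True                      # already staged by the exploiter
        times = self.fetch_times[module]
        times.append(now)
        while times and now - times[0] > self.window:
            times.popleft()                  # drop fetches outside the window
        if len(times) >= self.threshold:
            self.staged.add(module)          # hot enough: stage into the cache
            return True
        return False
```

The point of the separation mirrors the post: the cache (VLF) just stores and retrieves named objects, while each exploiter applies its own heuristics about what is worth staging.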
