>So if I have 5 3390-27 locals and they are all equally used at 50%, the
>algorithms (CPU usage, not I/O) are going to pick one of them, then do the
>page outs.  That paging will find contiguous slots and should be efficient.
>
>BTW, this is just an example, we still try to keep our 3390-27 local usage
>at 30% just like we always did with smaller local page datasets in
>the past.
>
>I wonder what if any studies on this have been done in the lab.
>It would be nice if an IBM performance expert like Kathy Walsh
>could weigh in.

I had the 'honour' of deleting and adding several local page data sets on 
several LPARs. They were a mixture of 1.10 and 1.12, I think. What I did 
observe (and it clashed with what I thought I knew about ASM) is the 
following:

1) Adding one or more locals, I expected them to first fill up to about the 
same percentage as the ones already in use (same size page data sets, much 
faster -new- controller). No such luck. It looked to me like *all* of them 
were filling up, percentage-wise, in about the same manner. Meaning that the 
'old' locals had about 27%, the new ones right after the add 0%. A day later 
the old ones had 35%, the new ones 8%. About the same behaviour when adding 
locals of the same size on the same controller - we only have one DASD 
controller per sysplex; the only time we had two was when we migrated from 
one to the other.

2) A pagedel finishes MUCH faster than it ever did. It looked like ASM was 
actively shifting slots away from the to-be-deleted page data set. A pagedel 
now finishes in under a minute. This used to be a really slow process because 
nothing was actively done.

3) In one case, I had two locals and pagedel'd one of them. Usage of the 
remaining one went up to 46%, pagedel was extremely fast. I then added a new 
local (on a new, different, much faster controller). Usage of that one stayed 
at 0% for a long time, which also surprised me.

4) I like the ASM health check that tells us when usage is 30% or more. (In 
fact, I send an automated exception email every time this happens.) I hate 
that ASM does not recognize that a new page data set was added. That health 
check stupidly doesn't recognize a changed config and still spits out the 
warning. ASM also doesn't do anything active to level out slot usage on a 
new local. Usage only levels out after the next IPL.

5) I wonder if the behaviour I witnessed is due to applications (written by 
clickers with no z/OS clue) taking *a lot* - in the GB range - of above- 
the-bar storage, getting it backed by 'initializing' it and then never using 
it, causing all those backed frames to become slots for the life of the IPL. 
Yes, I am talking primarily about the stuff that has feet in OMVS, where 
typically clickers write the code.

6) I bemoan IBM's failure to give us a good means of figuring out who is 
using HVSHARE or HVCOMMON storage and how much storage above the bar is 
actually *used*, i.e. backed. As far as I know, there still isn't any 
tracking done for HVCOMMON storage, and no means of reporting on it. No way 
to know who is excessively using common storage above the bar. Same for 
HVSHARE. Unless you're named Jim Mulder and know where to look in a dump, we 
lowly customers cannot even check that in a dump. Am I mistaken about the 
reporting capabilities? Has that been fixed by now? Or is it another means 
of IBM trying to sell $$$$ software service contracts to get that done only 
by IBM? Not to mention the frustration until you find someone who can 
actually *do* it.

Barry, thank you very much for pointing out the MXG SLOTUTIL thing. I am now 
off to read in the TYPE71 records and draw nice coloured pictures!
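The slot-utilization number itself is simple once the records are read in - 
used slots over total slots. Here is a minimal Python sketch of that 
computation against the 30% threshold mentioned above; the field names and 
sample values are made up for illustration and are not the real SMF type 71 
record layout or MXG variable names:

```python
# Hypothetical sketch: per-interval local page data set slot utilization,
# flagged against the 30% threshold.  Fields ("interval", "used", "total")
# are illustrative stand-ins for values derived from SMF type 71 data.
def slot_util(used_slots: int, total_slots: int) -> float:
    """Percent of local page data set slots in use."""
    return 100.0 * used_slots / total_slots

samples = [
    {"interval": "09:00", "used": 270_000, "total": 1_000_000},
    {"interval": "09:15", "used": 350_000, "total": 1_000_000},
]

# intervals that would trip a 30%-or-more exception
warnings = [s["interval"] for s in samples
            if slot_util(s["used"], s["total"]) >= 30.0]
print(warnings)  # ['09:15']
```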

Barbara

----------------------------------------------------------------------
For IBM-MAIN subscribe / signoff / archive access instructions,
send email to [email protected] with the message: INFO IBM-MAIN