We always seemed to be at the limit of our res pack, so after a new
install I would run a job to compress and trim (to zero) most PDSs,
with the idea that they would just go into extents as needed if updates
were made. We rarely updated a res pack in place, so there was usually
no problem with this plan. However, I believe we did leave room in some
of the datasets people mentioned here (LINKLIB and similar), which saved
us from LLA issues if we needed to make a mod to a running system.
These compress/trim jobs were outside the realm of the ServerPac dialog,
of course.
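
A minimal sketch of what such a job could look like (the dataset name
is a placeholder, and the RLSE trick only releases unused space when
the library is closed after output, so verify it against the JCL
reference before trusting it):

  //COMPRESS EXEC PGM=IEBCOPY
  //* Compress the PDS in place; RLSE asks the system to release
  //* unused space when IEBCOPY closes the library.
  //SYSPRINT DD SYSOUT=*
  //PDS      DD DSN=MY.TARGET.PDS,DISP=OLD,SPACE=(TRK,(0,0),RLSE)
  //SYSIN    DD *
    COPY INDD=PDS,OUTDD=PDS
  /*
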
On 7/21/2021 1:08 AM, Barbara Nitz wrote:
Hi Marna,
Making everything bigger is not a good option. Not everything "needs" to be bigger,
but there are those for which even 40% won't be enough. <snip> I think creating z/OSMF
product delivery without the ability to (easily) change the size and location of the
datasets is a bad idea.
I am with Brian on the subject of sizing. I would allocate lnklst (target)
datasets with no secondary extents, at 200% of what is used right now, and
increase directory sizes accordingly. And remember to increase the sizes of
the corresponding DLIB data sets. But that won't completely prevent the x37
abends (and I don't think there is a definitive cure for them). If it helps,
I can compile a list of which data sets blew up with x37 over the course of
installing 2.3 and 8 refreshes (we do a refresh twice a year). IIRC, it was
the datasets that new functions went into that blew up.
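
As an illustration (the names, volume, and sizes below are made up; a
library using 450 cylinders today would get a 900-cylinder primary, no
secondary, and a proportionally bigger directory):

  //ALLOC    EXEC PGM=IEFBR14
  //* New lnklst target library at 200% of current usage, no
  //* secondary extents, enlarged directory, kept uncatalogued
  //* on the new res volume.
  //LINKLIB  DD DSN=ZOS25.LINKLIB,DISP=(NEW,KEEP),
  //            UNIT=SYSALLDA,VOL=SER=NEWRES,
  //            SPACE=(CYL,(900,0,300)),
  //            DCB=(DSORG=PO,RECFM=U,BLKSIZE=32760)
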
I don't much care how converting PDS to PDSE is handled, but I also think that
z/OSMF absolutely *MUST* provide the ability to edit the jobs before they are
submitted.
I am wondering if it might be more useful to have a way of accommodating
the need for more space on an ongoing basis?
What would help, in my opinion, would be a pre-apply hold action for each dataset
that might blow up because a lot changed in it. Then it is my responsibility to
increase the size before the apply. Or z/OSMF could take a look and automatically
reallocate the target and DLIB data sets, copy the old contents in, and then use
the new data sets for the actual apply.
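
Done by hand, that reallocate-and-copy step would look roughly like
this (names, volume, and sizes are placeholders):

  //* 1. Allocate a bigger replacement for the target library
  //ALLOC    EXEC PGM=IEFBR14
  //NEWLIB   DD DSN=ZOS25.MIGLIB.NEW,DISP=(NEW,KEEP),
  //            UNIT=SYSALLDA,VOL=SER=NEWRES,
  //            SPACE=(CYL,(600,0,200)),
  //            DCB=(DSORG=PO,RECFM=U,BLKSIZE=32760)
  //* 2. Copy the current members across
  //COPY     EXEC PGM=IEBCOPY
  //SYSPRINT DD SYSOUT=*
  //IN       DD DSN=ZOS25.MIGLIB,DISP=OLD,
  //            UNIT=SYSALLDA,VOL=SER=NEWRES
  //OUT      DD DSN=ZOS25.MIGLIB.NEW,DISP=OLD,
  //            UNIT=SYSALLDA,VOL=SER=NEWRES
  //SYSIN    DD *
    COPY INDD=IN,OUTDD=OUT
  /*
  //* 3. Then rename the old library out of the way and the new one
  //*    into its place before the apply (IEHPROGM RENAME handles
  //*    uncatalogued data sets when you supply the volume).
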
Of course, that assumes that the apply is always done into inactive target data
sets. Mine are not even catalogued. Applying a large set of PTFs into the
active system is a bad idea anyway, in my opinion.
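
For the uncatalogued target libraries, SMP/E finds them through their
DDDEF entries, which carry the volume and unit, along these lines
(zone, CSI, and dataset names are made up):

  //DDDEFS   EXEC PGM=GIMSMP
  //SMPCSI   DD DSN=SMPE.GLOBAL.CSI,DISP=SHR
  //SMPOUT   DD SYSOUT=*
  //SMPCNTL  DD *
    SET BOUNDARY(TGT1).
    UCLIN.
      REP DDDEF(LINKLIB)
          DA(ZOS25.LINKLIB) SHR
          VOLUME(NEWRES) UNIT(SYSALLDA).
    ENDUCL.
  /*
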
My idea probably also runs counter to the way the full system replacement is
set up, both in the dialogs and in z/OSMF, because they IPL from the data sets
the install went into.
Regards, Barbara
----------------------------------------------------------------------
For IBM-MAIN subscribe / signoff / archive access instructions,
send email to lists...@listserv.ua.edu with the message: INFO IBM-MAIN
----------------------------------------------------------------------