On Wed, 19 Nov 2025, at 23:33, Andrew Rowley wrote:

> "Help! We need to apply this PTF urgently, but we discovered that some 
> of the DDDEFs point to a previous version of the product which we 
> cleaned up 3 months ago, and the apply job fails. How do we fix it?"

About 35 years ago I thought this sort of thing needed looking for,
so I wrote code which started with a list of CSIs. For each CSI it
found the list of zones within it, then, for each zone, all the
DDDEFs within.

It then compared the form of each DDDEF with all the others of the
same name.  Obviously it knew our naming conventions and expected
to see those used consistently, but it also looked, e.g., for all
such defs to refer to catalogued datasets, or for all to use
explicit unit/vol locations (subject, I expect, to knowing which
volumes were online to which LPARs, i.e. the volume-naming
conventions).  I can't recall whether the code also looked at the
type of cataloguing (all explicit volsers, or VOL(******), and
whether it looked for certain datasets either to be on sysreses,
or not), nor whether I compared the LRECL, RECFM and number of
directory blocks for each DDDEF.  (I hope I did.)

I can't recall how often the batch job that did this ran - maybe
weekly?
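The cross-check described above can be sketched roughly as follows. This is a minimal illustration in Python (the original code long predates it, and would have driven SMP/E itself); the data model, the function name `check_dddefs`, and the sample attribute keys are all assumptions, not SMP/E APIs.

```python
from collections import defaultdict

def check_dddefs(zones):
    """zones: {zone_name: {dddef_name: {attribute: value}}}.

    Groups DDDEFs by name across all zones, then flags any attribute
    whose value differs between zones for the same DDDEF name."""
    by_name = defaultdict(dict)
    for zone, dddefs in zones.items():
        for name, attrs in dddefs.items():
            by_name[name][zone] = attrs

    problems = []
    for name, per_zone in by_name.items():
        # Every attribute key seen anywhere for this DDDEF name.
        keys = set().union(*(attrs.keys() for attrs in per_zone.values()))
        for key in sorted(keys):
            values = {zone: attrs.get(key) for zone, attrs in per_zone.items()}
            if len(set(values.values())) > 1:
                problems.append((name, key, values))
    return problems
```

For example, a DDDEF still pointing at a cleaned-up previous version of a dataset in one zone (the situation quoted at the top) shows up as a single mismatch on the dataset-name attribute.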


A simpler version of the code looked just at the SMPLOG and
SMPLOGA DDDEFs, enforcing the use of named datasets for each (so,
certainly not just SYSOUT).  Any which were getting too full were
renamed, backed up and migrated by HSM, and a fresh empty log
file created in their place.

This possibly ran every night, since most SMP/E logs would not get
written to much for days or weeks at a time, and then there'd be a
surge of activity.
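The rotation decision in that nightly job can be sketched like this. Again a hedged illustration only: the threshold, the function name `rotate_if_full`, and the dated backup-name scheme are assumptions; the real job renamed the dataset, let HSM back it up and migrate it, and allocated a fresh empty log.

```python
import datetime

def rotate_if_full(log_name, used_pct, threshold=80):
    """If the log dataset's space usage is at or above the threshold,
    return a dated name to rename it to (so HSM can back it up and
    migrate it, and a fresh empty log can be allocated under the
    original name); otherwise return None."""
    if used_pct < threshold:
        return None
    stamp = datetime.date.today().strftime("%y%m%d")
    return f"{log_name}.D{stamp}"
```

A mostly idle log stays put; one filling up gets a rename target such as SMPE.SMPLOG.D251119.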


Does SMP/E now come with utilities to do this sort of thing? 

-- 
Jeremy Nicoll - my opinions are my own.

----------------------------------------------------------------------
For IBM-MAIN subscribe / signoff / archive access instructions,
send email to [email protected] with the message: INFO IBM-MAIN
