As I removed automation from my recently shut-down system, I installed an 
MPF exit from the 
CBT Tape (I think file 882). This MPF2REXX exit simply issued F AXR,msgid for the 
messages it was active for.

I suspect I could have extended this to notice tape mount activity (requests, 
mounts, and releases) in order to monitor drive allocation and react to a 
high-water limit. 
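The counting logic described here (and in the MPF exit mentioned below) could be sketched roughly as follows. This is only an illustration in Python standing in for exit logic; the limit, class name, job names, and unit addresses are all made up, and a real implementation would inspect tape UCBs and issue a CANCEL or operator alert:

```python
# Hypothetical sketch of per-job tape drive tracking with a high-water limit.
# A real MPF exit would derive jobname/unit from the mount messages and UCBs.
from collections import defaultdict


class TapeDriveMonitor:
    """Tracks tape drive allocations per job name against a site limit."""

    def __init__(self, limit):
        self.limit = limit
        self.drives = defaultdict(set)  # jobname -> set of allocated unit addresses

    def mount(self, jobname, unit):
        """Record an allocation; return True if the job now exceeds the limit."""
        self.drives[jobname].add(unit)
        return len(self.drives[jobname]) > self.limit

    def release(self, jobname, unit):
        """Record a drive being freed by the job."""
        self.drives[jobname].discard(unit)


# Illustrative use: a job grabbing drives one by one until it trips the limit.
mon = TapeDriveMonitor(limit=2)
mon.mount("PAYROLL", "0A80")
mon.mount("PAYROLL", "0A81")
over = mon.mount("PAYROLL", "0A82")  # third drive exceeds the limit of 2
print(over)  # True -> at this point the exit would cancel the job or alert ops
```

The same structure works whether the reaction is a hard cancel (as in the exit described below) or just a WTO to warn Operations before the drive pool is exhausted.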

> -----Original Message-----
> From: IBM Mainframe Discussion List <[email protected]> On
> Behalf Of Mark Jacobs
> Sent: Monday, November 07, 2022 3:22 PM
> To: [email protected]
> Subject: Re: Limiting quantity of tape drives used by user
> 
> [EXTERNAL EMAIL]
> 
> A very long time ago at $previousjob-1 we used an MPF exit for messages
> such as TMS001, IEC101A, IEC501A, IEC501E, IEF233A and IEF233D. This exit
> looked at the tape UCBs for the job name of the newly allocated drive and if
> the job exceeded a defined limit the exit canceled the job.
> 
> I do have the source code available if someone wants to read some real ugly
> code.
> 
> Mark Jacobs
> 
> Sent from ProtonMail, Swiss-based encrypted email.
> 
> GPG Public Key -
> https://api.protonmail.ch/pks/lookup?op=get&[email protected]
> 
> 
> ------- Original Message -------
> On Monday, November 7th, 2022 at 6:06 PM, Glenn Miller
> <[email protected]> wrote:
> 
> 
> > We encountered the exact same problem recently. In our situation, a batch
> > job using the IBM DB2 DSNUTILB utility attempted to perform a DB2 Reorg of
> > a partitioned DB2 table with 500 partitions. The batch job was submitted by
> > a TSO user, not our TWS job scheduler. The job slowly allocated about 470
> > virtual tape drives over about 90 minutes until it received IEF877E
> > followed by IEF238D. Unfortunately, our automation replied WAIT to the
> > IEF238D and replied NOHOLD to the subsequent IEF433D. That left Operations
> > unaware of the 'problem' until nearly an hour later, when HSM needed a
> > virtual tape drive.
> > A cancel of that 'misbehaving' batch job cleared up the problem; however,
> > we have been struggling to find a way to prevent the problem, or to alert
> > our Operations folks in real time that a 'not good' situation may be
> > occurring. As of today, we don't have a good answer for either part.
> > Glenn Miller
> >
> > ----------------------------------------------------------------------
> > For IBM-MAIN subscribe / signoff / archive access instructions,
> > send email to [email protected] with the message: INFO IBM-MAIN
> 

