[email protected] (Mike Myers) writes:
> I said that once the think time expires, the TSO user's address space
> is swapped out (physically), but only if there is a need to use its
> main storage pages to satisfy the needs of other address spaces. Until
> that time expires and the storage need arises, the address space
> remains logically swapped out (as long as they remain
> "not-ready"). Becoming ready while logically swapped causes them to
> transition from logically swapped out to swapped in.
>
> In today's systems with large main storage, an address space may
> remain logically swapped out for an indefinite period of time. In
> today's world, think time (a setting found in the IEAOPTxx member of
> the system parameter library - xxx.PARMLIB) is still used to determine
> when a logically swapped out address space becomes a "candidate" for a
> physical swap out. Again, that only happens if there is a need to take
> those pages away for someone else.
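The swap-state transitions described in the quote can be modeled as a small state machine. This is only a sketch; the state and event names are illustrative, not actual MVS SRM internals:

```python
def next_state(state, event):
    """Toy model of the TSO swap transitions described above.

    States: IN (swapped in), LOGICAL (logically swapped out, pages still
    in main storage), CANDIDATE (think time expired, eligible for a
    physical swap), PHYSICAL (physically swapped out).
    """
    transitions = {
        ("IN", "not_ready"): "LOGICAL",               # user goes idle
        ("LOGICAL", "think_time_expired"): "CANDIDATE",
        ("CANDIDATE", "storage_needed"): "PHYSICAL",  # pages taken for others
        ("LOGICAL", "becomes_ready"): "IN",           # no swap-in I/O needed
        ("CANDIDATE", "becomes_ready"): "IN",
        ("PHYSICAL", "becomes_ready"): "IN",          # requires swap-in I/O
    }
    return transitions.get((state, event), state)     # other events: no change
```

Note that without a storage shortage an address space never leaves CANDIDATE, matching the observation that it may remain logically swapped out indefinitely.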


MFT & MVT conventions were very pointer-passing API oriented. The
transition from MVT to OS/VS2 release 1 ... SVS ... was fairly
straightforward ... since it was a single virtual address space ... all
pointers were still in the same address space.

The biggest issue was that channel programs were built by library code in
application space and passed to the supervisor via EXCP/SVC0 ... these
were now being built with virtual addresses, while the channel requires
channel programs to have real addresses. The initial implementation
involved borrowing CCWTRANS from (virtual machine) CP67 (precursor to
vm370 that ran on 360/67) to build translated copies of the channel
programs with real addresses.
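The core of the CCWTRANS job can be sketched as follows: build a shadow copy of the channel program with each virtual data address replaced by the corresponding real address. This is a minimal sketch with an assumed toy page table; the real code also handles TIC chains, data chaining across page boundaries (IDAWs), and fixing the pages in storage for the duration of the I/O:

```python
PAGE_SIZE = 4096

def ccw_translate(virtual_ccws, page_table):
    """Toy version of the CCWTRANS idea: the channel cannot follow virtual
    addresses, so the supervisor builds a shadow channel program in which
    each data address has been translated to a real address.

    virtual_ccws: list of (opcode, virtual_addr, count) tuples
    page_table:   dict mapping virtual page number -> real page frame number
    """
    real_ccws = []
    for opcode, vaddr, count in virtual_ccws:
        vpage, offset = divmod(vaddr, PAGE_SIZE)
        frame = page_table[vpage]  # page must be resident (and fixed) for I/O
        real_ccws.append((opcode, frame * PAGE_SIZE + offset, count))
    return real_ccws
```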

I had rewritten cp67 virtual memory operation as an undergraduate in the
60s ... and was asked to visit and talk about what was being done for
SVS virtual memory operations. I pointed out several things that they
were doing that were flat-out wrong ... but they insisted on going ahead
anyway.

In the transition from SVS to MVS ... each application was given its own
virtual address space ... but an image of the MVS kernel was included
(taking 8mbytes of the 16mbyte virtual address space), in large part
because of the pointer-passing API ... the called routine/service needing
to access the parameter list pointed to by the pointer. However, numerous
subsystem services that were outside the kernel ... now also had their
own virtual address spaces ... and also relied on the pointer-passing API
from applications (now in different address spaces). In order to support
subsystems accessing application parameters, the common segment was
created ... initially a 1mbyte area that resided in every address space
... into which applications would stuff parameters to exchange with
subsystems in different address spaces. As systems grew, requirements for
common segment space outgrew one megabyte ... becoming the CSA (common
system area). By the late 370 period, large customer installations would
have 4-5mbytes of CSA threatening to grow to 5-6mbytes ... restricting
applications to only 2mbytes (out of 16mbytes). To partially mitigate
some of this ... some of the 370-xa architecture was retrofitted to the
3033 as "dual-address space" ... allowing a called subsystem (in a
separate address space) to access the calling application's address
space.
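The reason a common segment solves the cross-address-space pointer problem is that it occupies the same virtual addresses in every address space. A toy illustration (the addresses, placement, and class names are made up for the sketch):

```python
# Toy illustration of the common-segment idea: the same storage appears at
# the same virtual addresses in every address space, so a pointer into it
# is valid no matter which address space dereferences it.

COMMON_BASE = 15 * 1024 * 1024   # illustrative placement of the common area
common_storage = {}              # one shared dict = the common segment

class Space:
    """A private address space plus the shared common segment."""
    def __init__(self):
        self.private = {}
    def store(self, addr, value):
        (common_storage if addr >= COMMON_BASE else self.private)[addr] = value
    def load(self, addr):
        return (common_storage if addr >= COMMON_BASE else self.private)[addr]

app, subsystem = Space(), Space()
app.store(COMMON_BASE + 0x100, "parameter list")  # app stuffs parameters
# a private-area pointer would be meaningless in the subsystem's space,
# but a common-segment pointer resolves to the same storage in both
```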

Early 1980s, somebody who had gotten an award for correcting several
virtual memory problems in MVS (still there from the original SVS)
contacted me about retrofitting the fixes to vm370. I commented that I
had not done them the wrong way as an undergraduate in the 60s
(rewriting cp67) ... so he was out of luck getting an award for fixing
vm370 also.

About this time, corporate had designated vm370/cms as the official
strategic interactive computing solution (in part because of the huge
explosion in the number of vm/4300 systems) ... causing concern for the
TSO product administrator. He contacted me about possibly being able to
help by porting my vm370 dynamic adaptive resource manager (also
originally done as an undergraduate in the 60s for cp67) to MVS. I
pointed out that MVS had numerous significant structural problems
affecting TSO response & throughput ... which just putting in my
resource manager wouldn't help. A big issue was multi-track search
... especially for PDS directory lookup ... which could lock out the
channel, controller, & drive for 1/3rd of a second at a time.
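The 1/3rd-second figure falls straight out of the drive geometry: a 3330-class drive rotates at 3600 RPM and has 19 tracks per cylinder, so a multi-track search that scans a full cylinder holds the channel, controller, and drive for roughly one revolution per track (assuming the worst case where the search covers the whole cylinder):

```python
rpm = 3600                # 3330-class drive rotation speed
revs_per_sec = rpm / 60   # = 60 revolutions per second
tracks_per_cyl = 19       # 3330 cylinder: 19 tracks
# a full-cylinder multi-track SEARCH costs about one revolution per track,
# with channel, controller, and drive all busy the whole time
search_time = tracks_per_cyl / revs_per_sec
print(f"full-cylinder search: {search_time:.3f} seconds")  # about 1/3 second
```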

I had been called into a number of internal and customer MVS
installation accounts where the performance problem turned out to be PDS
directory multi-track search. A trivial example was an internal account
with both MVS and VM370 machines with "shared" dasd controllers ... but
with an operational guideline that MVS packs were never mounted on vm370
controller strings. One day it accidentally happened, and within five
minutes vm370 users were making irate calls to the datacenter ... the
MVS multi-track search channel programs were locking out VM370/CMS
users, severely degrading their performance. MVS operations were asked
to move the pack but refused. So the VM370 group put up an enormously
optimized virtual VS1 system with its pack on an MVS string ... which
brought the MVS system to its knees and significantly reduced the MVS
impact on CMS I/O activity. (MVS operations agreed to move the MVS pack
off the vm370 string as long as the vm370 group never put the VS1 pack
back on their MVS string.)

misc. old email mentioning TSO product administrator asking me to port
my VM370 dynamic adaptive resource manager to MVS
http://www.garlic.com/~lynn/2006b.html#email800310
http://www.garlic.com/~lynn/2006v.html#email800310b

As an aside, the TSO product administrator obviously didn't know that,
at the time, the MVS group appeared to be trying to get me fired. I had
wandered into the disk engineering lab and noted that they were doing
stand-alone, dedicated, 7x24, around-the-clock scheduled test time on
their mainframes. They had previously tried to use MVS for anytime,
on-demand, concurrent testing ... but in that environment MVS had a
15min MTBF (requiring re-ipl). I offered to rewrite the I/O supervisor
to make it bullet-proof and never fail ... supporting concurrent,
anytime, on-demand testing ... greatly improving their throughput. I
then wrote up an internal report on the work and happened to mention the
MVS 15min MTBF, bringing down the wrath of the MVS group on my head
(which would periodically seem to reappear during the rest of my
career). misc. past posts mentioning getting to play disk engineer in
bldgs. 14&15
http://www.garlic.com/~lynn/subtopic.html#disk

-- 
virtualization experience starting Jan1968, online at home since Mar1970

----------------------------------------------------------------------
For IBM-MAIN subscribe / signoff / archive access instructions,
send email to [email protected] with the message: INFO IBM-MAIN
