[email protected] (Tony Harminc) writes:
> Not quite sure what you're saying. The old, constrained-memory
> technique was usually to issue a variable (Vx) GETMAIN, specifying the
> minimum required size as the low bound, and the maximum useful as the
> high. Then the system returns the actual amount obtained, or a return
> code or abend if it can't deliver even the minimum. In pre-MVS
> systems, the REGION= was a hard control on the max; in MVS, REGION=
> turned into a per-use limit on Vx GETMAINs, with the hard limit being
> the available private area.
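The quoted Vx GETMAIN protocol (ask for a min..max range, get back the actual amount obtained, or a failure if even the minimum can't be delivered) can be sketched as a minimal simulation — not the actual MVS service, just the request/response shape:

```python
# Hedged sketch of the variable (Vx) GETMAIN request protocol described
# above. The caller supplies a minimum and maximum size; the "system"
# grants as much as it can up to the maximum, or fails (the "abend"/
# return-code case) if it cannot supply even the minimum.

def vx_getmain(available, min_len, max_len):
    """Return (granted, remaining) or raise MemoryError."""
    if available < min_len:
        raise MemoryError("cannot satisfy even the minimum request")
    granted = min(available, max_len)
    return granted, available - granted

# usage: 600K free in the region, request between 256K and 1M
granted, left = vx_getmain(600 * 1024, 256 * 1024, 1024 * 1024)
print(granted, left)   # grants all 600K, leaving 0
```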

pre-MVS, ... application storage management was so bad that it required
regions typically four times larger than actually used ... a typical
1mbyte 370/165 would only run four regions ... and systems were
becoming increasingly I/O bound (i.e. CPU speeds were increasing much
faster than disk speeds, so keeping high-end CPUs busy required a lot
more concurrent multitasking).

the justification for moving all 370s to virtual memory was that it
would be possible to increase the number of regions on a 1mbyte 370/165
by a factor of four with little or no paging. old post with excerpts
from a person involved in the decision
http://www.garlic.com/~lynn/2011d.html#73 Multiple Virtual Memory
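The factor-of-four argument above can be shown as back-of-envelope arithmetic (the numbers are illustrative assumptions from the text, not measurements):

```python
# Sketch of the factor-of-four region argument: regions were allocated
# ~4x larger than the storage actually touched, so with virtual memory
# only the touched pages need real frames (illustrative numbers).

real_memory  = 1024                  # 1mbyte 370/165, in KB
region_alloc = 256                   # typical region allocation, in KB
working_set  = region_alloc // 4     # storage actually used (~1/4)

# real-memory regions: each region ties up its full allocation
real_regions = real_memory // region_alloc       # 4 regions

# virtual-memory regions: only the working set needs real frames
virtual_regions = real_memory // working_set     # 16 regions

print(real_regions, virtual_regions)   # 4 16 -- four times as many
```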

trivia: I had a big argument with the POK people doing the page
replacement algorithm. They eventually said it didn't matter anyway
... because there would be almost *NO* paging. It was several years into
MVS releases before somebody "discovered" a side-effect of the
implementation: MVS was replacing high-use shared LINKPACK pages before
low-use application data pages. past posts mentioning paging algorithm
implementations
http://www.garlic.com/~lynn/subtopic.html#clock

MVS ran into a different kind of problem ... the os/360 API paradigm
was extensively pointer-passing. As a result, an 8mbyte image of the
MVS system was included in every application's 16mbyte virtual address
space (a pointer was passed in the system call, and since system code
was part of every address space, the old os/360 practice of directly
using the pointer address continued to work).

The problem was that subsystem APIs were also pointer based ... and now
subsystems were all in their own address spaces. For MVS subsystem
calls, the common segment area (CSA) was invented: it appeared in every
address space and was used for allocating space for subsystem API
parameters (reducing application space to 7mbytes out of 16mbytes).
However, CSA size requirements were somewhat proportional to the number
of subsystems and the number of concurrent applications ... CSA
frequently grew to multiple segments and morphed into the common system
area. By the 3033, CSA space requirements were frequently 5-6mbytes for
many customers (leaving only 2mbytes for applications) and threatening
to reach 8mbytes (leaving zero bytes for applications)

Eventually part of the 370-xa access registers was retrofitted to 370
as dual-address space mode ... subsystems could be enhanced to directly
access application space w/o requiring CSA (the person responsible for
the dual-address space retrofit left not long after for HP, to work on
their RISC processors).

other trivia: in the early 80s I was saying that disk performance had
lagged so badly that, since the 60s, relative disk system throughput
had declined by a factor of ten (i.e. disks got 3-5 times faster while
processors got 50 times faster). The disk division took exception and
assigned the division performance organization to refute my claims.
After a few weeks, they came back and basically said that I had
understated the "problem". They respun the analysis into a SHARE
presentation (B874 at SHARE 63) recommending disk configurations to
improve throughput.
old post with part of the early 80s comparison
http://www.garlic.com/~lynn/93.html#31
old posts with pieces of B874
http://www.garlic.com/~lynn/2001l.html#56
http://www.garlic.com/~lynn/2006f.html#3

for the 3081 & 3880-11 paging caches ... processor memories were
becoming comparable in size to ... or even larger than ... the paging
area ... so I did a variation that dynamically switched between what I
called "dup" (duplicate) and "no-dup". Under "dup", when a page was
read into processor memory, the original was left allocated (duplicates
in memory and on disk); if that page was later replaced and had not
been changed, it could just be invalidated (and didn't have to be
written out), since an exact copy was already on disk. For large
processor memory, it could dynamically switch to "no-dup", and a read
into processor memory would always deallocate the copy on disk (using
the 3880-11 no-cache read: if a copy was in cache, it was read and
removed from cache; if not in cache, it was read from disk, bypassing
the cache).

The "dup" issue was if aggregate 3880-11 wasn't much larger than
processor memory, then nearly every page in 3880-11 would also in
processor memory.  The converse if a page was needed not in processor
memory, then it would unlikely be in 3880-11 cache memory. Moving to
"no-dup" means that a page in processor memory would almost never be in
3880-11 cache, so there is room for pages (not in processor memory) that
might be needed in processor memory.
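The two policies above can be sketched as a minimal simulation (the data structures are illustrative; this is not the actual VM/3880-11 code):

```python
# Hedged sketch of the "dup" vs "no-dup" paging policy described above.
# "dup": keep the disk copy after page-in, so an unchanged page can be
#        replaced by simply invalidating it (no page-out I/O).
# "no-dup": page-in deallocates the disk/cache copy, so cache space is
#        not wasted duplicating pages already in processor memory.

class PagingStore:
    def __init__(self, policy="dup"):
        self.policy = policy    # "dup" or "no-dup"
        self.on_disk = set()    # pages with a valid copy in the paging area
        self.writes = 0         # page-out I/Os actually performed

    def page_in(self, page):
        if self.policy == "no-dup":
            # the read deallocates the disk/cache copy
            self.on_disk.discard(page)
        # under "dup", the disk copy stays allocated as a duplicate

    def replace(self, page, changed):
        if self.policy == "dup" and not changed and page in self.on_disk:
            # unchanged, exact copy already on disk: just invalidate
            return
        self.on_disk.add(page)
        self.writes += 1        # page-out required

# usage: under "dup", replacing an unchanged page costs no write
store = PagingStore("dup")
store.on_disk.add(42)
store.page_in(42)
store.replace(42, changed=False)
print(store.writes)   # 0 -- invalidated without a page-out
```

The trade-off in the paragraphs above falls out directly: "dup" saves page-out I/O for unchanged pages, while "no-dup" keeps the cache from filling with duplicates of pages already resident in (large) processor memory.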

trivia: a search engine query for 3880-11 turns up mostly my old posts
... but there is this "IBM" item ... pg32 ... it says 3880-11 32mbyte,
but the 3880-11 had only an 8mbyte cache; getting 32mbytes required
four controllers. The later 3880-21 had a 32mbyte cache.
https://www-01.ibm.com/events/wwe/grp/grp019.nsf/vLookupPDFs/7%20-%20VM-45-JahreHistory-EA-J-Elliott%20%5BKompatibilit%C3%A4tsmodus%5D/$file/7%20-%20VM-45-JahreHistory-EA-J-Elliott%20%5BKompatibilit%C3%A4tsmodus%5D.pdf

I've used similar dup/no-dup analysis (for large processor memories) up
to current day. past posts mentioning 3880-11/ironwood, dup/no-dup:
http://www.garlic.com/~lynn/2000d.html#13 4341 was "Is a VAX a mainframe?"
http://www.garlic.com/~lynn/2004g.html#17 Infiniband - practicalities for small clusters
http://www.garlic.com/~lynn/2004g.html#18 Infiniband - practicalities for small clusters
http://www.garlic.com/~lynn/2004g.html#20 Infiniband - practicalities for small clusters
http://www.garlic.com/~lynn/2005m.html#28 IBM's mini computers--lack thereof
http://www.garlic.com/~lynn/2006c.html#8 IBM 610 workstation computer
http://www.garlic.com/~lynn/2007c.html#0 old discussion of disk controller cache
http://www.garlic.com/~lynn/2010.html#47 locate mode, was Happy DEC-10 Day
http://www.garlic.com/~lynn/2010i.html#20 How to analyze a volume's access by dataset
http://www.garlic.com/~lynn/2011.html#68 Speed of Old Hard Disks
http://www.garlic.com/~lynn/2012c.html#47 nested LRU schemes
http://www.garlic.com/~lynn/2015e.html#18 June 1985 email
http://www.garlic.com/~lynn/2017b.html#32 Virtualization's Past Helps Explain Its Current Importance

-- 
virtualization experience starting Jan1968, online at home since Mar1970

----------------------------------------------------------------------
For IBM-MAIN subscribe / signoff / archive access instructions,
send email to [email protected] with the message: INFO IBM-MAIN