[email protected] (Tony Harminc) writes:
> The overlay scheme used in HASP II had fixed-sized modules that were
> read into an available area without relocation. If the space was
> needed, when the first module got control again it could be loaded at
> a different address. But the trick was that these tasks were never
> preempted, so it was permissible to have a register containing an
> address within the module, as long as it was made relative before
> (loosely) calling the dispatcher, which might result in relocation.

re:
http://www.garlic.com/~lynn/2014d.html#25 [OT ] Mainframe memories
http://www.garlic.com/~lynn/2014d.html#27 [OT ] Mainframe memories
http://www.garlic.com/~lynn/2014d.html#30 [OT ] Mainframe memories

for other topic drift ... I first modified HASP for release 15/16 to add
2741 & tty terminal support for an online conversational editor
... implementing CMS editor syntax (it had to be redone from scratch
since the cms execution/programming environment was completely different
from hasp's). of course I thought it was much better than what they came
out with for TSO. past posts mentioning HASP, HASP networking, JES2,
and/or NJE
http://www.garlic.com/~lynn/submain.html#hasp

that summer, I was sucked into going to Boeing (still an undergraduate)
to help set up Boeing Computer Services (consolidating dataprocessing in
an independent business unit to better monetize the investment). 747#3
was flying the skies of seattle for FAA certification.

I thought that the renton datacenter was possibly the largest in the
world (several hundred million in 360s). that summer there was a flow of
360/65s constantly coming in, faster than they could be installed
... there were always pieces of 360/65s being staged in the hallways
around the machine room. There was a disaster scenario where Mt. Rainier
heats up, causing a mudslide that takes out the renton datacenter. The
estimate was that losing the renton datacenter for a week would cost the
company more than the cost of the datacenter itself ... so they were in
the process of replicating it at the new 747 plant up in everett.

they also got a 360/67 in the corporate datacenter (across from boeing
field), which previously had only a single 360/30 for running company
payroll.

that summer I modified cp67 to support a "pageable" kernel. The standard
cp67 kernel was fixed-loaded at boot time. I restructured low-usage
pieces of the kernel into fixed-size 4kbyte pages ... which could use
the standard paging i/o system for bringing them in and removing
them. However, the cp67 kernel ran non-translate mode ... so the changes
were somewhat analogous to what you describe for HASP II. While a lot of
my code from undergraduate days was picked up and shipped in CP67
... the pageable kernel change didn't show up in the product until
vm370.

posts mentioning dynamic adaptive resource management
http://www.garlic.com/~lynn/subtopic.html#fairshare
posts mentioning kernel paging & algorithm rewrites
http://www.garlic.com/~lynn/subtopic.html#wsclock

that summer they also brought the duplex (multiprocessor) 360/67 up to
seattle from boeing huntsville. it had been originally ordered for
tss/360 ... but never got to the point of production use. As a result,
Huntsville started out running the duplex as two 360/65s with
os/360. The primary application was numerous 2250 graphic devices used
for physical design. The problem was that OS/360 storage allocation had
fragmentation that significantly worsened for long-running
applications.

Boeing Huntsville had modified OS/360 MVT release 13 ... to run in
virtual memory mode on the 360/67 ... it didn't actually use virtual
memory for paging operations ... it just used the virtual memory
hardware to address the OS/360 storage fragmentation problem
(exacerbated by long-running applications).

I've mentioned before that there were a number of os/360 subsystems done
during that period ... as work-arounds to significant os/360 problems
... including CICS ... addressing both the enormous pathlength overhead
of many os/360 services and things like storage fragmentation. Other
trivia drift ... the Univ. library had gotten an ONR grant to do an
online catalog ... some of the money was used to get a 2321 datacell.
The effort was also tagged to be one of the original CICS product
betatest sites ... and I was tasked to support/debug CICS for the
project. misc. past posts mentioning CICS (&/or BDAM)
http://www.garlic.com/~lynn/submain.html#cics

For other drift ... later I got to know John Boyd and sponsored his
briefings at IBM. His biographies mention that Boyd did a stint in
command of "spook base" (about the time I was at Boeing) including a
comment that it was a $2.5B "windfall" for IBM (over $17B in today's
dollars) .... nearly an order of magnitude more than the renton
datacenter.

old description of spook base, gone 404 ... but lives on at wayback
machine
http://web.archive.org/web/20030212092342/http://home.att.net/~c.jeppeson/igloo_white.html

past Boyd posts & URL references from around the web
http://www.garlic.com/~lynn/subboyd.html

-- 
virtualization experience starting Jan1968, online at home since Mar1970

----------------------------------------------------------------------
For IBM-MAIN subscribe / signoff / archive access instructions,
send email to [email protected] with the message: INFO IBM-MAIN
