[email protected] (Paul Gilmartin) writes:
> ATTACH/DETACH appeared contemporaneously with TSO!?  I'm astonished!
> I'd have guessed they were much older, perhaps even aboriginal OS/360.
> Was there no multiprocessing mechanism older than TSO?  RYO, I suppose.
> That's what I understand JES and CICS (others?) do.

attach/detach predated tso ... but TCB overhead (and other os/360
services) was/is really heavyweight, and so most subsystems attempted to
use as little of os/360 as possible (not just TCBs)

hasp ran as its own subsystem. at univ, i hacked hasp for mvt 15/16
(joint release because 15 was slipping) to put in 2741/tty terminal
support and wrote an editor supporting the cp67/cms syntax (a complete
rewrite since the environments were so different) ... as an enhanced
CRJE (thought it was much better than the later tso) ... also removed
unneeded code (including 2780 support) in hasp to reduce the real storage
footprint. misc. past posts mentioning HASP (and hasp&jes2 networking)
http://www.garlic.com/~lynn/submain.html#hasp

CICS ran as its own single task ... doing everything it could to avoid
os/360 services ... because they were too heavyweight with too much
overhead. the univ. library got an ONR grant for an online catalog ...
and used part of the money to buy a 2321 datacell. the project was also
selected as betatest site for the original cics product ... and i got
tasked with cics support/debug. running as a single os/360 task (and doing its own
scheduling internally) was one of the reasons why CICS was so long in
coming out with multiprocessor support (it ran its own internal
multithreaded scheduler ... but a single TCB would only dispatch on a
single cpu). in the early part of this century, i knew of installations
that ran over a hundred CICS instances as a workaround for the lack of
multiprocessor support. cics would also do all its opens at startup and simulate its
own open/close. misc. past posts mentioning cics (&/or bdam)
http://www.garlic.com/~lynn/submain.html#cics
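The single-TCB pattern described above can be sketched with a toy cooperative scheduler (this is not CICS code, just an illustration under my own assumptions): many logical "transactions" are multiplexed inside one OS task, so everything is serialized on a single dispatching unit, the analogue of one TCB only dispatching on one cpu.

```python
# Toy sketch of a single-task cooperative scheduler (not actual CICS
# internals): each "transaction" is a generator that yields control back
# to the dispatcher between units of work, so all transactions share one
# OS task -- and therefore, like a single TCB, only one CPU.
from collections import deque

def transaction(name, steps):
    # Simulated transaction: performs 'steps' units of work,
    # yielding control to the dispatcher after each unit.
    for i in range(steps):
        yield f"{name} step {i+1}"

def dispatcher(tasks):
    # Round-robin dispatch: run each ready task for one unit of work,
    # requeue it if it is still runnable, drop it when it completes.
    ready = deque(tasks)
    log = []
    while ready:
        task = ready.popleft()
        try:
            log.append(next(task))
            ready.append(task)      # still runnable: back of the queue
        except StopIteration:
            pass                    # transaction complete
    return log

log = dispatcher([transaction("T1", 2), transaction("T2", 2)])
print(log)   # interleaved: ['T1 step 1', 'T2 step 1', 'T1 step 2', 'T2 step 2']
```

Running many such schedulers as separate OS tasks (the hundred-instance workaround mentioned above) is the only way this design uses multiple cpus.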

cics history gone 404, but lives on at wayback machine
http://web.archive.org/web/20080123061613/http://www.yelavich.com/history/toc.htm
cics multiprocessor exploitation 2004
http://web.archive.org/web/20090107054344/http://www.yelavich.com/history/ev200402.htm

univ had a 709/1401 where the 1401 was unit-record front-end for the 709
... with tapes manually moved between 1401 & 709 drives. the 709 ran
tape-to-tape and a student fortran job typically took under a second
elapsed time. the univ. was convinced to buy a 360/67 (for tss/360) as
replacement. tss/360 never quite made it to production, so the machine
ran as a 360/65 most of the time.

initial transition to 360/65 with os/360 ... the student fortran jobs
were taking over a minute (3-step fortran g: compile, link-edit, and
go). introduction of hasp got it down close to 30 seconds. I started
doing careful reordered stage2 sysgens with release 11 (optimizing arm
seek and pds directory placement) which got nearly a 3-fold
improvement. part of
presentation at the 1968 FALL SHARE (which also includes some numbers
for a major rewrite I had done for cp67)
http://www.garlic.com/~lynn/94.html#18

aka ... nearly all the elapsed time was job scheduler overhead. Leading
up to release 11 ... the IBM senior SE on the account was writing a
single-step monitor that used *attach* (trying to do compile, link-edit,
and execute w/o having to go through job-step scheduling). However, in
that timeframe we installed WATFOR ... which would compile and execute
multiple student fortran jobs in a single step. On the 360/65 w/hasp it
ran about 20,000 "cards" per minute.
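The single-step monitor idea can be sketched as a driver that runs all three phases itself under one job step, paying scheduler overhead once instead of three times (a sketch only, with stand-in child processes; the actual monitor used os/360 *attach* and real compiler/linker phases):

```python
# Sketch (hypothetical, not the SE's actual monitor): one driver runs
# compile, link-edit, and go as child processes under a single step,
# rather than as three separately scheduled job steps.
import subprocess
import sys

def phase(label):
    # Trivial child process standing in for a real phase
    # (fortran g compile, link-edit, or execution of the load module).
    return [sys.executable, "-c", f"print('{label} done')"]

def run_job(phases):
    # Run the phases in order under this one driver; stop the job
    # on the first phase that fails (nonzero return code).
    for cmd in phases:
        if subprocess.run(cmd).returncode != 0:
            return 1
    return 0

rc = run_job([phase("compile"), phase("linkedit"), phase("go")])
print("job rc =", rc)   # job rc = 0
```

The point is structural: per-phase work is dispatched by the driver itself, so the heavyweight job-step scheduling path is entered only once per job.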

vanilla os/360 w/hasp ran 3-step student fortran jobs in about 35
seconds elapsed

a one-step monitor (using *attach*) could have got it down to around 12
seconds elapsed

watfor one-step could do 100 jobs in about 20 seconds elapsed

on my custom hand-built system that could be further reduced to about
12 seconds for 100 jobs with WATFOR.

aka the job scheduler was enormously disk-arm intensive ... along with
heavy use of multi-load transient SVCs ... brought in 2k at a time.

-- 
virtualization experience starting Jan1968, online at home since Mar1970

----------------------------------------------------------------------
For IBM-MAIN subscribe / signoff / archive access instructions,
send email to [email protected] with the message: INFO IBM-MAIN
