> ... there is plenty of software that assumes that an interrupt does not 
> happen before at least one instruction has been executed after the previous 
> interrupt from the same device, for example.

On the IBM 1130 (a different machine of course) we found a case where a driver 
expected to execute quite a few instructions ... many hundreds ... before an 
interrupt could occur on real hardware, while zeroing or copying a buffer for 
instance. So it might be good to do what SIMH does and make the delay between 
initiating an I/O operation and signaling its completion an adjustable 
parameter. Start with a realistically large number and see if reducing it 
causes failures. I would guess there are fewer dependencies like this in 
drivers for hardware whose latency was random (e.g. disk seek and rotation), 
and a bigger risk that authors assumed and exploited the delay time on 
operations that had a fixed cycle time (block-to-block or sector-to-sector 
times, or punched card operations).
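
For concreteness, here is a rough sketch of how that adjustable delay
usually looks in a SIMH device (not taken from any real simulator; the
device name, default value, and interrupt call are placeholders). The
I/O start routine calls sim_activate() to schedule the unit's service
routine some number of simulated instructions in the future, and the
unit's wait value is the knob you turn down until things break:

    /* Sketch of an adjustable completion delay, SIMH UNIT/service-routine
       style.  Names and numbers here are made up for illustration. */

    #include "sim_defs.h"

    t_stat dev_svc (UNIT *uptr);

    /* The second initializer is the unit's wait time: start realistically
       large, then lower it and see when the simulated software breaks. */
    UNIT dev_unit = { UDATA (&dev_svc, 0, 0), 1000 };

    /* Called when the simulated program initiates an I/O operation:
       instead of completing at once, schedule completion dev_unit.wait
       simulated instructions in the future. */
    t_stat dev_start_io (void)
    {
        sim_activate (&dev_unit, dev_unit.wait);
        return SCPE_OK;
    }

    /* Runs when the scheduled delay expires; this is where the device
       would post its completion interrupt to the CPU. */
    t_stat dev_svc (UNIT *uptr)
    {
        /* set_int (INT_DEV);    device-specific interrupt request */
        return SCPE_OK;
    }

Many real SIMH devices expose the wait value as a device register, so it
can be changed from the sim> prompt with DEPOSIT without rebuilding the
simulator.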
