Event handling currently works for all run cores[1] except JIT.
The JIT core can't use the schemes described below, but we could:
1) explicitly insert checks for pending events, either
1a) everywhere, or
1b) in the places described below under [1] c)
2) Patch the native code at these places with e.g. an int3 instruction (which raises SIGTRAP, the debugger hook) and catch the trap; there's a sketch of this after the list. Running the event handler (sub) from there should be safe, as we are in a consistent state in the "run loop".
3) more ideas?
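
For 2), here's a rough sketch of how the int3 patching could look (plain C, x86, POSIX; the nop-slot scheme and all names are illustrative, not Parrot code). If the JIT emits a one-byte nop at each potential check site, arming a check is just overwriting that byte with int3 (0xCC). Since int3 leaves the program counter just past the patched byte, the SIGTRAP handler can drain the queue, restore the nop, and return, and execution carries on cleanly:

    /* Sketch only: "JIT" a two-byte function (nop; ret), then arm an
     * event check by patching the nop with int3 and catching SIGTRAP. */
    #include <signal.h>
    #include <sys/mman.h>
    #include <unistd.h>

    static volatile unsigned char *patch_site;

    static void trap_handler(int sig)
    {
        (void)sig;
        /* We're at a known-safe point in the run loop: handle events. */
        write(STDOUT_FILENO, "draining event queue\n", 21);
        *patch_site = 0x90;              /* restore the nop */
    }                                    /* returning resumes after the slot */

    int main(void)
    {
        unsigned char *code = mmap(NULL, 4096,
                                   PROT_READ | PROT_WRITE | PROT_EXEC,
                                   MAP_PRIVATE | MAP_ANONYMOUS, -1, 0);
        if (code == MAP_FAILED)
            return 1;
        code[0] = 0x90;                  /* nop: the patchable check site */
        code[1] = 0xC3;                  /* ret */
        void (*jitted)(void) = (void (*)(void))code;

        signal(SIGTRAP, trap_handler);

        jitted();                        /* nothing pending: plain nop */

        patch_site = code;
        code[0] = 0xCC;                  /* int3: arm the event check */
        jitted();                        /* traps, handles events, resumes */
        return 0;
    }

This assumes a platform that allows writable+executable mappings, and a real version would have to serialize the patching against the running code (and against other threads).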
What I'd planned for events is a bit less responsive than the system you've put together for the non-JIT case, but I think it'll be OK generally speaking.
Ops fall into three categories:
1) Those that don't check for events
2) Those that explicitly check for events
3) Those that implicitly check for events
Ops like "add_i_i_i" are in category one. No event checking, people can deal.
Ops like spin_in_event_loop (or whatever we call it) or checkevent are in category two. They check events because, well, that's what they're supposed to do. Compilers should emit these with some frequency, though it's arguable how frequent that ought to be. (There's a sketch of one below.)
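
To make category two concrete, here's a toy sketch of what a checkevent op body boils down to (hypothetical names, not the real run loop):

    #include <stdio.h>

    typedef struct { int pending; } Interp;    /* toy interpreter state */

    static int  event_queue_nonempty(Interp *i) { return i->pending > 0; }
    static void dispatch_next_event(Interp *i)
    {
        printf("dispatching event %d\n", i->pending--);
    }

    /* The body a compiler-emitted checkevent op would run: drain the
     * queue, then fall through to the next op. */
    static void op_checkevent(Interp *interp)
    {
        while (event_queue_nonempty(interp))
            dispatch_next_event(interp);
    }

    int main(void)
    {
        Interp interp = { 3 };          /* three events pending */
        op_checkevent(&interp);         /* prints 3, 2, 1 */
        return 0;
    }

A compiler would just sprinkle this op through straight-line code so long-running computations stay responsive.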
Ops in the third category are a bit trickier. Anything that sleeps or waits should spin on the event queue -- arguably the notice of whatever it's waiting for (a timeout, something completing) should come in *on* the event queue, and the op ought to just dig through the events until it gets to the one it was waiting for. Most of the event checking will likely be done in these ops, since programs do a lot of IO.
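
Here's the same idea for a category-three op, again with made-up names: a sleep op whose wakeup arrives *on* the event queue, so it handles everyone else's events while it waits for its own:

    #include <stdio.h>

    typedef enum { EV_IO, EV_TIMER } EventType;
    typedef struct { EventType type; } Event;

    /* Canned queue standing in for one fed by a timer/IO thread. */
    static Event queue[] = { { EV_IO }, { EV_IO }, { EV_TIMER } };
    static int head = 0;

    static Event *wait_for_event(void) { return &queue[head++]; }
    static void  dispatch_event(Event *e)
    {
        printf("handling someone else's event (type %d)\n", e->type);
    }

    /* A sleep op blocks on the queue and digs through events until it
     * hits its own wakeup -- most event handling happens right here. */
    static void op_sleep(void)
    {
        for (;;) {
            Event *e = wait_for_event();
            if (e->type == EV_TIMER)     /* our wakeup: done sleeping */
                return;
            dispatch_event(e);
        }
    }

    int main(void)
    {
        op_sleep();                      /* handles two IO events, then wakes */
        return 0;
    }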
The big thing to ponder is which ops ought to go in category three. I can see the various invoke ops doing it, but beyond that I'm up in the air. This is something I'd like to use the ops modifiers on -- we can throw in "checkevent" or something on the ops that should automatically check events and see how performance looks; there's a sketch of that below.
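
As a strawman for the modifier idea, the metadata could be as simple as a per-op flag the dispatcher consults before running the op (all names hypothetical):

    #include <stdio.h>

    enum { OP_ADD, OP_INVOKE, OP_END, NUM_OPS };
    #define OPF_CHECKEVENT 0x01          /* the "checkevent" ops modifier */

    static const unsigned op_flags[NUM_OPS] = {
        [OP_ADD]    = 0,                 /* category one: never checks */
        [OP_INVOKE] = OPF_CHECKEVENT,    /* flagged to check implicitly */
        [OP_END]    = 0,
    };

    static int events_pending = 1;
    static void drain_events(void) { puts("checking events"); events_pending = 0; }

    static void run(const int *pc)
    {
        for (;;) {
            int op = *pc++;
            if ((op_flags[op] & OPF_CHECKEVENT) && events_pending)
                drain_events();          /* implicit check before the op */
            switch (op) {
            case OP_ADD:    puts("add_i_i_i"); break;
            case OP_INVOKE: puts("invoke");    break;
            case OP_END:    return;
            }
        }
    }

    int main(void)
    {
        const int program[] = { OP_ADD, OP_INVOKE, OP_ADD, OP_END };
        run(program);
        return 0;
    }

Flipping the flag on and off per op would make the performance experiment easy to run.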
--
Dan
--------------------------------------"it's like this"------------------- Dan Sugalski even samurai [EMAIL PROTECTED] have teddy bears and even teddy bears get drunk