On Fri, 2023-10-20 at 14:06 +0200, Benjamin Beichler wrote:
On 20.10.2023 at 13:39, Johannes Berg wrote:
On Fri, 2023-10-20 at 12:38 +0200, Benjamin Beichler wrote:
Can you explain why a time travel handler for stdin may be bad? It
sounds like you want to avoid it, but I see no immediate problem.
I need to read the thread, but this one's easy ;-)
The thing is that on such a channel you don't send an ACK when you've
seen that there's something to handle. As a result, the sender will
continue running while you're trying to request a new schedule entry
from the controller. It may then run past your new schedule entry
because it didn't know about it yet (this would likely bring down the
controller and crash the simulation), or the relative order of the two
entries is undefined, in the sense that it depends on the process
scheduling of the host.
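To illustrate the role of the ACK, here is a minimal Python sketch (all names hypothetical; the real protocol is the kernel's um_timetravel message protocol, which this toy only loosely imitates). With an ACK, the sender cannot look at the schedule again until the receiver has filed its schedule request, so the ordering is well-defined:

```python
# Toy model of an ACKed channel (hypothetical names, not the real API).
controller = []   # schedule entries the controller knows about

def deliver_with_ack(event_time):
    # Receiver side: first request a schedule entry from the
    # controller, and only then return (i.e. send the ACK).
    controller.append((event_time, "receiver"))
    return "ack"

def sender(event_time, free_until):
    # Sender blocks here until the ACK arrives; by then the
    # receiver's schedule request is guaranteed to be visible.
    deliver_with_ack(event_time)
    # So this schedule check cannot miss the new entry:
    return min([t for t, _ in controller] + [free_until])

print(sender(1000, 10000))   # -> 1000: sender won't run past the entry
```

Without the ACK, the `min(...)` check could happen before the receiver's entry lands in `controller`, which is exactly the race described above.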
Sorry, but I did not get this. What may run past the schedule entry? Is
your assumption that the "thing" connected to stdin is always totally
unaware of the time travel mode?
No, I'm not assuming that; if that's done, all bets are off anyway. I'm
assuming that both are connected to the controller.
When our (to be published) simulation sends something on serial lines (I
think it does not matter whether it is a socket or a pipe), we expect
that the UML instance needs to run until it returns to the idle/wait
state before the simulation time is advanced. Since the basic model of
the time travel mode is that you have an infinite amount of processing
power, the interrupt always needs to be handled at the current time.
Yes, but you need to schedule for the interrupt, and you don't
necessarily know what 'current time' is at interrupt time.
So let's say you have "something" that's scheduled to run at times
- 1000
- 2000
- 3000
and free-until is 10000 or something high.
Now it sends a message to Linux stdio at 1000.
But it doesn't have a way to wait for an ack. So it continues, checks
the schedule and free-until, and can advance time to 2000, since it
doesn't yet know Linux requested time to run at 1000.
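The steps above can be sketched as a short Python toy model (names are hypothetical, not the real controller protocol); the point is that the sender computes its next wakeup before Linux's request reaches the controller:

```python
# Toy model of the race on an un-ACKed channel (hypothetical names).
other_schedule = [1000, 2000, 3000]   # the "other" component's entries
free_until = 10000                    # controller's free-until grant
controller_entries = []               # requests the controller has seen

# The other component runs its entry at 1000 and writes to Linux's
# stdin; there is no ACK on this channel, so it does not wait.
now = other_schedule.pop(0)           # now == 1000

# It immediately computes its next wakeup from what it knows:
next_time = min(other_schedule + [free_until])   # -> 2000

# Only now does Linux's interrupt-handling request reach the controller:
controller_entries.append((1000, "linux"))

# The sender has advanced to 2000, past an entry at 1000 it never saw.
print(next_time, controller_entries[0])   # -> 2000 (1000, 'linux')
```

Depending on host scheduling, the append could also have happened before the `min(...)` check, which is why the relative order of the two entries is undefined.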
Do I understand correctly that you anticipate that all parts of the
simulation may run at different paces and the controller is only needed
for synchronization?
My mental model is that simulation time advances in the controller only
happen if all parts of the simulation have reached the current
simulation time and are in the event-processed/idle/wait state.
Therefore, the thing connected to stdio will not advance to some other
time if some other component has outstanding events (i.e., an interrupt
in this case). Of course, free-running is problematic in this mental
model, but in that mode you willingly sacrifice the precision of events
for performance.
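That barrier-style model can be sketched in a few lines of Python (hypothetical names; this is the mental model under discussion, not any existing controller implementation): time only moves once every component is idle at the current time.

```python
# Toy model of the barrier-style controller (hypothetical names).
def advance(components, now):
    """Return the next simulation time, or `now` if someone is busy."""
    # If any component still has work at the current time, don't move.
    if not all(c["state"] == "idle" for c in components):
        return now
    # Otherwise jump to the earliest future event across all components.
    pending = [t for c in components for t in c["events"] if t > now]
    return min(pending) if pending else now

comps = [
    {"state": "idle", "events": [2000, 3000]},   # the "other" side
    {"state": "busy", "events": [1000]},         # Linux handling the IRQ
]
print(advance(comps, 1000))   # -> 1000: time waits for the interrupt

comps[1]["state"] = "idle"
comps[1]["events"] = []
print(advance(comps, 1000))   # -> 2000: everyone idle, time advances
```

In this model the race above cannot occur, but every time step costs a full synchronization round, which is what the free-until/free-running optimization is meant to avoid.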
This gives you something unpredictable: it depends on the host
scheduling which ran first (whether Linux started next or the other
continued, or both, and it gets messed up).
Maybe my mental model only holds for "smaller" amounts of data (maybe
one page or something?), or outside the free-running mode, but I'm not
completely convinced. :-D
Nah, it doesn't really matter how much data there is.
Maybe we need to define, a bit more formally, what the (designed)
processing model of interrupts in time travel mode is.
That's probably a separate Master's thesis ;-)
It could, but some basic definition for a common understanding might help.
However, would you like to have a look at my current state of time
travel changes on GitHub, or do you think an RFC to the mailing list is
better? I like GitHub for a quick dive into patches, but your whole
workflow is maybe different ;-)
_______________________________________________
linux-um mailing list
linux-um@lists.infradead.org
http://lists.infradead.org/mailman/listinfo/linux-um