On 27.10.2023 at 16:45, Johannes Berg wrote:
On Fri, 2023-10-27 at 16:05 +0200, Benjamin Beichler wrote:
- besides this, when you look into local_irq_save for um, it does not
really deactivate the interrupts, but delays their processing and adds
a memory barrier. I think that could be one of the important
consequences, as changes to the event list are forced to be propagated.
But this is only a guess :-D
No, it *does* disable interrupts from Linux's POV. The memory barrier
is just there to make _that_ actually visible "immediately". Yes it
doesn't actually disable the *signal* underneath, but that doesn't
really mean anything? Once you get the signal, nothing happens if it's
disabled.
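(To make sure we mean the same thing, here is a minimal userspace sketch
of how I picture that mechanism -- all names are made up, it is of course
not the actual arch/um code:)

/* "Disabling interrupts" only clears a flag and issues a barrier; the
 * host signal still arrives, but the handler merely marks the work
 * pending, and it runs once interrupts are "re-enabled".
 */
#include <signal.h>
#include <stdio.h>
#include <unistd.h>

static volatile sig_atomic_t irqs_enabled = 1;
static volatile sig_atomic_t irq_pending;

static void irq_signal_handler(int sig)
{
	(void)sig;
	if (!irqs_enabled) {
		irq_pending = 1;	/* nothing happens now: it is "disabled" */
		return;
	}
	write(STDOUT_FILENO, "irq handled immediately\n", 24);
}

static void local_irq_disable_sketch(void)
{
	irqs_enabled = 0;
	__sync_synchronize();		/* the "memory barrier" part */
}

static void local_irq_enable_sketch(void)
{
	irqs_enabled = 1;
	__sync_synchronize();
	if (irq_pending) {		/* deliver the deferred "interrupt" */
		irq_pending = 0;
		printf("deferred irq handled on enable\n");
	}
}

int main(void)
{
	signal(SIGUSR1, irq_signal_handler);

	local_irq_disable_sketch();
	raise(SIGUSR1);			/* signal fires, work is only deferred */
	local_irq_enable_sketch();	/* deferred work runs here */
	return 0;
}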
Maybe I was a bit sloppy here. What I meant was that a signal can
still interrupt the code even while interrupts are disabled, resulting
in all the nice preemption consequences. On a real system only an NMI
can do that?
And in particular, the SIGIO handler intentionally keeps calling the
time travel handlers while interrupts are disabled. The drivers'
interrupt handlers are called later, but the time travel handlers tend
to change the event list.
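(Again just a sketch with made-up names, not the real code, to show the
situation I worry about: the time travel bookkeeping still runs from the
signal handler and may touch the event list, while only the device IRQ
work is deferred:)

struct tt_event {
	unsigned long long time;	/* simulated timestamp */
	struct tt_event *next;
};

static struct tt_event *tt_event_list;		/* hypothetical event list */
static volatile int irqs_enabled_sketch = 1;
static volatile int sigio_pending_sketch;

/* hypothetical: the time-travel handling, which may add or remove
 * entries in tt_event_list */
static void tt_handle_external_message_sketch(void)
{
	if (tt_event_list)			/* e.g. pop an event that became due */
		tt_event_list = tt_event_list->next;
}

static void sigio_handler_sketch(int sig)
{
	(void)sig;

	/* runs even with "interrupts" disabled ... */
	tt_handle_external_message_sketch();

	/* ... while the device IRQ handlers are only marked pending */
	if (!irqs_enabled_sketch) {
		sigio_pending_sketch = 1;
		return;
	}
	/* device IRQ handlers would run here */
}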
In UML, the kernel binary is kind of both hypervisor and guest kernel,
so you can think of the low-level code here as being part of the
hypervisor, so that blocking the signal from actually calling out into
the guest is like blocking the PIC from sending to the CPU. Obviously
the device(s) still send(s) the interrupt(s), but they don't go into
the "guest".
- I'm also not really convinced that all accesses to the current time
should be delayed. I made a patch with a heuristic that only delays
the time read if it is read from userspace multiple times in a row.
How would you even detect that though? And on the other hand - why
not? There's always a cost to things in a real system, it's not free
to access the current time? Maybe it's faster in a real system with
VDSO though.
Since there are no "busy" loops in the kernel itself, and only in a
few (badly behaving) userspace programs, I think that helps to reduce
the simulation inaccuracy for well-behaved userspace programs that
only read the time once.
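(Very roughly, the heuristic looks like this -- hypothetical names and
numbers, not the actual patch:)

/* Only charge virtual time for a gettime call when the same task has
 * read the clock several times in a row without doing anything else in
 * between, i.e. when it looks like a busy loop.
 */
#define TT_BUSY_READ_THRESHOLD	3		/* hypothetical threshold */

static unsigned long long tt_virtual_time_ns;	/* simulated clock */
static unsigned int tt_consecutive_reads;	/* reset on other activity */

static unsigned long long tt_read_time_sketch(void)
{
	if (++tt_consecutive_reads > TT_BUSY_READ_THRESHOLD)
		tt_virtual_time_ns += 1000;	/* delay only busy readers */

	return tt_virtual_time_ns;
}

/* any other simulated activity resets the counter */
static void tt_other_activity_sketch(void)
{
	tt_consecutive_reads = 0;
}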
So I'm not sure I'd call it an inaccuracy? It's ... perfectly accurate
in the model that we're implementing? Maybe you don't like the model ;-)
I think I like the model too much, it fits so nicely into common DES
primitives. :-D We have unlimited processing power (i.e. every
computation has zero execution time), so why delay a program that only
fetches a timestamp from the system once in a while?
Moreover, since I want a really deterministic model, I expect that if
I send a message at timestamp t1, my program should produce the answer
at t1 and not sporadically at t1+delta, just because the file system
driver took a timestamp in some background task.
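(To make the DES point concrete, a toy event loop, nothing UML-specific
and all names made up: processing costs zero simulated time, so the
answer to a message at t1 is produced at exactly t1:)

#include <stdio.h>

struct des_event {
	unsigned long long time;
	const char *what;
};

static void des_run_sketch(void)
{
	struct des_event events[] = {
		{ 100, "message arrives"  },
		{ 100, "answer generated" },	/* same timestamp: zero cost */
	};
	unsigned long long now = 0;
	unsigned int i;

	for (i = 0; i < sizeof(events) / sizeof(events[0]); i++) {
		now = events[i].time;		/* jump, don't wait */
		printf("t=%llu: %s\n", now, events[i].what);
	}
}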
In this case, irqs_disabled more or less only indicates that the
event list could be manipulated, and therefore the update_time call is
a bad idea.
Which is kind of what I was thinking about earlier, but Vincent says
it's not _just_ for that.
Mhh, I think Vincent only wondered whether the recursion statement
was right.
We probably should just add a spinlock for this though - right now
none of this is good for eventual SMP support, and it's much easier to
reason about when we have a lock?
Mhh, I think that would make sense. As I said, we also had problems
in ext-mode, which disappeared after I introduced
local_irq_save(flags); around all operations that modify the event
list. I think a spinlock would express the intention more clearly. For
SMP we may need some more fine-grained scheme (or RCU-based locking)
to prevent deadlocks, but again I'm not sure.
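(Something along these lines is what I have in mind -- the event list
and its helpers are hypothetical here, only spin_lock_irqsave() and
friends are the real kernel API:)

#include <linux/spinlock.h>
#include <linux/list.h>

static DEFINE_SPINLOCK(tt_event_lock);		/* hypothetical lock */
static LIST_HEAD(tt_event_list);		/* hypothetical event list */

struct tt_event {
	unsigned long long time;
	struct list_head list;
};

/* take the lock around every event-list modification instead of
 * relying on irqs_disabled(); also a first step towards SMP */
static void tt_add_event_sketch(struct tt_event *e)
{
	unsigned long flags;

	spin_lock_irqsave(&tt_event_lock, flags);
	list_add_tail(&e->list, &tt_event_list);
	spin_unlock_irqrestore(&tt_event_lock, flags);
}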
kind regards
Benjamin
_______________________________________________
linux-um mailing list
linux-um@lists.infradead.org
http://lists.infradead.org/mailman/listinfo/linux-um