Hi,
On 4/1/19 5:00 PM, Juergen Gross wrote:
On 01/04/2019 17:15, Julien Grall wrote:
Hi,
On 4/1/19 3:23 PM, Juergen Gross wrote:
On 01/04/2019 16:01, Julien Grall wrote:
Hi,
On 4/1/19 2:33 PM, Juergen Gross wrote:
On 01/04/2019 15:21, Julien Grall wrote:
Hi Juergen,
On 4/1/19 11:37 AM, Juergen Gross wrote:
On 01/04/2019 12:29, Julien Grall wrote:
Hi,
On 4/1/19 10:40 AM, Juergen Gross wrote:
On 01/04/2019 11:21, Julien Grall wrote:
Hi,
On 3/29/19 3:08 PM, Juergen Gross wrote:
cpu_disable_scheduler() is being called from __cpu_disable() today.
There is no need to execute it on the cpu just being disabled, so use
the CPU_DEAD case of the cpu notifier chain. Moving the call out of
stop_machine() context is fine, as we just need to hold the domain RCU
lock and need the scheduler percpu data to be still allocated.
Add another hook for CPU_DOWN_PREPARE to bail out early in case
cpu_disable_scheduler() would fail. This will avoid crashes in rare
cases for cpu hotplug or suspend.
While at it remove a superfluous smp_mb() in the ARM __cpu_disable()
incarnation.
It is not obvious why the smp_mb() is superfluous. Can you please
provide more details on why it is not necessary?
cpumask_clear_cpu() should already have the needed semantics, no?
It is based on clear_bit() which is defined to be atomic.
Atomicity does not mean the store/load cannot be re-ordered by the CPU;
you would need a barrier to prevent re-ordering.
cpumask_clear_cpu() and clear_bit() do not contain any barrier, so
stores/loads can be re-ordered.
Uh, couldn't this lead to problems, e.g. in vcpu_block()? The comment
there suggests the sequence of setting the blocked bit and doing the
test is important for avoiding a race...
Hmmm... looking at the other usage (such as in do_poll), on non-x86
platforms there is an smp_mb() between set_bit(...) and checking the
event, with a similar comment above it.
I don't know the scheduler code well enough to say why the barrier is
needed. But for consistency, it seems to me the smp_mb() would be
required in vcpu_block() as well.
Also, it is quite interesting that the barrier is not present for x86.
If I understand correctly the comment on top of set_bit/clear_bit, it
could be re-ordered as well. So we seem to be relying on the underlying
implementation of set_bit/clear_bit.
On x86 reads and writes can't be reordered with locked operations (SDM
Vol 3 8.2.2). So the barrier is really not needed AFAIU.
include/asm-x86/bitops.h:
* clear_bit() is atomic and may not be reordered.
I interpreted the "may not" as: you should not rely on the re-ordering
not happening.
In places where re-ordering must not happen (e.g. test_and_set_bit) we
use the wording "cannot".
The SDM is very clear here:
"Reads or writes cannot be reordered with I/O instructions, locked
instructions, or serializing instructions."
That is what the specification says, not the intended semantics. Helpers
may have more relaxed semantics to accommodate other architectures.
I believe this is the case here: the semantics are more relaxed than the
x86 implementation, so an architecture with a more relaxed memory
ordering does not have to impose a barrier.
Wouldn't it make sense to try to unify the semantics? Maybe by
introducing a new helper?
Or by adding the barrier on ARM for the atomic operations?
On what basis? Why should we impact every user to fix a bug in
the scheduler?
I'm assuming there are more places like this either in common code or
code copied verbatim from arch/x86 to arch/arm with that problem.
Adding it in the *_set helpers is just the poor man's fix. If we do
that, it is going to stick for a long time and impact performance.
Instead we should fix the scheduler code (and hopefully only that) where
the ordering is necessary.
I believe that should be a patch on its own. Are you doing that?
I will try to have a look tomorrow.
So I take it you'd rather let me add that smp_mb() in __cpu_disable()
again.
Removing/adding barriers should be accompanied by proper justification
in the commit message. Additionally, new barriers should have a comment
explaining what they are for.
In this case, I don't know what the correct answer is. It feels to me we
should keep it until we have a better understanding of this code.
Okay.
But then it raises the question whether a barrier would also be
necessary after calling cpu_disable_scheduler().
That one is quite easy: all paths of cpu_disable_scheduler() are doing
an unlock operation at the end, so the barrier is already there.
Oh, nothing to worry about then :). Thank you for looking at it.
Cheers,
--
Julien Grall
_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xenproject.org
https://lists.xenproject.org/mailman/listinfo/xen-devel