It turns out that we do in fact have RSB safety here, but not for obvious reasons.
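To illustrate the hazard being reasoned about: resuming a previously
saved, deeper stack means executing RETs whose matching CALLs happened
before the switch, which is the direction that can underflow the RSB.
The userspace sketch below is not Xen code; it substitutes POSIX
ucontext for the hand-rolled longjmp in xen/common/wait.c, but the
CALL/RET accounting has the same shape.

/*
 * Illustrative sketch only, not Xen code: POSIX ucontext stands in
 * for the hand-rolled longjmp.  The point is the CALL/RET
 * accounting, not the mechanism.
 */
#include <stdio.h>
#include <ucontext.h>

static ucontext_t shallow, deep;
static char deep_stack[64 * 1024];

static void deep_fn(void)
{
    /* Down in the deeper call tree; switch back out... */
    swapcontext(&deep, &shallow);

    /*
     * ...and once resumed here, we RET back up through frames whose
     * CALLs executed before the switch.  More RETs than CALLs on
     * this side: if the RSB was emptied in the meantime, these RETs
     * underflow it.
     */
    puts("resumed in the deeper call tree");
}

int main(void)
{
    getcontext(&deep);
    deep.uc_stack.ss_sp = deep_stack;
    deep.uc_stack.ss_size = sizeof(deep_stack);
    deep.uc_link = &shallow;
    makecontext(&deep, deep_fn, 0);

    swapcontext(&shallow, &deep); /* enter the deeper call tree */
    swapcontext(&shallow, &deep); /* resume it: the jump in question */
    return 0;
}

In check_wakeup_from_wait() the window between those two switches is
the path through the scheduler, and as the comment below argues, that
path leaves the RSB unbalanced in the safe direction.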
Signed-off-by: Andrew Cooper <andrew.coop...@citrix.com>
---
CC: Jan Beulich <jbeul...@suse.com>
CC: Roger Pau Monné <roger....@citrix.com>
CC: Wei Liu <w...@xen.org>
---
 xen/common/wait.c | 20 ++++++++++++++++++++
 1 file changed, 20 insertions(+)

diff --git a/xen/common/wait.c b/xen/common/wait.c
index e45345ede704..1a3b348a383a 100644
--- a/xen/common/wait.c
+++ b/xen/common/wait.c
@@ -210,6 +210,26 @@ void check_wakeup_from_wait(void)
     }
 
     /*
+     * We are about to jump into a deeper call tree.  In principle, this risks
+     * executing more RET than CALL instructions, and underflowing the RSB.
+     *
+     * However, we are pinned to the same CPU as previously.  Therefore,
+     * either:
+     *
+     * 1) We've scheduled another vCPU in the meantime, and the context
+     *    switch path has (by default) issued IBPB, which flushes the RSB, or
+     *
+     * 2) We're still in the same context.  Returning back to the deeper
+     *    call tree is resuming the execution path we left, and remains
+     *    balanced as far as that logic is concerned.
+     *
+     * In fact, the path through the scheduler will execute more CALL than
+     * RET instructions, making the RSB unbalanced in the safe direction.
+     *
+     * Therefore, no actions are necessary here to maintain RSB safety.
+     */
+
+    /*
      * Hand-rolled longjmp().
      *
      * check_wakeup_from_wait() is always called with a shallow stack,
-- 
2.11.0