Cc: Aleksandar Markovic <[email protected]>
Cc: Alexander Graf <[email protected]>
Cc: Alistair Francis <[email protected]>
Cc: Andrzej Zaborowski <[email protected]>
Cc: Anthony Green <[email protected]>
Cc: Artyom Tarasenko <[email protected]>
Cc: Aurelien Jarno <[email protected]>
Cc: Bastian Koppelmann <[email protected]>
Cc: Christian Borntraeger <[email protected]>
Cc: Chris Wulff <[email protected]>
Cc: Cornelia Huck <[email protected]>
Cc: David Gibson <[email protected]>
Cc: David Hildenbrand <[email protected]>
Cc: "Edgar E. Iglesias" <[email protected]>
Cc: Eduardo Habkost <[email protected]>
Cc: Fabien Chouteau <[email protected]>
Cc: Guan Xuetao <[email protected]>
Cc: James Hogan <[email protected]>
Cc: Laurent Vivier <[email protected]>
Cc: Marek Vasut <[email protected]>
Cc: Mark Cave-Ayland <[email protected]>
Cc: Max Filippov <[email protected]>
Cc: Michael Clark <[email protected]>
Cc: Michael Walle <[email protected]>
Cc: Palmer Dabbelt <[email protected]>
Cc: Pavel Dovgalyuk <[email protected]>
Cc: Peter Crosthwaite <[email protected]>
Cc: Peter Maydell <[email protected]>
Cc: [email protected]
Cc: [email protected]
Cc: [email protected]
Cc: Richard Henderson <[email protected]>
Cc: Sagar Karandikar <[email protected]>
Cc: Stafford Horne <[email protected]>

I'm calling this series a v3 because it supersedes the two series
I previously sent about using atomics for interrupt_request:
  https://lists.gnu.org/archive/html/qemu-devel/2018-09/msg02013.html
The approach in that series cannot work reliably; using (locked) atomics
to set interrupt_request but not using (locked) atomics to read it
can lead to missed updates.

This series takes a different approach: it serializes access to many
CPUState fields, including .interrupt_request, with a per-CPU lock.

Protecting more fields of CPUState with the lock then allows us to
substitute the BQL for the per-CPU lock in many places, notably
the execution loop in cpus.c. This leads to better scalability
for MTTCG, since CPUs don't have to acquire a contended lock
(the BQL) every time they stop executing code.

Some hurdles that remain:

1. I am not happy with the shutdown path via pause_all_vcpus.
   What happens if
   (a) A CPU is added while we're calling pause_all_vcpus?
   (b) Some CPUs are trying to run exclusive work while we
       call pause_all_vcpus?
   Am I being overly paranoid here?

2. I have done very light testing with x86_64 KVM, and no
   testing with other accels (hvf, hax, whpx). check-qtest
   passes, except for an s390x test that appears to be broken
   in master -- I reported the problem here:
     https://lists.gnu.org/archive/html/qemu-devel/2018-10/msg03728.html

3. This might break record-replay. That said, a quick test with icount
   on aarch64 seems to work, but I haven't tested icount extensively.

4. Some architectures still need the BQL in cpu_has_work.
   This leads to some contortions to avoid deadlock, since
   in this series cpu_has_work is called with the CPU lock held.

5. The interrupt handling path remains with the BQL held, mostly
   because the ISAs I routinely work with need the BQL anyway
   when handling the interrupt. We can complete the pushdown
   of the BQL to .do_interrupt/.exec_interrupt later on; this
   series is already way too long.

Points (1)-(3) make this series an RFC rather than a proper patch series.
I'd appreciate feedback on this approach and/or testing.

Note that this series fixes a bug whereby cpu_has_work was
called without the BQL (from cpu_handle_halt). After
this series, cpu_has_work is called with the CPU lock held,
and only the targets that need the BQL in cpu_has_work
acquire it.

For some performance numbers, see the last patch.

The series is checkpatch-clean, except for one warning due to the
use of __COVERITY__ in cpus.c.

You can fetch this series from:

  https://github.com/cota/qemu/tree/cpu-lock-v3

Note that it applies on top of tcg-next + my dynamic TLB series,
which I'm using in the faint hope that the Ubuntu experiments might
run a bit faster.

Thanks!

                Emilio
