On 3.09.2024 15:55, Stefan Hajnoczi wrote:
On Tue, 27 Aug 2024 at 13:58, Maciej S. Szmigiero
<m...@maciej.szmigiero.name> wrote:

From: "Maciej S. Szmigiero" <maciej.szmigi...@oracle.com>

Migration code wants to manage device data sending threads in one place.

QEMU has an existing thread pool implementation; however, it was limited
to queuing AIO operations only and essentially had a 1:1 mapping between
the current AioContext and the ThreadPool in use.

Implement what is necessary to queue generic (non-AIO) work on a ThreadPool
too.

This brings a few new operations on a pool:
* thread_pool_set_minmax_threads() explicitly sets the minimum and maximum
thread count in the pool.

* thread_pool_join() waits until all the submitted work requests
have finished.

* thread_pool_poll() lets the new-thread and/or thread-completion bottom
halves run (if they are scheduled to run).
It is useful for thread pool users that need to launch or terminate new
threads without returning to the QEMU main loop.
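
As a rough usage sketch from the migration side (the pool creation and
the generic-work submission helper below are placeholders, not the
exact functions this patch adds):

    /* 'pool' comes from whatever constructor the patch provides;
     * submit_generic_work() stands in for the non-AIO submission
     * helper */
    thread_pool_set_minmax_threads(pool, 1, max_send_threads);

    for (int i = 0; i < n_reqs; i++) {
        submit_generic_work(pool, send_device_data, &reqs[i]);
    }

    /* let the new-thread / thread-completion bottom halves run
     * without returning to the QEMU main loop */
    thread_pool_poll(pool);

    /* block until all submitted work requests have finished */
    thread_pool_join(pool);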

Did you consider glib's GThreadPool?
https://docs.gtk.org/glib/struct.ThreadPool.html

QEMU's thread pool is integrated into the QEMU event loop. If your
goal is to bypass the QEMU event loop, then you may as well use the
glib API instead.
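
(For reference, a minimal sketch of that glib API; error handling
omitted:)

    static void worker_func(gpointer data, gpointer user_data)
    {
        /* process one queued work item */
    }

    GThreadPool *gpool = g_thread_pool_new(worker_func, NULL,
                                           max_threads, FALSE, NULL);
    g_thread_pool_push(gpool, work_item, NULL);
    /* waits for outstanding items, but also destroys the pool */
    g_thread_pool_free(gpool, FALSE, TRUE);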

thread_pool_join() and thread_pool_poll() will lead to code that
blocks the event loop. QEMU's aio_poll() and nested event loops in
general are a source of hangs and re-entrancy bugs. I would prefer not
introducing these issues in the QEMU ThreadPool API.


Unfortunately, the problem with the migration code is that it is
synchronous - it does not return to the main event loop until the
migration is done.

So the only way to handle things that need a working event loop is to
pump it manually from inside the migration code.

The reason why I used the QEMU thread pool in the first place in this
patch set version is because Peter asked me to do so during the review
of its previous iteration [1].

Peter also previously asked me to move from the Glib synchronization
primitives to the QEMU ones in the early version of this
patch set [2].

I personally would rather use something common to many applications,
well tested and with more pairs of eyes looking at it, than re-invent
things in QEMU.

Looking at GThreadPool, it seems to lack the ability to wait until
all queued work has finished, so this would need to be open-coded
in the migration code.
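
For illustration, the kind of open-coding that would be needed (a rough
sketch; the tracker structure and names are made up) - each submission
would bump 'pending' under the lock before g_thread_pool_push(), and the
worker would drop it when the item is done:

    typedef struct {
        GMutex lock;
        GCond cond;
        unsigned int pending;   /* submitted but not yet finished */
    } WorkTracker;

    static void worker_func(gpointer data, gpointer user_data)
    {
        WorkTracker *t = user_data;

        process_one_item(data);         /* placeholder handler */

        g_mutex_lock(&t->lock);
        if (--t->pending == 0) {
            g_cond_signal(&t->cond);
        }
        g_mutex_unlock(&t->lock);
    }

    static void wait_for_all(WorkTracker *t)
    {
        g_mutex_lock(&t->lock);
        while (t->pending > 0) {
            g_cond_wait(&t->cond, &t->lock);
        }
        g_mutex_unlock(&t->lock);
    }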

@Peter, what's your opinion on using Glib's thread pool instead of
QEMU's one, considering the above things?

Thanks,
Maciej

[1]: https://lore.kernel.org/qemu-devel/ZniFH14DT6ycjbrL@x1n/ point 5: "Worker thread model"
[2]: https://lore.kernel.org/qemu-devel/Zi_9SyJy__8wJTou@x1n/

