Quoting Michael Roth (2013-08-15 10:23:20)
> Quoting Wenchao Xia (2013-08-13 03:44:39)
> > On 2013-8-13 1:01, Michael Roth wrote:
> > > Quoting Paolo Bonzini (2013-08-12 02:30:28)
> > >>> 1) rename AioContext to AioSource.
> > >>>    This is my major purpose: to make clear that it is not a "context"
> > >>>    concept, and that GMainContext is the entity that represents the
> > >>>    thread's activity.
> > >>
> > >> Note that the nested event loops in QEMU are _very_ different from
> > >> glib nested event loops. In QEMU, nested event loops only run block
> > >> layer events. In glib, they run all events. That's why you need
> > >> AioContext.
> > >
> > > Would it be possible to use glib for our nested loops as well by just
> > > setting a higher priority for the AioContext GSource?
> > >
> > > Stefan and I were considering how we could make use of his "drop
> > > ioflush" patches to use a common mechanism to register fd events, but
> > > still allow us to distinguish between AioContext and non-AioContext
> > > for nested loops. I was originally thinking of using prepare()
> > > functions to filter out non-AioContext events, but that requires we
> > > implement our GSources with that in mind, and not make use of
> > > pre-baked ones like GIOChannel's, and it bakes block stuff into every
> > > event source implementation.
> > >
> > Besides priority, g_source_set_can_recurse() can also help.
> >   Thinking about it more, I found a harder problem:
> > g_main_context_acquire() and g_main_context_release(). On release,
> > pending BH/IO callbacks need to be cleared, but this action can't be
> > triggered automatically when the user calls g_main_context_release().
>
> I don't understand why this is a requirement:
> g_main_context_acquire/release ensure that only one thread attempts to
> iterate the main loop at a time. This isn't currently an issue in qemu,
> and if we re-implemented qemu_aio_wait() to use the same glib
> interfaces, the tracking of in-flight requests would be moved to the
> block layer via Stefan's 'drop io_flush' patches, which moves that
> block-specific logic out of the main loop/AioContext GSource by design.
> Are there other areas where you see this as a problem?
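(As a point of reference, the guarantee mentioned above comes from the fact
that g_main_context_acquire() returns FALSE while another thread owns the
context. A minimal sketch of the pattern; drain_context() is only an
illustrative helper, not an existing QEMU function:)

    #include <glib.h>

    /* Dispatch whatever is pending on 'ctx', but only if no other thread
     * currently owns it; context ownership is what serializes iteration. */
    static void drain_context(GMainContext *ctx)
    {
        if (!g_main_context_acquire(ctx)) {
            return;                     /* another thread owns 'ctx' */
        }
        while (g_main_context_pending(ctx)) {
            g_main_context_iteration(ctx, FALSE);  /* dispatch, don't block */
        }
        g_main_context_release(ctx);
    }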
I think I understand better what you're referring to: you mean that if
qemu_aio_wait() was called, and was implemented to essentially call
g_main_context_iteration(), then after, say, one iteration, the
iothread/dataplane thread might acquire the main loop and dispatch
block/non-block events before qemu_aio_wait() returned?

The simple approach would be to have qemu_aio_wait() call
g_main_context_acquire/release at the start and end of the function, which
would ensure that this never happens.
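A rough sketch of that, assuming qemu_aio_wait() were re-implemented on top
of glib, and assuming (for illustration only) that qemu_aio_context is the
GMainContext the block-layer GSources are attached to, rather than today's
AioContext:

    #include <glib.h>
    #include <stdbool.h>

    /* Assumed for the sketch: the context that the block-layer GSources
     * (BHs, aio fd handlers) are attached to. */
    static GMainContext *qemu_aio_context;

    bool qemu_aio_wait(void)
    {
        bool progress;

        /* Hold ownership for the whole call so the iothread/dataplane
         * thread cannot acquire the context and dispatch events before
         * we return. */
        while (!g_main_context_acquire(qemu_aio_context)) {
            /* Another thread owns the context; a real implementation
             * would block on g_main_context_wait() instead of spinning. */
            g_thread_yield();
        }

        progress = g_main_context_iteration(qemu_aio_context, true);

        g_main_context_release(qemu_aio_context);
        return progress;
    }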
>
> >   For the above reason, I tend to think maybe we should wrap all of
> > glib's mainloop into a custom encapsulation, such as QContext, and add
> > the aio poll logic in q_context_release(). Use QContext * in every
> > caller to hide GMainContext *, so the QContext layer plays the role of
> > a clear event loop API.
> >
> > > Priorities didn't cross my mind though, but it seems pretty
> > > straightforward...
> > >
> > > AioContext could then just be a container of sorts for managing
> > > bottom-halves and AioContext FDs and binding them to the proper
> > > GMainContext/MainLoop, but the underlying GSources could still be
> > > driven by a normal glib-based mainloop, just with a specific priority
> > > in the nested case.
> > >
> > >>
> > >>> 2) Break AioSource into FdSource and BhSource.
> > >>>    This makes the custom code smaller and simpler: one GSource for
> > >>>    one kind of job. It is not necessary, but IMHO it will make
> > >>>    things clearer when we add more things into the main loop: add a
> > >>>    new GSource subclass, and avoid always having a relationship
> > >>>    with AioContext.
> > >>
> > >> But this is only complicating things, since users rely on both
> > >> file-descriptor APIs and bottom half APIs.
> > >
> > > Taking things a step further, maybe AioContext can stop being a
> > > block-specific construct, but actually be the "QContext" we've
> > > discussed in the past for managing multiple event loops. All the
> > > block stuff would be hidden away in the GSource priority.
> > >
> > > For instance,
> > >
> > > #ifndef _WIN32
> > >
> > > qemu_aio_set_fd_handler(fd, ...):
> > >     aio_set_fd_handler(qemu_aio_context, fd, ..., QEMU_PRIORITY_BLOCK)
> > >
> > > qemu_set_fd_handler(fd, ...):
> > >     aio_set_fd_handler(qemu_aio_context, fd, ..., G_PRIORITY_DEFAULT)
> > >
> > > #else
> > >
> > > qemu_add_wait_object(fd, ...):
> > >     add_wait_object(qemu_aio_context, fd, ...)
> > >
> > > qemu_set_fd_handler(fd, ...):
> > >     set_socket_handler(qemu_aio_context, fd, ..., G_PRIORITY_DEFAULT)
> > >
> > > #endif
> > >
> > > qemu_bh_schedule:
> > >     bh_schedule(qemu_aio_context, ...)
> > >
> > > etc...
> > >
> > > I'll be sending patches this week for moving
> > > add_wait_object/qemu_set_fd_handler to GSources; the non-global ones
> > > use a GMainContext * to specify a non-default thread/context, but
> > > that can be easily changed, or we can just do aioctx->g_main_context
> > > at the call sites. There are some nice possibilities in using the
> > > former though: avoiding O(n) lookups for stuff like finding the
> > > GSource for a particular event/event type, for instance, by storing
> > > pointers to the GSource or using some kind of hashmap lookup. But
> > > it's probably better to discuss that aspect with some context, so
> > > I'll try to get those patches out soon.
> > >
> > >>
> > >>>>>    More reasons:
> > >>>>>    When I was thinking about how to bind library code to a thread
> > >>>>> context, it may need to add the Context concept into the API of
> > >>>>> block.c. If I use AioContext, there will need to be a wrapper API
> > >>>>> to run the event loop. But if I got glib's GMainContext, things
> > >>>>> become simple.
> > >>
> > >> You already have it because AioContext is a GSource. You do not need
> > >> to expose the AioContext, except as a GSource.
> > >>
> >   I think exposing GMainContext * or QContext * is better than
> > GSource *.
> >
> > int bdrv_read(GMainContext *ctx,
> >               BlockDriverState *bs,
> >               int64_t sector_num,
> >               uint8_t *buf,
> >               int nb_sectors)
> >
> > >> Paolo
> > >
> >
> > --
> > Best Regards
> >
> > Wenchao Xia
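For illustration, the glue implied by the priority discussion above might
look roughly like the following; bind_aio_to_context() is a hypothetical
helper, G_PRIORITY_HIGH stands in for the proposed QEMU_PRIORITY_BLOCK
constant (which does not exist), and aio_get_g_source() is QEMU's existing
accessor for an AioContext's GSource:

    #include <glib.h>
    #include "block/aio.h"        /* AioContext, aio_get_g_source() */

    /* Attach an AioContext's GSource to a caller-provided GMainContext at
     * a higher priority, so that when block sources are ready a (nested)
     * iteration of 'ctx' dispatches them ahead of default-priority
     * sources, while callers only ever see the GMainContext. */
    static void bind_aio_to_context(AioContext *aio, GMainContext *ctx)
    {
        GSource *src = aio_get_g_source(aio);    /* takes a reference */

        g_source_set_priority(src, G_PRIORITY_HIGH);
        g_source_set_can_recurse(src, TRUE);
        g_source_attach(src, ctx);
        g_source_unref(src);                     /* 'ctx' keeps its own ref */
    }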