GLib's internal buffering implementation is opaque. The buffer appears to grow when large chunks of data arrive (e.g. from guest-file-write). Once the buffer has grown large, the glib main loop stops notifying the callback for small incoming chunks (e.g. from guest-file-close) and instead waits until the buffer fills up again. In this situation, qemu-ga simply looks hung/dead from the client's point of view.
With buffering disabled, qemu-ga processes each incoming chunk without delay, so the client no longer hits the situation where an issued command never gets a response.

Signed-off-by: WANG Chao <wcw...@gmail.com>
---
 qga/channel-posix.c | 9 +--------
 1 file changed, 1 insertion(+), 8 deletions(-)

diff --git a/qga/channel-posix.c b/qga/channel-posix.c
index 8aad4fe..a2ca161 100644
--- a/qga/channel-posix.c
+++ b/qga/channel-posix.c
@@ -116,6 +116,7 @@ static int ga_channel_client_add(GAChannel *c, int fd)
     client_channel = g_io_channel_unix_new(fd);
     g_assert(client_channel);
     g_io_channel_set_encoding(client_channel, NULL, &err);
+    g_io_channel_set_buffered(client_channel, false);
     if (err != NULL) {
         g_warning("error setting channel encoding to binary");
         g_error_free(err);
@@ -230,14 +231,6 @@ GIOStatus ga_channel_write_all(GAChannel *c, const gchar *buf, gsize size)
         size -= written;
     }
 
-    if (status == G_IO_STATUS_NORMAL) {
-        status = g_io_channel_flush(c->client_channel, &err);
-        if (err != NULL) {
-            g_warning("error flushing channel: %s", err->message);
-            return G_IO_STATUS_ERROR;
-        }
-    }
-
     return status;
 }
 
-- 
2.3.0
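
For context (not part of the patch): the minimal standalone sketch below illustrates the same GIOChannel setup, i.e. binary encoding plus g_io_channel_set_buffered(FALSE), so the watch callback fires for every chunk as it arrives, however small. The fd choice, buffer size, and callback name are illustrative only.

/* Sketch: unbuffered GIOChannel on stdin; the watch fires per chunk.
 * Build with: gcc demo.c $(pkg-config --cflags --libs glib-2.0) */
#include <glib.h>

static gboolean on_input(GIOChannel *channel, GIOCondition cond, gpointer data)
{
    GMainLoop *loop = data;
    gchar buf[4096];
    gsize n_read = 0;
    GError *err = NULL;
    GIOStatus status;

    if (cond & (G_IO_HUP | G_IO_ERR)) {
        g_main_loop_quit(loop);
        return FALSE;
    }

    status = g_io_channel_read_chars(channel, buf, sizeof(buf), &n_read, &err);
    if (err != NULL) {
        g_warning("read error: %s", err->message);
        g_error_free(err);
        g_main_loop_quit(loop);
        return FALSE;
    }
    if (status == G_IO_STATUS_EOF) {
        g_main_loop_quit(loop);
        return FALSE;
    }

    g_print("got a %" G_GSIZE_FORMAT "-byte chunk\n", n_read);
    return TRUE;  /* keep the watch installed */
}

int main(void)
{
    GMainLoop *loop = g_main_loop_new(NULL, FALSE);
    GIOChannel *channel = g_io_channel_unix_new(0 /* stdin */);
    GError *err = NULL;

    /* Binary (NULL) encoding is required before buffering can be disabled. */
    g_io_channel_set_encoding(channel, NULL, &err);
    g_assert(err == NULL);
    g_io_channel_set_buffered(channel, FALSE);

    g_io_add_watch(channel, G_IO_IN | G_IO_HUP | G_IO_ERR, on_input, loop);
    g_main_loop_run(loop);

    g_io_channel_unref(channel);
    g_main_loop_unref(loop);
    return 0;
}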