On 03/19/2013 05:18 AM, Paolo Bonzini wrote:
On 18/03/2013 21:33, Michael R. Hines wrote:
+int qemu_drain(QEMUFile *f)
+{
+    return f->ops->drain ? f->ops->drain(f->opaque) : 0;
+}
Hmm, this is very similar to qemu_fflush, but not quite. :/
Why exactly is this needed?
Good idea - I'll replace drain with flush once I've added
the "qemu_file_ops_are(const QEMUFile *, const QEMUFileOps *)"
helper that you recommended.
If I understand correctly, the problem is that save_rdma_page is
asynchronous and you have to wait for pending operations to do the
put_buffer protocol correctly.
Would it work to just do the "drain" in the put_buffer operation, if and
only if it was preceded by a save_rdma_page operation?
Yes, the drain needs to happen in a few places already:
1. During save_rdma_page (if the current "chunk" is full of pages)
2. At the end of each iteration (now using qemu_fflush in my current
patch)
3. And also during qemu_savevm_state_complete(), also using qemu_fflush.
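(For illustration only, not part of the patch: a minimal C sketch of the
"drain only if a save_rdma_page preceded it" idea discussed above. The
rdma_dirty flag, qemu_put_buffer_sketch() and the simplified structs are
assumptions made for the sketch; only the qemu_drain() body is taken from
the quoted patch.)

#include <stdint.h>
#include <stdbool.h>

/* Simplified stand-ins for QEMUFileOps/QEMUFile; not the real definitions. */
typedef struct QEMUFileOps {
    int (*put_buffer)(void *opaque, const uint8_t *buf, int64_t pos, int size);
    int (*drain)(void *opaque);   /* wait for outstanding RDMA writes */
} QEMUFileOps;

typedef struct QEMUFile {
    const QEMUFileOps *ops;
    void *opaque;
    int64_t pos;
    bool rdma_dirty;              /* hypothetical: set by save_rdma_page() */
} QEMUFile;

/* Same shape as the qemu_drain() quoted above. */
static int qemu_drain(QEMUFile *f)
{
    return f->ops->drain ? f->ops->drain(f->opaque) : 0;
}

/* Drain only when a save_rdma_page() preceded this buffered write,
 * i.e. the "if and only if" variant. */
static int qemu_put_buffer_sketch(QEMUFile *f, const uint8_t *buf, int size)
{
    if (f->rdma_dirty) {
        int ret = qemu_drain(f);
        if (ret < 0) {
            return ret;
        }
        f->rdma_dirty = false;
    }

    int ret = f->ops->put_buffer(f->opaque, buf, f->pos, size);
    if (ret > 0) {
        f->pos += ret;
    }
    return ret;
}

In this variant save_rdma_page() would set rdma_dirty whenever it posts
asynchronous writes, so ordinary buffered traffic only pays the drain cost
when RDMA work is actually outstanding.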
/** Flushes QEMUFile buffer
 *
 */
@@ -723,6 +867,8 @@ int qemu_get_byte(QEMUFile *f)
int64_t qemu_ftell(QEMUFile *f)
{
     qemu_fflush(f);
+    if (migrate_use_rdma(f))
+        return delta_norm_mig_bytes_transferred();
Not needed, and another undesirable dependency (savevm.c ->
arch_init.c). Just update f->pos in save_rdma_page.
f->pos isn't good enough because save_rdma_page does not
go through QEMUFile directly - only non-live state goes
through QEMUFile; pc.ram uses direct RDMA writes.
As a result, the position pointer does not get updated
and the accounting is missed.
Yes, I am suggesting that you modify f->pos in save_rdma_page instead.
Paolo
Would that not confuse the other QEMUFile users?
If I change that pointer (without actually putting bytes
into QEMUFile), won't the f->pos pointer be
incorrectly updated?
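(Again for illustration only: a minimal sketch of what "update f->pos in
save_rdma_page" could look like. rdma_write_page() and the stripped-down
QEMUFile here are assumptions for the sketch, not the actual code.)

#include <stdint.h>
#include <stddef.h>

/* Hypothetical minimal QEMUFile: only the field relevant to the point. */
typedef struct QEMUFile {
    int64_t pos;
} QEMUFile;

/* Hypothetical asynchronous RDMA write; stands in for the real transfer. */
static int rdma_write_page(void *opaque, uint64_t block_offset,
                           uint64_t offset, size_t size)
{
    (void)opaque; (void)block_offset; (void)offset; (void)size;
    return 0;
}

/* Sketch of the suggestion: the direct RDMA path bypasses the QEMUFile
 * buffer, so save_rdma_page() itself bumps f->pos by the bytes it moved,
 * and qemu_ftell() then reflects RDMA traffic without savevm.c having to
 * call into arch_init.c. */
static int save_rdma_page_sketch(QEMUFile *f, void *opaque,
                                 uint64_t block_offset, uint64_t offset,
                                 size_t size)
{
    int ret = rdma_write_page(opaque, block_offset, offset, size);
    if (ret < 0) {
        return ret;
    }
    f->pos += size;   /* bytes sent via RDMA, not through the buffer */
    return 0;
}

The intent here is that f->pos feeds accounting such as qemu_ftell(), so
counting the out-of-band RDMA bytes there keeps the totals right without
ever touching the QEMUFile buffer itself.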