On 04/23/2013 04:59 PM, Paolo Bonzini wrote:
On 23/04/2013 03:55, mrhi...@linux.vnet.ibm.com wrote:
+static size_t qemu_rdma_get_max_size(QEMUFile *f, void *opaque,
+                                     uint64_t transferred_bytes,
+                                     uint64_t time_spent,
+                                     uint64_t max_downtime)
+{
+    static uint64_t largest = 1;
+    uint64_t max_size = ((double) transferred_bytes / time_spent)
+                            * max_downtime / 1000000;
+
+    if (max_size > largest) {
+        largest = max_size;
+    }
+
+    DPRINTF("MBPS: %f, max_size: %" PRIu64 " largest: %" PRIu64 "\n",
+                qemu_get_mbps(), max_size, largest);
+
+    return largest;
+}
Can you point me to the discussion of this algorithmic change and
qemu_get_max_size?  It seems to me that it assumes that the IB link is
basically dedicated to migration.

I think it is a big assumption and it may be hiding a bug elsewhere.  At
the very least, it should be moved to a separate commit and described in
the commit message, but actually I'd prefer to not include it in the
first submission.

Paolo


Until now, I had stopped using our 40G hardware and was testing
only on our 10G hardware.

But when I switched back to our 40G hardware, the throughput
was artificially limited to less than 10G.

So, I started investigating the problem and noticed that whenever
I disabled the max_size limit, the throughput went back to
normal (a peak of 26 Gbps).

So, rather than change the default max_size calculation for TCP,
which would improperly impact existing users of TCP migration,
I introduced a new QEMUFileOps hook to solve the problem.
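For illustration only, here is a minimal sketch of the idea: the field
name get_max_size and the default_get_max_size()/migration_get_max_size()
helpers are made up for this sketch and are not the names used in the
actual patch.

#include <stdint.h>
#include <stddef.h>

typedef struct QEMUFile QEMUFile;

typedef size_t (QEMUFileGetMaxSizeFunc)(QEMUFile *f, void *opaque,
                                        uint64_t transferred_bytes,
                                        uint64_t time_spent,
                                        uint64_t max_downtime);

typedef struct QEMUFileOps {
    /* ... existing callbacks (put_buffer, get_buffer, close, ...) ... */
    QEMUFileGetMaxSizeFunc *get_max_size;    /* assumed new hook */
} QEMUFileOps;

/* Default calculation, kept unchanged for TCP: recomputed from the
 * current bandwidth estimate on every iteration, so it can shrink. */
static size_t default_get_max_size(uint64_t transferred_bytes,
                                   uint64_t time_spent,
                                   uint64_t max_downtime)
{
    double bandwidth = (double) transferred_bytes / time_spent;

    return bandwidth * max_downtime / 1000000;
}

/* Migration loop: use the transport-specific hook (e.g. RDMA) when the
 * QEMUFile provides one, otherwise fall back to the TCP behaviour. */
static size_t migration_get_max_size(QEMUFile *f, const QEMUFileOps *ops,
                                     void *opaque,
                                     uint64_t transferred_bytes,
                                     uint64_t time_spent,
                                     uint64_t max_downtime)
{
    if (ops->get_max_size) {
        return ops->get_max_size(f, opaque, transferred_bytes,
                                 time_spent, max_downtime);
    }

    return default_get_max_size(transferred_bytes, time_spent,
                                max_downtime);
}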

What do you think?

- Michael
