On 07/23/2012 05:28 PM, Sasha Levin wrote:
On 07/23/2012 07:54 AM, Jason Wang wrote:
On 07/21/2012 08:02 PM, Sasha Levin wrote:
On 07/20/2012 03:40 PM, Michael S. Tsirkin wrote:
-    err = init_vqs(vi);
+    if (virtio_has_feature(vdev, VIRTIO_NET_F_CTRL_VQ))
+        vi->has_cvq = true;
+
How about we disable multiqueue if there's no cvq?
Will make logic a bit simpler, won't it?
Multiqueue doesn't really depend on cvq. Does this added complexity really
justify adding an artificial limit?

Yes, it does not depend on cvq. The cvq is just used to negotiate the number of
queues the guest wishes to use, which is really useful (at least for now). Since
multiqueue cannot out-perform a single queue in every kind of workload or
benchmark, we want the guest driver to use a single queue by default even when
multiqueue is enabled by the management software, and let the user enable it
through ethtool. That way the user does not see a regression when switching to a
multiqueue-capable driver and backend.
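
Roughly, the negotiation through the cvq would look something like the sketch
below. This is only an illustration: the virtnet_send_command()-style helper,
the has_cvq/curr_queue_pairs fields and the VIRTIO_NET_CTRL_MQ /
VIRTIO_NET_CTRL_MQ_VQ_PAIRS_SET constants follow the multiqueue proposal and may
not match the final patches.

    /* Command payload from the proposed multiqueue spec (sketch). */
    struct virtio_net_ctrl_mq {
            u16 virtqueue_pairs;
    };

    /*
     * Sketch only: ask the device, via the control vq, to use
     * 'queue_pairs' TX/RX queue pairs.  Assumes a virtnet_send_command()
     * helper that sends one class/cmd plus a single out scatterlist.
     */
    static int virtnet_set_queues(struct virtnet_info *vi, u16 queue_pairs)
    {
            struct virtio_net_ctrl_mq s;
            struct scatterlist sg;

            /* Without a control vq we cannot negotiate, stay on one pair. */
            if (!vi->has_cvq)
                    return queue_pairs == 1 ? 0 : -EINVAL;

            s.virtqueue_pairs = queue_pairs;
            sg_init_one(&sg, &s, sizeof(s));

            if (!virtnet_send_command(vi, VIRTIO_NET_CTRL_MQ,
                                      VIRTIO_NET_CTRL_MQ_VQ_PAIRS_SET, &sg))
                    return -EINVAL;

            vi->curr_queue_pairs = queue_pairs;
            return 0;
    }

The driver would call this with 1 at probe time and only bump the count when
the user explicitly asks for more.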
Why would you limit it to a single vq if the user has specified a different number 
of vqs (>1) in the virtio-net device config?

The only reason is to prevent the user from seeing a regression. Small-packet transmit performance is worse than with a single queue: when multiqueue is enabled the guest tends to send more, but smaller, packets. If we make multiqueue behave as well as a single queue, we can remove this limit.
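
For completeness, the ethtool opt-in could be wired through the standard
.set_channels hook, along these lines. Again just a sketch: max_queue_pairs
would come from the device config (e.g. a max_virtqueue_pairs field in the
multiqueue proposal), and virtnet_set_queues() is the helper sketched earlier,
not necessarily the final code.

    /*
     * Sketch: let the user opt in to more queue pairs with
     * "ethtool -L ethX combined N".  The probe-time default stays at a
     * single pair; the upper bound comes from the device config space.
     */
    static int virtnet_set_channels(struct net_device *dev,
                                    struct ethtool_channels *channels)
    {
            struct virtnet_info *vi = netdev_priv(dev);
            u16 queue_pairs = channels->combined_count;

            /* Only symmetric TX/RX pairs are supported in this sketch. */
            if (channels->rx_count || channels->tx_count || channels->other_count)
                    return -EINVAL;

            if (queue_pairs == 0 || queue_pairs > vi->max_queue_pairs)
                    return -EINVAL;

            return virtnet_set_queues(vi, queue_pairs);
    }

    static const struct ethtool_ops virtnet_ethtool_ops = {
            .set_channels = virtnet_set_channels,
            /* ... */
    };

So the user would run something like "ethtool -L eth0 combined 4" to turn
multiqueue on, and the default single-queue behaviour is preserved until then.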
