On 06/11/2018 03:57 AM, Juergen Gross wrote:
> The max number of slots used in xennet_get_responses() is set to
> MAX_SKB_FRAGS + (rx->status <= RX_COPY_THRESHOLD).
>
> In old kernel-xen MAX_SKB_FRAGS was 18, while nowadays it is 17. This
> difference results in frequent "too many slots" messages and reduced
> network throughput for some workloads (a factor of 10 below that of a
> kernel-xen based guest).
>
> Replacing MAX_SKB_FRAGS with XEN_NETIF_NR_SLOTS_MIN in the calculation
> of the max number of slots to use solves that problem (tests showed no
> more "too many slots" messages, and throughput was as high as with the
> kernel-xen based guest system).
>
> Signed-off-by: Juergen Gross <jgr...@suse.com>

Reviewed-by: Boris Ostrovsky <boris.ostrov...@oracle.com>

I wonder also whether netfront_tx_slot_available() is meant to be

return (queue->tx.req_prod_pvt - queue->tx.rsp_cons) <
                (NET_TX_RING_SIZE - XEN_NETIF_NR_SLOTS_MIN - 1);

which is the same numeric value but provides a more accurate description
of what is being tested.
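
Something like this minimal sketch of the equivalence (assuming the
existing check reads NET_TX_RING_SIZE - MAX_SKB_FRAGS - 2, and a
single-page ring, i.e. NET_TX_RING_SIZE == 256 with 4 KiB pages):

	/* Illustration only, not driver code; values are assumptions
	 * spelled out above, not taken from the headers.
	 */
	#define NET_TX_RING_SIZE	256
	#define MAX_SKB_FRAGS		17
	#define XEN_NETIF_NR_SLOTS_MIN	18

	_Static_assert(NET_TX_RING_SIZE - MAX_SKB_FRAGS - 2 ==
		       NET_TX_RING_SIZE - XEN_NETIF_NR_SLOTS_MIN - 1,
		       "same numeric bound, reservation named explicitly");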

-boris


> ---
>  drivers/net/xen-netfront.c | 2 +-
>  1 file changed, 1 insertion(+), 1 deletion(-)
>
> diff --git a/drivers/net/xen-netfront.c b/drivers/net/xen-netfront.c
> index 679da1abd73c..ba411005d829 100644
> --- a/drivers/net/xen-netfront.c
> +++ b/drivers/net/xen-netfront.c
> @@ -790,7 +790,7 @@ static int xennet_get_responses(struct netfront_queue *queue,
>       RING_IDX cons = queue->rx.rsp_cons;
>       struct sk_buff *skb = xennet_get_rx_skb(queue, cons);
>       grant_ref_t ref = xennet_get_rx_ref(queue, cons);
> -     int max = MAX_SKB_FRAGS + (rx->status <= RX_COPY_THRESHOLD);
> +     int max = XEN_NETIF_NR_SLOTS_MIN + (rx->status <= RX_COPY_THRESHOLD);
>       int slots = 1;
>       int err = 0;
>       unsigned long ret;
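
To spell out the arithmetic behind the change (illustration only;
values assume 4 KiB pages, and the (rx->status <= RX_COPY_THRESHOLD)
term adds at most one slot):

	/* Illustration only, not driver code. */
	#define MAX_SKB_FRAGS_OLD_XEN	18	/* old kernel-xen value */
	#define MAX_SKB_FRAGS_NOW	17	/* current upstream value */
	#define XEN_NETIF_NR_SLOTS_MIN	18	/* xen netif slot guarantee */

	_Static_assert(MAX_SKB_FRAGS_OLD_XEN + 1 == 19, "old slot limit");
	_Static_assert(MAX_SKB_FRAGS_NOW + 1 == 18, "current slot limit");
	_Static_assert(XEN_NETIF_NR_SLOTS_MIN + 1 == 19, "patched slot limit");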

