On 10/16/19 3:32 PM, Ilya Maximets wrote:
> On 16.10.2019 13:13, Maxime Coquelin wrote:
>>
>>
>> On 10/15/19 8:59 PM, Flavio Leitner wrote:
>>> The rte_vhost_dequeue_burst supports two ways of dequeuing data.
>>> If the data fits into a buffer, then all data is copied and a
>>> single linear buffer is returned. Otherwise it allocates
>>> additional mbufs and chains them together to return a
>>> multi-segment mbuf.
>>>
>>> While that covers most use cases, it forces applications that
>>> need to work with larger data sizes to support multi-segment
>>> mbufs. The non-linear characteristic brings complexity and
>>> performance implications to the application.
>>>
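For illustration, a minimal sketch (not from the patch, the helper name is
made up) of what an application that can only handle linear data has to do
today when rte_vhost_dequeue_burst() hands it a chained mbuf:

#include <stdint.h>
#include <string.h>
#include <rte_mbuf.h>

/* Flatten a (possibly multi-segment) mbuf into a caller-provided
 * linear buffer. Returns 0 on success, -1 if dst is too small.
 */
static int
copy_to_linear_buf(const struct rte_mbuf *m, uint8_t *dst, size_t dst_len)
{
        if (rte_pktmbuf_pkt_len(m) > dst_len)
                return -1;

        /* Walk the segment chain and copy each piece. */
        for (; m != NULL; m = m->next) {
                memcpy(dst, rte_pktmbuf_mtod(m, const void *), m->data_len);
                dst += m->data_len;
        }

        return 0;
}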
>>> To resolve the issue, add support for attaching an external
>>> buffer to a pktmbuf, and let the host indicate during registration
>>> whether attaching an external buffer to a pktmbuf is supported and
>>> whether only linear buffers are supported.
>>>
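For reference, on the application side the negotiation would look roughly
like this (a sketch, assuming the RTE_VHOST_USER_EXTBUF_SUPPORT and
RTE_VHOST_USER_LINEARBUF_SUPPORT flags added by this series; per the
changelog below they cannot be combined with dequeue zero copy):

#include <rte_vhost.h>

/* Sketch: ask vhost for linear mbufs only, letting it attach an
 * external buffer when a packet does not fit in the mbuf's data room.
 */
static int
register_vhost_socket(const char *path)
{
        uint64_t flags = RTE_VHOST_USER_EXTBUF_SUPPORT |
                         RTE_VHOST_USER_LINEARBUF_SUPPORT;

        return rte_vhost_driver_register(path, flags);
}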
>>> Signed-off-by: Flavio Leitner <f...@sysclose.org>
>>> ---
>>> doc/guides/prog_guide/vhost_lib.rst | 35 +++++++++
>>> lib/librte_vhost/rte_vhost.h | 4 +
>>> lib/librte_vhost/socket.c | 22 ++++++
>>> lib/librte_vhost/vhost.c | 22 ++++++
>>> lib/librte_vhost/vhost.h | 4 +
>>> lib/librte_vhost/virtio_net.c | 109 ++++++++++++++++++++++++----
>>> 6 files changed, 182 insertions(+), 14 deletions(-)
>>>
>>> - Changelog:
>>> v5:
>>> - fixed to destroy mutex if incompatible flags
>>> v4:
>>> - allow using the pktmbuf if there is exactly enough space
>>> - removed log message if the buffer is too big
>>> - fixed the length to include align padding
>>> - free allocated buf if shinfo fails
>>> v3:
>>> - prevent the new features from being used with zero copy
>>> - fixed sizeof() usage
>>> - fixed log msg indentation
>>> - removed/replaced asserts
>>> - used the correct virt2iova function
>>> - fixed the patch's title
>>> - OvS PoC code:
>>> https://github.com/fleitner/ovs/tree/rte_malloc-v3
>>> v2:
>>> - Used rte_malloc() instead of another mempool as suggested by
>>> Shahaf.
>>> - Added the documentation section.
>>> - Using driver registration to negotiate the features.
>>> - OvS PoC code:
>>>
>>> https://github.com/fleitner/ovs/commit/8fc197c40b1d4fda331686a7b919e9e2b670dda7
>>>
>>
>> Applied to dpdk-next-virtio/master.
>
> Thanks Maxime,
>
> But can we bring back the mbuf allocation failure message?
Good point, I missed it.
> I mean:
>
> diff --git a/lib/librte_vhost/virtio_net.c b/lib/librte_vhost/virtio_net.c
> index 66f0c7206..f8af4e0b3 100644
> --- a/lib/librte_vhost/virtio_net.c
> +++ b/lib/librte_vhost/virtio_net.c
> @@ -1354,8 +1354,11 @@ virtio_dev_pktmbuf_alloc(struct virtio_net *dev, struct rte_mempool *mp,
>  {
>          struct rte_mbuf *pkt = rte_pktmbuf_alloc(mp);
> 
> -        if (unlikely(pkt == NULL))
> +        if (unlikely(pkt == NULL)) {
> +                RTE_LOG(ERR, VHOST_DATA,
> +                        "Failed to allocate memory for mbuf.\n");
>                  return NULL;
> +        }
> 
>          if (rte_pktmbuf_tailroom(pkt) >= data_len)
>                  return pkt;
> ---
>
> It's a hard failure that points to a significant issue: either the
> memory pool is too small or there is an mbuf leak.
I agree, we need this error message.
> We still have the message for subsequent chained mbufs, but not
> for the first one. Without this error message we would never
> notice that something is wrong with our memory pool; the
> network traffic would just stop flowing.
> The message was very useful previously while catching the root
> cause of the mbuf leak in OVS.
>
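Right, and short of that message the only way an application would notice
is by watching the pool itself, e.g. something along these lines (an
illustrative sketch, not OVS code; the threshold is arbitrary):

#include <rte_mempool.h>
#include <rte_log.h>

/* Periodically called check that hints at an mbuf leak by watching
 * how many mbufs remain available in the pool.
 */
static void
warn_if_pool_low(const struct rte_mempool *mp)
{
        unsigned int avail = rte_mempool_avail_count(mp);
        unsigned int in_use = rte_mempool_in_use_count(mp);

        if (avail < mp->size / 10)
                RTE_LOG(WARNING, USER1,
                        "mempool %s nearly exhausted: %u available, %u in use\n",
                        mp->name, avail, in_use);
}

So having the error log in the dequeue path is much more convenient.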
> I could send a separate patch for this if needed.
We need a separate patch.
If you want to do it, please do! Otherwise I'll do it.
Thanks!
Maxime
> Best regards, Ilya Maximets.