Makes sense, thanks.
Cheers
JJ
> -----Original Message-----
> From: Maxime Coquelin [mailto:maxime.coque...@redhat.com]
> Sent: Wednesday, January 31, 2018 4:13 PM
> To: Chen, Junjie J; Victor Kaplansky
> Cc: dev@dpdk.org
> Subject: Re: [dpdk-dev] [PATCH v5] vhost_user:
Hi,
On 01/31/2018 07:51 AM, Chen, Junjie J wrote:
> Hi
> May I know why trylock is not also used in the enqueue path?
Because if rte_vhost_enqueue_burst() returns 0, the caller is likely to
drop the packets. This is what happens with OVS, for example:
static void
__netdev_dpdk_vhost_send(struct netdev *ne
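
The quoted OVS function is cut off above. As a rough illustration of the behavior Maxime describes, here is a minimal caller-side sketch; it is hypothetical, not the actual OVS code, and the function name send_burst_or_drop and its arguments are assumptions. The point is that anything rte_vhost_enqueue_burst() does not accept is simply freed, i.e. dropped, by the caller.

#include <rte_mbuf.h>
#include <rte_vhost.h>

/* Hypothetical caller-side sketch (not the actual OVS code): packets that
 * rte_vhost_enqueue_burst() does not accept are freed, i.e. dropped. */
static void
send_burst_or_drop(int vid, uint16_t queue_id,
                   struct rte_mbuf **pkts, uint16_t count)
{
    uint16_t sent = rte_vhost_enqueue_burst(vid, queue_id, pkts, count);

    /* If the enqueue path returned 0 because a trylock failed, the whole
     * burst would be freed here instead of being retried. */
    while (sent < count)
        rte_pktmbuf_free(pkts[sent++]);
}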
Hi
May I know why trylock is not also used in the enqueue path?
Cheers
JJ
>
> When performing live migration or memory hot-plugging, the changes to the
> device and vrings made by the message handler are done independently of
> vring usage by the PMD threads.
>
> This causes, for example, segfaults during live migration
On Wed, Jan 17, 2018 at 03:49:25PM +0200, Victor Kaplansky wrote:
> When performing live migration or memory hot-plugging,
> the changes to the device and vrings made by the message handler
> are done independently of vring usage by the PMD threads.
>
> This causes, for example, segfaults during live migration
On 01/17/2018 02:49 PM, Victor Kaplansky wrote:
> When performing live migration or memory hot-plugging,
> the changes to the device and vrings made by the message handler
> are done independently of vring usage by the PMD threads.
> This causes, for example, segfaults during live migration
> with MQ enabled, but in
When performing live migration or memory hot-plugging,
the changes to the device and vrings made by the message handler
are done independently of vring usage by the PMD threads.
This causes, for example, segfaults during live migration
with MQ enabled, but in general virtually any request
sent by qemu changing
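
For context on the locking scheme being discussed in this thread, here is a minimal sketch of a per-vring lock where the datapath uses trylock so it never blocks on the message handler. The struct, field, and function names below are hypothetical assumptions for illustration, not the actual patch.

#include <stdint.h>
#include <rte_spinlock.h>

struct vring_state {
    rte_spinlock_t access_lock;   /* hypothetical per-vring lock */
    int            enabled;
};

/* PMD thread (datapath): give up immediately if the handler holds the lock. */
static uint16_t
datapath_dequeue(struct vring_state *vr)
{
    uint16_t count = 0;

    if (!rte_spinlock_trylock(&vr->access_lock))
        return 0;   /* ring is being reconfigured; caller retries later */

    if (vr->enabled) {
        /* ... process descriptors and fill count ... */
    }

    rte_spinlock_unlock(&vr->access_lock);
    return count;
}

/* Message handler thread: wait until the PMD has left the critical section
 * before applying a ring change requested by QEMU. */
static void
handler_update_ring(struct vring_state *vr)
{
    rte_spinlock_lock(&vr->access_lock);
    /* ... update ring addresses, enable/disable the ring, etc. ... */
    rte_spinlock_unlock(&vr->access_lock);
}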