On 01/08/15 11:19, Vlad Zolotarov wrote:
>
> On 01/07/15 08:32, Ouyang Changchun wrote:
>> Check mq mode for VMDq RSS, handle it correctly instead of returning
>> an error;
>> Also remove the limitation that the per pool queue number has a max value
>> of 1, because
>> the per pool queue number could be
Hi, Guys,
I'm trying to run a DPDK (1.7.1) application that has previously been tested
on Xen/VMware VMs. I have both iommu=pt and intel_iommu=on.
I would expect things to work as usual, but unfortunately the VF I'm taking
is unable to send or receive any packets (the TXQ gets filled out, and the
pa
My opinion on this is that the lcore_id is rarely (if ever) used to find the
actual core a thread is being run on. Instead it is used 99% of the time as a
unique array index per thread, and therefore we can keep that usage by
just assigning a valid lcore_id to any extra threads created. The
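A minimal sketch of that array-index pattern (my own illustration, not code from the thread): rte_lcore_id() serves only as a unique per-thread slot, so any extra thread that is handed a free lcore_id can use the same arrays without locking.

#include <stdint.h>

#include <rte_lcore.h>
#include <rte_memory.h>

struct thread_stats {
	uint64_t rx_pkts;
	uint64_t tx_pkts;
} __rte_cache_aligned;

static struct thread_stats stats[RTE_MAX_LCORE];

static inline void
count_rx(uint16_t n)
{
	/* lcore_id is used purely as a per-thread slot; no locking needed. */
	stats[rte_lcore_id()].rx_pkts += n;
}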
Hi Steve,
> -----Original Message-----
> From: dev [mailto:dev-bounces at dpdk.org] On Behalf Of Liang, Cunming
> Sent: Tuesday, December 23, 2014 9:52 AM
> To: Stephen Hemminger; Richardson, Bruce
> Cc: dev at dpdk.org
> Subject: Re: [dpdk-dev] [RFC PATCH 0/7] support multi-phtread per lcore
>
> -----Original Message-----
> From: Richard Sanger [mailto:rsangerarj at gmail.com]
> Sent: Wednesday, January 7, 2015 9:10 PM
> To: Pattan, Reshma
> Cc: dev at dpdk.org
> Subject: Re: [dpdk-dev] [PATCH 1/3] librte_reorder: New reorder library
>
> On 8 January 2015 at 05:39, Reshma Pattan wrot
I believe there are defects in the code supporting manual configuration of
port link speed and link duplex to values other than auto-negotiate. The
TESTPMD application from DPDK version 1.8.0 was executed on 2 different
systems having 4x1G NICs with the following Ethernet controllers:
* Intel Cor
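For reference, a hedged sketch of forcing speed/duplex with the DPDK 1.8-era rte_eth_conf fields under discussion; the field and macro names are taken from the 1.8.0 headers as I recall them and should be treated as assumptions.

#include <rte_ethdev.h>

/* Force 1 Gb/s full duplex instead of auto-negotiation (1.8-era API). */
static struct rte_eth_conf port_conf = {
	.link_speed  = ETH_LINK_SPEED_1000,   /* 0 means auto-negotiate */
	.link_duplex = ETH_LINK_FULL_DUPLEX,  /* 0 means auto-negotiate */
};

/* Applied as usual: rte_eth_dev_configure(port_id, 1, 1, &port_conf); */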
> -----Original Message-----
> From: Neil Horman [mailto:nhorman at tuxdriver.com]
> Sent: Wednesday, January 7, 2015 5:45 PM
> To: Pattan, Reshma
> Cc: dev at dpdk.org
> Subject: Re: [dpdk-dev] [PATCH 1/3] librte_reorder: New reorder library
>
> On Wed, Jan 07, 2015 at 04:39:11PM +, Reshma
On Thu, Jan 08, 2015 at 01:40:54PM +0530, Prashant Upadhyaya wrote:
> Hi,
>
> I am migrating from DPDK1.7 to DPDK1.8.
> My application works fine with DPDK1.7.
> I am using 10 Gb Intel 82599 NIC.
> I have jumbo frames enabled, with max_rx_pkt_len = 10232
> My mbuf dataroom size is 2048+headroom
>
Hi,
I am migrating from DPDK1.7 to DPDK1.8.
My application works fine with DPDK1.7.
I am using 10 Gb Intel 82599 NIC.
I have jumbo frames enabled, with max_rx_pkt_len = 10232
My mbuf dataroom size is 2048+headroom
So naturally the ixgbe_recv_scattered_pkts driver function is triggered for
receivin
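A minimal sketch (an assumption, not the poster's actual code) of the RX mode that pushes ixgbe onto the scattered receive path: max_rx_pkt_len larger than the mbuf data room, so each jumbo frame is delivered as a chain of mbufs.

#include <rte_ethdev.h>

static struct rte_eth_conf port_conf = {
	.rxmode = {
		.jumbo_frame    = 1,      /* accept frames larger than 1518 bytes */
		.max_rx_pkt_len = 10232,  /* as quoted above */
		/*
		 * With a 2048-byte mbuf data room, ixgbe selects
		 * ixgbe_recv_scattered_pkts and chains mbufs per frame.
		 */
	},
};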
On 01/07/15 08:32, Ouyang Changchun wrote:
> This patch enables VF RSS for Niantic, which allows each VF to have at most 4
> queues.
> The actual queue number per VF depends on the total number of pools, which is
> determined by the max number of VFs at the PF initialization stage and the number
> of
>
On 01/07/15 08:32, Ouyang Changchun wrote:
> Set VMDq RSS mode if it has VFs (VF number is more than 1) and has RSS
> information.
>
> Signed-off-by: Changchun Ouyang
Reviewed-by: Vlad Zolotarov
Some nitpicking below... ;)
>
> changes in v5
>- Assign txmode.mq_mode with ETH_MQ_TX_NONE expl
On 01/07/15 08:32, Ouyang Changchun wrote:
> It needs to configure RSS and the IXGBE_MRQC and IXGBE_VFPSRTYPE registers to
> enable VF RSS.
>
> The psrtype will determine how many queues the received packets will be
> distributed to,
> and the value of psrtype should depend on both facets: the max VF rxq number which
> has be
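A hedged guest-side sketch of what the PF-side MRQC/VFPSRTYPE programming enables: a VF configured for multiple RX queues with RSS. The queue counts and rss_hf choice are assumptions drawn from this thread, not code from the patch.

#include <rte_ethdev.h>

static struct rte_eth_conf vf_conf = {
	.rxmode = { .mq_mode = ETH_MQ_RX_RSS },
	.rx_adv_conf = {
		.rss_conf = {
			.rss_key = NULL,        /* let the PMD pick a default key */
			.rss_hf  = ETH_RSS_IP,  /* hash on IP addresses */
		},
	},
};

/* e.g. rte_eth_dev_configure(vf_port, 4, 4, &vf_conf); with 2 or 4 queues
 * per VF depending on how many pools the PF was initialized with. */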
On 01/07/15 08:32, Ouyang Changchun wrote:
> Check mq mode for VMDq RSS, handle it correctly instead of returning an error;
> Also remove the limitation that the per pool queue number has a max value of 1,
> because
> the per pool queue number could be 2 or 4 if it is VMDq RSS mode;
>
> The number of rxq
On 01/08/2015 12:59 AM, Stephen Hemminger wrote:
> On Wed, 7 Jan 2015 14:03:29 +0100
> David Marchand wrote:
>
>> +	buf = strdup(devargs_str);
>> +	if (buf == NULL) {
>> +		RTE_LOG(ERR, EAL, "cannot allocate temp memory for devargs\n");
>> +		goto fail;
>> +	}
>> +
>
On 01/07/15 08:32, Ouyang Changchun wrote:
> Get the available Rx and Tx queue number when receiving IXGBE_VF_GET_QUEUES
> message from VF.
>
> Signed-off-by: Changchun Ouyang
Reviewed-by: Vlad Zolotarov
>
> changes in v5
>- Add some 'FIX ME' comments for IXGBE_VF_TRANS_VLAN.
>
> ---
>
Hi Frank,
> -----Original Message-----
> From: Liu, Jijiang
> Sent: Thursday, January 08, 2015 8:52 AM
> To: Ananyev, Konstantin; 'Olivier MATZ'
> Cc: dev at dpdk.org
> Subject: RE: [dpdk-dev] [PATCH v3 0/3] enhance TX checksum command and csum
> forwarding engine
>
> Hi,
>
> > -Original Me
On 8 January 2015 at 05:39, Reshma Pattan wrote:
> From: Reshma Pattan
>
> 1) New library to provide reordering of out-of-order
> mbufs based on the sequence number of the mbuf. The library uses a reorder
> buffer structure
> which in turn uses two circular buffers called
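A rough usage sketch of the library under review; the names (rte_reorder_create/insert/drain) follow the librte_reorder API as eventually merged and may differ in detail from this version of the patch.

#include <rte_mbuf.h>
#include <rte_reorder.h>

#define BURST_SIZE 32

static void
reorder_and_forward(struct rte_reorder_buffer *rob,
		    struct rte_mbuf **in, uint16_t nb_in)
{
	struct rte_mbuf *out[BURST_SIZE];
	unsigned int i, nb_out;

	/* Insert possibly out-of-order mbufs, keyed by their sequence number. */
	for (i = 0; i < nb_in; i++)
		rte_reorder_insert(rob, in[i]);

	/* Drain whatever is now in sequence and hand it on (freed here). */
	nb_out = rte_reorder_drain(rob, out, BURST_SIZE);
	for (i = 0; i < nb_out; i++)
		rte_pktmbuf_free(out[i]);
}

/* Buffer created once per flow/port, e.g.:
 * struct rte_reorder_buffer *rob =
 *	rte_reorder_create("rob0", rte_socket_id(), 1024);
 */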
> diff --git a/lib/librte_vhost/virtio-net.c b/lib/librte_vhost/virtio-net.c
> index 27ba175..744156c 100644
> --- a/lib/librte_vhost/virtio-net.c
> +++ b/lib/librte_vhost/virtio-net.c
> @@ -68,7 +68,9 @@ static struct virtio_net_device_ops const *notify_ops;
> static struct virtio_net_config_ll
This patch integrates the syn filter into the new API in the ixgbe/igb drivers.
changes:
ixgbe: remove old functions that deal with syn filter
ixgbe: add new functions that deal with syn filter (fit for filter_ctrl API)
e1000: remove old functions that deal with syn filter
e1000: add new functions that deal with
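A hedged sketch of driving the SYN filter through the generic filter_ctrl API that this patch wires ixgbe/e1000 into; the struct and enum names are from rte_eth_ctrl.h as merged, and details such as the hig_pri field should be treated as assumptions.

#include <rte_ethdev.h>
#include <rte_eth_ctrl.h>

static int
add_syn_filter(uint8_t port, uint16_t queue)
{
	struct rte_eth_syn_filter f = {
		.hig_pri = 1,     /* take precedence over other filters */
		.queue   = queue, /* steer TCP SYN packets to this RX queue */
	};

	return rte_eth_dev_filter_ctrl(port, RTE_ETH_FILTER_SYN,
				       RTE_ETH_FILTER_ADD, &f);
}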
I finally found the time to try this and I noticed that on a server
with 1 NUMA node this works, but if the server has 2 NUMA nodes then, by the
default memory policy, the reserved hugepages are divided between the nodes and
again the DPDK test app fails for the reason already mentioned. I found
out that the 'solution' fo
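One hedged guess at where that workaround was going, since the message is cut off: steer the hugepage reservation to a single node with --socket-mem so socket-0 allocations are not split across nodes.

#include <rte_eal.h>

int
main(int argc, char **argv)
{
	/* Reserve 1024 MB on NUMA node 0 and nothing on node 1. */
	char *eal_argv[] = { argv[0], "--socket-mem", "1024,0", NULL };

	(void)argc;
	if (rte_eal_init(3, eal_argv) < 0)
		return -1;

	/* ... run the test application ... */
	return 0;
}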
On Wed, Jan 7, 2015 at 11:59 PM, Stephen Hemminger
<stephen at networkplumber.org> wrote:
> On Wed, 7 Jan 2015 14:03:29 +0100
> David Marchand wrote:
>
> > +	buf = strdup(devargs_str);
> > +	if (buf == NULL) {
> > +		RTE_LOG(ERR, EAL, "cannot allocate temp memory for devarg
Hi,
> -----Original Message-----
> From: Ananyev, Konstantin
> Sent: Wednesday, January 7, 2015 8:07 PM
> To: Liu, Jijiang; 'Olivier MATZ'
> Cc: dev at dpdk.org
> Subject: RE: [dpdk-dev] [PATCH v3 0/3] enhance TX checksum command and
> csum forwarding engine
>
>
>
> > -----Original Message-----
> -----Original Message-----
> From: Wu, Jingjing
> Sent: Thursday, January 8, 2015 10:44 AM
> To: Zhang, Helin; dev at dpdk.org
> Cc: Chen, Jing D; Liu, Jijiang; Cao, Waterman; Lu, Patrick; Rowden, Aaron F
> Subject: RE: [PATCH v3] i40e: workaround for X710 performance issues
>
>
>
> > -O
> -----Original Message-----
> From: Zhang, Helin
> Sent: Tuesday, December 16, 2014 4:23 PM
> To: dev at dpdk.org
> Cc: Chen, Jing D; Wu, Jingjing; Liu, Jijiang; Cao, Waterman; Lu, Patrick;
> Rowden, Aaron F; Zhang, Helin
> Subject: [PATCH v3] i40e: workaround for X710 performance issues
>
> On