[dpdk-dev] simplest way to prioritize one of 128 Tx queues

2016-06-24 Thread Lavanya Jose
Hi everyone,

I'm trying to implement a scheme that dedicates a high priority Tx
queue for low-latency control traffic and up to 100 other rate-limited
Tx queues for lower priority data traffic. There are only two
priorities involved here. What would be the simplest setting on the Tx
side?

Do I need to use the DCB or VT modes, or can I simply go with DCB off,
VT off, but set the RTTDT1C register (the credits the arbiter refills
for a queue after each round?) for the high priority queue to a large
number of credits relative to the other queues?

I want to be fair across all 100 lower priority queues, and I was
worried that putting them in different traffic classes would prioritize
one queue over another.

Let me know if you have any thoughts, I'm just getting started with DPDK.

Thank you,
Lavanya


[dpdk-dev] Configuring NIC Tx arbiters in VMDQ off, DCB off mode

2016-07-23 Thread Lavanya Jose
Hi everyone,

I found a snippet of code from a userspace driver that lets you configure
weights for the hardware NIC Tx queues by writing the RTTDT1C register on
the Intel 82599.
It looks like this is typically used for rate limiting VM traffic, with the
NIC in VMDQ mode. I was wondering whether I can do this without setting up
the NIC in VMDQ or DCB mode?

I am directly enqueuing CustomProto/UDP/IP/Ethernet packets from a single
process (two threads) onto specific hardware queues (without an
intermediate DPDK QoS/hierarchical scheduler layer) and updating the
hardware queue rates, so priority and hardware rate limiting are the only
features I need (not VMDQ or DCB). Can I go about just changing the weights
as in the code above with VMDQ and DCB turned off?

Some initial experiments I did suggest that setting just the RTTDT1C
register for queues doesn't make any difference to the relative throughput
of the queues.

Thanks,
Lavanya


[dpdk-dev] rte_eth_rx bug? duplicate message bufs

2016-08-08 Thread Lavanya Jose
Hi,

I was wondering if anyone on this list has come across this problem of
rte_eth_rx_burst returning the same mbuf contents multiple times especially
during congestion. I notice this problem after some number of calls to
rte_eth_rx_burst when I set the nb_pkts argument to anything more than 5. I
did confirm that the contents (random payloads) in the duplicate packets
are identical.

I looked at the corresponding ixgbe driver code that gets packets from the
rx ring:
https://github.com/emmericp/dpdk/blob/e5b112e4c7a4d63f3131294e9611e4a892b75008/drivers/net/ixgbe/ixgbe_rxtx.c#L1595
It looks like the driver doesn't drop packets if an mbuf allocation fails;
I'm not sure whether this is the root cause of the bug I'm seeing.

I'm also curious about whether I need to set rx_descs and tx_descs to 40
when I'm setting up the Intel 82599 device. The datasheet says there are 40
descriptors per Tx queue, though the default values I've seen in code are
much larger.

Thanks,
Lavanya


[dpdk-dev] rte_eth_rx bug? duplicate message bufs

2016-08-11 Thread Lavanya Jose
Hi Avinash,

For me it turned out to be a bug in my duplicate detection code rather than
in DPDK. The sequence numbers were wrapping around, and I didn't have enough
random bits in the payload either.

- Lavanya

On Thu, Aug 11, 2016 at 4:53 PM Yeddula, Avinash  wrote:

> Hi All,
> I do have a similar issue, any response to the below email might help me
> as well.
>
> Thanks
> -Avinash
>
> -Original Message-
> From: dev [mailto:dev-bounces at dpdk.org] On Behalf Of Lavanya Jose
> Sent: Monday, August 08, 2016 11:44 AM
> To: users at dpdk.org; dev at dpdk.org
> Subject: [dpdk-dev] rte_eth_rx bug? duplicate message bufs