On Mon, 2021-10-18 at 09:46 +0300, Andrew Rybchenko wrote:
> On 10/15/21 1:54 PM, Xueming(Steven) Li wrote:
> > On Fri, 2021-10-15 at 12:28 +0300, Andrew Rybchenko wrote:
> > > On 10/12/21 5:39 PM, Xueming Li wrote:
> > > > In the current DPDK framework, each Rx queue is pre-loaded with mbufs
> > > > to save incoming packets. For some PMDs, when the number of
> > > > representors in a switch domain scales out, the memory consumption
> > > > becomes significant. Polling all ports also leads to high cache misses,
> > > > high latency and low throughput.
> > > > 
> > > > This patch introduces shared Rx queues. Ports in the same Rx domain and
> > > > switch domain can share an Rx queue set by specifying a non-zero sharing
> > > > group in the Rx queue configuration.
> > > > 
> > > > No special API is defined to receive packets from a shared Rx queue.
> > > > Polling any member port of a shared Rx queue receives packets of that
> > > > queue for all member ports; the source port is identified by mbuf->port.
> > > > 
> > > > A shared Rx queue must be polled from the same thread or core; polling
> > > > the queue ID of any member port is essentially the same.
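For illustration, a minimal polling sketch based on the description
above; it only uses the existing rte_eth_rx_burst()/mbuf->port path,
and the port ID, queue ID and dispatch logic are placeholders:

#include <rte_ethdev.h>
#include <rte_mbuf.h>

#define BURST_SIZE 32

/* Poll one member port of a shared Rx queue; a single burst returns
 * packets belonging to all member ports of that shared queue.
 */
static void
poll_shared_rxq(uint16_t any_member_port, uint16_t queue_id)
{
	struct rte_mbuf *pkts[BURST_SIZE];
	uint16_t nb, i;

	nb = rte_eth_rx_burst(any_member_port, queue_id, pkts, BURST_SIZE);
	for (i = 0; i < nb; i++) {
		/* The originating member port is carried in mbuf->port. */
		uint16_t src_port = pkts[i]->port;

		/* ... dispatch per src_port ... */
		(void)src_port;
		rte_pktmbuf_free(pkts[i]);
	}
}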
> > > > 
> > > > Multiple share groups are supported by non-zero share group ID. The device
> > > 
> > > "by non-zero share group ID" is not required, since it must
> > > always be non-zero to enable sharing.
> > > 
> > > > should support mixed configurations by allowing multiple share
> > > > groups and non-shared Rx queues.
> > > > 
> > > > Even when an Rx queue is shared, queue configuration like offloads and
> > > > RSS should not be impacted.
> > > 
> > > I don't understand the above sentence.
> > > Even when Rx queues are shared, queue configuration like
> > > offloads and RSS may differ. If a PMD has some limitation,
> > > it should take care of consistency itself. These limitations
> > > should be documented in the PMD documentation.
> > > 
> > 
> > OK, I'll remove this line.
> > 
> > > > 
> > > > Example grouping and polling model to reflect service priority:
> > > >  Group1, 2 shared Rx queues per port: PF, rep0, rep1
> > > >  Group2, 1 shared Rx queue per port: rep2, rep3, ... rep127
> > > >  Core0: poll PF queue0
> > > >  Core1: poll PF queue1
> > > >  Core2: poll rep2 queue0
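A configuration sketch matching the grouping above (illustrative only:
it assumes a share_group field in struct rte_eth_rxconf as proposed by
this series, the exact field name may differ; pf_port, rep_port[] and
the descriptor counts are hypothetical):

#include <string.h>
#include <rte_ethdev.h>

static int
setup_shared_groups(uint16_t pf_port, const uint16_t *rep_port,
		    uint16_t nb_reps, struct rte_mempool *mp)
{
	struct rte_eth_rxconf rxconf;
	uint16_t q, r;
	int ret;

	memset(&rxconf, 0, sizeof(rxconf));

	/* Group 1: PF, rep0 and rep1 each set up shared RxQ#0 and RxQ#1. */
	rxconf.share_group = 1;	/* proposed field; name may change */
	for (q = 0; q < 2; q++) {
		ret = rte_eth_rx_queue_setup(pf_port, q, 1024,
				rte_eth_dev_socket_id(pf_port), &rxconf, mp);
		if (ret != 0)
			return ret;
		for (r = 0; r < 2; r++) {
			ret = rte_eth_rx_queue_setup(rep_port[r], q, 1024,
					rte_eth_dev_socket_id(rep_port[r]),
					&rxconf, mp);
			if (ret != 0)
				return ret;
		}
	}

	/* Group 2: rep2 .. repN each set up a single shared RxQ#0. */
	rxconf.share_group = 2;
	for (r = 2; r < nb_reps; r++) {
		ret = rte_eth_rx_queue_setup(rep_port[r], 0, 1024,
				rte_eth_dev_socket_id(rep_port[r]),
				&rxconf, mp);
		if (ret != 0)
			return ret;
	}
	return 0;
}

Core0/Core1 would then poll (pf_port, 0) and (pf_port, 1), and Core2
would poll (rep_port[2], 0), as in the example.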
> > > 
> > > 
> > > Can I have:
> > > PF RxQ#0, RxQ#1
> > > Rep0 RxQ#0 shared with PF RxQ#0
> > > Rep1 RxQ#0 shared with PF RxQ#1
> > > 
> > > I guess not, since it looks like the RxQ ID must be equal.
> > > Or am I missing something? Otherwise the grouping rules
> > > are not obvious to me. Maybe we need a dedicated
> > > shared_qid within the boundaries of the share_group?
> > 
> > Yes, the RxQ ID must be equal; the following configuration should work:
> >   Rep1 RxQ#1 shared with PF RxQ#1
> 
> But I want just one RxQ on Rep1. I don't need two.
> 
> > Equal mapping should work by default instead of a new field that must
> > be set. I'll add some description to emphasize this, what do you think?
> 
> Sorry for the delay in replying. I think that the above limitation is
> not nice. It is better to avoid it.

Okay, it will offer more flexibility. I will add it in the next version.
The user has to be aware of the indirect mapping relation; in the above
example, polling either of the following rx bursts the same shared RxQ:
  PF RxQ#1
  Rep1 RxQ#0 shared with PF RxQ#1
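
A sketch of how that indirect mapping could be expressed, assuming a
per-group queue index field (called shared_qid here, following the
suggestion above; it is not part of the current series and the actual
field name and semantics are up to the next version):

#include <string.h>
#include <rte_ethdev.h>

/* Rep1's only RxQ (queue 0) joins the shared queue that the PF exposes
 * as its RxQ#1, so polling either (pf_port, 1) or (rep1_port, 0) rx
 * bursts the same shared RxQ. Field names are tentative.
 */
static int
map_rep1_rxq0_to_pf_rxq1(uint16_t pf_port, uint16_t rep1_port,
			 struct rte_mempool *mp)
{
	struct rte_eth_rxconf rxconf;
	int ret;

	memset(&rxconf, 0, sizeof(rxconf));
	rxconf.share_group = 1;
	rxconf.shared_qid = 1;	/* shared queue index within group 1 */

	/* PF RxQ#1 creates/joins shared queue #1 of group 1. */
	ret = rte_eth_rx_queue_setup(pf_port, 1, 1024,
			rte_eth_dev_socket_id(pf_port), &rxconf, mp);
	if (ret != 0)
		return ret;

	/* Rep1 RxQ#0 joins the same shared queue #1 of group 1. */
	return rte_eth_rx_queue_setup(rep1_port, 0, 1024,
			rte_eth_dev_socket_id(rep1_port), &rxconf, mp);
}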

> 
> [snip]
