> > > > > > > Currently only one qp will be used for one core. The number of
> > > > > > > qps can be increased to match the number of lcore params.
> > > > > >
> > > > > > I don't really understand the purpose of that patch....
> > > > > > As I understand, all it does is unconditionally increase the number
> > > > > > of crypto-queues mapped to the same lcore.
> > > > >
> > > > > [Anoob] With the current code, we will have only one crypto qp mapped
> > > > > to one lcore. So even if you have a large number of crypto qps, you
> > > > > would only be using as many as the number of lcores.
> > > > >
> > > > > This is an inefficient model, as a fat flow (say with large packet
> > > > > sizes) on one eth queue hitting one core can starve another flow
> > > > > which happens to hit the same core, because both flows would get
> > > > > queued to the same qp.
> > > > > And we cannot just randomly submit to multiple qps from the same
> > > > > core, as then the ordering would be messed up.
> > > >
> > > > No one is suggesting that, I believe.
> > > >
> > > > > So the best possible usage model would be to map one eth queue to
> > > > > one crypto qp. That way, the core wouldn't unnecessarily pipeline
> > > > > the crypto processing.
> > > >
> > > > I doubt it is really the 'best possible usage model'.
> > > > There could be cases when forcing an lcore to use/manage more
> > > > crypto-queues will lead to negative effects: perf drop, not enough
> > > > crypto queues for all eth queues, etc.
> > > > If your HW/SW requires each eth queue to be matched with a separate
> > > > crypto-queue, then I think it should be an optional feature, while
> > > > keeping the default behavior intact.
> > >
> > > [Anoob] I think the question here is more to do with s/w crypto PMDs
> > > vs h/w crypto PMDs. For s/w PMDs, having more queues doesn't really
> > > make sense, and for h/w PMDs it's better.
> >
> > Not always.
> > If these queues belong to the same device, sometimes it is faster to use
> > just one queue per device per core.
> > HW descriptor status polling, etc. is not free.
> >
> > > I'll see how we can make this an optional feature. Would you be okay
> > > with allowing such behavior if the underlying PMD can support as many
> > > queues as lcore_params?
> > >
> > > As in, if the PMD doesn't support enough queues, we do 1 qp per core.
> > > Would that work for you?
> >
> > I am not fond of the idea of changing the default mapping method silently.
> > My preference would be a new command-line option (--cdev-mapping or so).
> > Another thought: make it more fine-grained and user-controlled by
> > extending the eth-port-queue-lcore mapping option,
> > from the current <port>,<queue>,<lcore>
> > to <port>,<queue>,<lcore>,<cdev-queue>.
> >
> > So let's say with 2 cores and 2 eth ports / 2 queues per port, for the
> > current mapping the user would do:
> > # use cdev queue 0 on all cdevs for lcore 6
> > # use cdev queue 1 on all cdevs for lcore 7
> > --lcores="6,7" ... -- --config="(0,0,6,0),(1,0,6,0),(0,1,7,1),(1,1,7,1)"
> >
> > For the mapping you'd like to have:
> > --lcores="6,7" ... -- --config="(0,0,6,0),(1,0,6,1),(0,1,7,2),(1,1,7,3)"
>
> [Anoob] I like this idea. This would work for the inline case as well.
> @Akhil, do you have any comments?
>
> Also, I think we should make it <port>,<queue>,<lcore>,<cdev_id>,<cdev-queue>

Looks good to me, but I believe this would need more changes and testing in
the event patches. It also does not have any changes for the lookaside cases.
Can we move this to the next release and add the lookaside case as well in a single go? We need to close RC2 on 5th Feb, and I don't want to push this series to RC3, as it is a massive change in ipsec-secgw.
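
Just to capture the proposed format concretely for the rework, below is a very
rough sketch (plain C, no DPDK dependencies) of parsing one extended
<port>,<queue>,<lcore>,<cdev_id>,<cdev-queue> tuple. The struct layout, the
field names and the parse_tuple() helper are placeholder names for
illustration only, not the existing ipsec-secgw --config parsing code.

/*
 * Illustrative sketch only: parse one "(port,queue,lcore,cdev_id,cdev_qp)"
 * tuple of the proposed extended --config format into a local struct.
 * All names below are placeholders, not taken from ipsec-secgw.
 */
#include <stdio.h>
#include <stdint.h>
#include <stdlib.h>
#include <string.h>

struct lcore_params_ext {
	uint16_t port_id;
	uint8_t queue_id;
	uint8_t lcore_id;
	uint8_t cdev_id;   /* proposed extra field: crypto device id */
	uint8_t cdev_qp;   /* proposed extra field: crypto dev queue pair */
};

/* Parse one "(p,q,l,d,c)" tuple; return 0 on success, -1 on error. */
static int
parse_tuple(const char *s, struct lcore_params_ext *lp)
{
	unsigned long v[5];
	char buf[64];
	char *tok, *sp, *end;
	int i = 0;

	/* strip the surrounding parentheses */
	if (sscanf(s, " (%63[^)])", buf) != 1)
		return -1;

	for (tok = strtok_r(buf, ",", &sp); tok != NULL && i < 5;
	     tok = strtok_r(NULL, ",", &sp), i++) {
		v[i] = strtoul(tok, &end, 0);
		if (*end != '\0')
			return -1;
	}
	if (i != 5 || tok != NULL)
		return -1;

	lp->port_id = (uint16_t)v[0];
	lp->queue_id = (uint8_t)v[1];
	lp->lcore_id = (uint8_t)v[2];
	lp->cdev_id = (uint8_t)v[3];
	lp->cdev_qp = (uint8_t)v[4];
	return 0;
}

int
main(void)
{
	/* hypothetical tuple: port 0, queue 0, lcore 6, cdev 0, cdev qp 1 */
	struct lcore_params_ext lp;

	if (parse_tuple("(0,0,6,0,1)", &lp) == 0)
		printf("port %u queue %u lcore %u cdev %u cdev_qp %u\n",
		       lp.port_id, lp.queue_id, lp.lcore_id,
		       lp.cdev_id, lp.cdev_qp);
	return 0;
}

The point of the extended tuple is that each (eth port, eth queue) pair gets
pinned to an explicit crypto dev/queue pair instead of the current implicit
one-qp-per-lcore mapping.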
-Akhil