Re: [dpdk-dev] [PATCH v3] net/iavf: fix virtual channel return value error

2020-01-04 Thread Ye Xiaolong
On 12/24, Zhang, Qi Z wrote:
>
>
>> -Original Message-
>> From: Cao, Yahui 
>> Sent: Tuesday, December 24, 2019 12:13 PM
>> To: Wu, Jingjing ; Lu, Wenzhuo
>> 
>> Cc: dev@dpdk.org; sta...@dpdk.org; Zhang, Qi Z ; Cao,
>> Yahui ; Ye, Xiaolong ; Su,
>> Simei 
>> Subject: [PATCH v3] net/iavf: fix virtual channel return value error
>> 
>> In iavf_handle_virtchnl_msg(), it is not appropriate for _clear_cmd() to be
>> used as a notification to the foreground thread. So introduce _notify_cmd()
>> to fix this error. In addition, since _notify_cmd() contains rte_wmb(),
>> rte_compiler_barrier() is not necessary.
>> 
>> A msg is sent from VF to PF mainly by calling iavf_execute_vf_cmd(); the
>> whole virtchnl msg flow is as follows.
>> 
>> iavf_execute_vf_cmd() calls iavf_aq_send_msg_to_pf() to send the msg and
>> then polls the cmd done flag with "if (vf->pend_cmd ==
>> VIRTCHNL_OP_UNKNOWN)".
>> 
>> When the reply msg is returned by the PF, iavf_handle_virtchnl_msg() in the
>> ISR reads the msg return value via "vf->cmd_retval = msg_ret" and
>> immediately sets the cmd done flag by calling _clear_cmd() to notify
>> iavf_execute_vf_cmd().
>> 
>> iavf_execute_vf_cmd() finds the cmd done flag set and then checks whether
>> the command return value vf->cmd_retval is success or not.
>> 
>> However, _clear_cmd() also resets vf->cmd_retval to success, overwriting
>> the actual return value which is used for diagnosis. So
>> iavf_execute_vf_cmd() will always find vf->cmd_retval is success and
>> always return success.
>> 
>> Fixes: 22b123a36d07 ("net/avf: initialize PMD")
>> Cc: sta...@dpdk.org
>> 
>> Signed-off-by: Yahui Cao 
>
>Acked-by: Qi Zhang 
>

Acked-by: Xiaolong Ye 

Applied to dpdk-next-net-intel, Thanks.
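
For reference, a minimal sketch of the notify/poll handshake the commit message
describes. The names _notify_cmd(), pend_cmd and cmd_retval follow the text
above; the surrounding struct and helper are simplified assumptions, not the
actual iavf code:

#include <stdint.h>
#include <rte_atomic.h>
#include <rte_pause.h>

#define VIRTCHNL_OP_UNKNOWN 0

/* Simplified stand-in for the relevant fields of the VF info struct. */
struct vf_cmd_state {
	volatile uint32_t pend_cmd;  /* VIRTCHNL_OP_UNKNOWN == no cmd pending */
	volatile int cmd_retval;     /* return value reported by the PF */
};

/* ISR side: store the PF's return value first, then publish "cmd done".
 * The write barrier keeps cmd_retval visible before pend_cmd is cleared,
 * so the polling thread never reads a stale value. The old _clear_cmd()
 * instead reset cmd_retval to success here, losing the real result.
 */
static inline void
_notify_cmd(struct vf_cmd_state *vf, int msg_ret)
{
	vf->cmd_retval = msg_ret;
	rte_wmb();
	vf->pend_cmd = VIRTCHNL_OP_UNKNOWN;
}

/* Foreground side (iavf_execute_vf_cmd() in the driver): poll for
 * completion, then inspect the preserved return value.
 */
static inline int
wait_cmd_done(struct vf_cmd_state *vf)
{
	while (vf->pend_cmd != VIRTCHNL_OP_UNKNOWN)
		rte_pause();
	return vf->cmd_retval;
}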


Re: [dpdk-dev] [PATCH 12/14] examples/ipsec-secgw: add driver outbound worker

2020-01-04 Thread Anoob Joseph
Hi Konstantin,

Please see inline.

Thanks,
Anoob

> -Original Message-
> From: Ananyev, Konstantin 
> Sent: Monday, December 23, 2019 10:58 PM
> To: Anoob Joseph ; Akhil Goyal
> ; Nicolau, Radu ; Thomas
> Monjalon 
> Cc: Ankur Dwivedi ; Jerin Jacob Kollanukkaran
> ; Narayana Prasad Raju Athreya
> ; Archana Muniganti ;
> Tejasree Kondoj ; Vamsi Krishna Attunuru
> ; Lukas Bartosik ;
> dev@dpdk.org
> Subject: [EXT] RE: [PATCH 12/14] examples/ipsec-secgw: add driver
> outbound worker
> 
> > This patch adds the driver outbound worker thread for ipsec-secgw.
> > In this mode the security session is fixed and SA update is not
> > done.
> >
> > Signed-off-by: Ankur Dwivedi 
> > Signed-off-by: Anoob Joseph 
> > Signed-off-by: Lukasz Bartosik 
> > ---
> >  examples/ipsec-secgw/ipsec-secgw.c  | 12 +
> >  examples/ipsec-secgw/ipsec.c|  9 
> >  examples/ipsec-secgw/ipsec_worker.c | 90
> > -
> >  3 files changed, 110 insertions(+), 1 deletion(-)
> >
> > diff --git a/examples/ipsec-secgw/ipsec-secgw.c b/examples/ipsec-secgw/ipsec-secgw.c
> > index 2e7d4d8..76719f2 100644
> > --- a/examples/ipsec-secgw/ipsec-secgw.c
> > +++ b/examples/ipsec-secgw/ipsec-secgw.c
> > @@ -2011,6 +2011,18 @@ cryptodevs_init(void)
> > i++;
> > }
> >
> > +   /*
> > +* Set the queue pair to at least the number of ethernet
> > +* devices for inline outbound.
> > +*/
> > +   qp = RTE_MAX(rte_eth_dev_count_avail(), qp);
> 
> 
> Not sure what this is for.
> Why can't we process packets from several eth devs on the same crypto-dev
> queue?

[Anoob] This is because of a limitation in our hardware. In our hardware, it is
the crypto queue pair which submits to the ethernet queue for Tx. But in the
DPDK spec, the security processing is done by the ethernet PMD Tx routine
alone. We manage to do this by sharing the crypto queue internally. The crypto
queues initialized during crypto_configure() get mapped to the various ethernet
ports. Because of this, we need at least as many crypto queues as eth ports.

The above change is required because here we limit the number of crypto qps
based on the number of cores etc. So when run on a single core, the qps get
limited to 1, which causes session_create() to fail for all ports other than
the first one.
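
To make that concrete, a small standalone sketch of the same clamping, with the
reasoning in comments; qp_from_cores and max_nb_qps are assumed stand-ins for
the values the application actually computes:

#include <stdint.h>
#include <rte_common.h>
#include <rte_ethdev.h>

/* E.g. with 1 lcore the core-based estimate gives qp_from_cores = 1, but
 * with 4 eth ports the inline outbound path needs 4 crypto qps, one
 * backing each port's security session.
 */
static uint16_t
pick_nb_qps(uint16_t qp_from_cores, uint16_t max_nb_qps)
{
	uint16_t qp = qp_from_cores;

	/* never fewer qps than available ethernet devices */
	qp = RTE_MAX(rte_eth_dev_count_avail(), qp);
	/* but never more than the cryptodev supports */
	qp = RTE_MIN(qp, max_nb_qps);

	return qp;
}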

> 
> > +
> > +   /*
> > +* The requested number of queues should never exceed
> > +* the max available
> > +*/
> > +   qp = RTE_MIN(qp, max_nb_qps);
> > +
> > if (qp == 0)
> > continue;
> >
> > diff --git a/examples/ipsec-secgw/ipsec.c b/examples/ipsec-secgw/ipsec.c
> > index e529f68..9ff8a63 100644
> > --- a/examples/ipsec-secgw/ipsec.c
> > +++ b/examples/ipsec-secgw/ipsec.c
> > @@ -141,6 +141,10 @@ create_lookaside_session(struct ipsec_ctx *ipsec_ctx, struct ipsec_sa *sa,
> > return 0;
> >  }
> >
> > +uint16_t sa_no;
> > +#define MAX_FIXED_SESSIONS 10
> > +struct rte_security_session *sec_session_fixed[MAX_FIXED_SESSIONS];
> > +
> >  int
> >  create_inline_session(struct socket_ctx *skt_ctx, struct ipsec_sa *sa,
> > struct rte_ipsec_session *ips)
> > @@ -401,6 +405,11 @@ create_inline_session(struct socket_ctx *skt_ctx, struct ipsec_sa *sa,
> >
> > ips->security.ol_flags = sec_cap->ol_flags;
> > ips->security.ctx = sec_ctx;
> > > +   if (sa_no < MAX_FIXED_SESSIONS) {
> > > +           sec_session_fixed[sa_no] =
> > > +                   ipsec_get_primary_session(sa)->security.ses;
> > > +           sa_no++;
> > > +   }
> > }
> 
> Totally lost on the purpose of these changes...
> Why are the first 10 inline-proto sessions special and why do they need to be
> saved inside a global array (sec_session_fixed)?
> Why is this array later referenced by eth port_id in ipsec_worker.c?
> What would happen if the number of inline-proto sessions is less than the
> number of eth ports?

[Anoob] This is required for the outbound driver mode. The 'driver mode' is
more like the 'single_sa' mode of the existing application. The idea is to skip
all the lookups etc. done in s/w and perform the ipsec processing fully in h/w.
In outbound, the following is roughly what we do for driver mode:

pkt = rx_burst();

/* set_pkt_metadata() */
pkt->udata64 = session;

tx_burst(pkt);

The session is created on the eth ports. And so, if we have a single SA, the
entire traffic would have to be forwarded on the same port. The above change is
to make sure we can send traffic on all ports.
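
To expand that a little, a hedged sketch of the per-port driver-mode outbound
fast path (rte_security_set_pkt_metadata() is the standard way to attach the
session; sec_session_fixed[] indexed by port follows the description above and
is an assumption about the final shape of the code, not the current patch):

#include <rte_ethdev.h>
#include <rte_mbuf.h>
#include <rte_security.h>

/* one pre-created inline session per eth port, as described above */
extern struct rte_security_session *sec_session_fixed[];

static void
drv_outbound_burst(uint16_t port_id, uint16_t queue_id)
{
	struct rte_mbuf *pkts[32];
	struct rte_security_ctx *ctx = rte_eth_dev_get_sec_ctx(port_id);
	struct rte_security_session *sess = sec_session_fixed[port_id];
	uint16_t nb_rx, nb_tx, i;

	nb_rx = rte_eth_rx_burst(port_id, queue_id, pkts, 32);

	for (i = 0; i < nb_rx; i++) {
		/* attach the fixed session; no SP/SA lookup in s/w */
		rte_security_set_pkt_metadata(ctx, sess, pkts[i], NULL);
		pkts[i]->ol_flags |= PKT_TX_SEC_OFFLOAD;
	}

	/* the ipsec processing itself happens in the PMD Tx path */
	nb_tx = rte_eth_tx_burst(port_id, queue_id, pkts, nb_rx);

	/* drop whatever could not be transmitted */
	for (i = nb_tx; i < nb_rx; i++)
		rte_pktmbuf_free(pkts[i]);
}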

Currently we just use the first 10 SAs and save them in the array, so the user
has to set the conf properly and make sure the SAs are distributed accordingly.
Will update this to save the first parsed outbound SA for a port in the array.
That way the size of the array can be bounded by the number of eth ports.