Or Gerlitz <gerlitz...@gmail.com> wrote on Sat [2016-Dec-10 05:46:13 -0800]:
> On Fri, Dec 9, 2016 at 12:42 AM, Vatsavayi, Raghu
> <raghu.vatsav...@cavium.com> wrote:
> >> From: Or Gerlitz [mailto:gerlitz...@gmail.com]
> >> On Thu, Dec 8, 2016 at 11:00 PM, Raghu Vatsavayi
> >> <rvatsav...@caviumnetworks.com> wrote:
> 
> >>> Adds VF vxlan offload support.
> 
> >> What's the use case for that? A VM running a VTEP, doesn't that part need to
> >> run on the host?
> 
> > Our HW can support offloads for a VF, which is required if we load it on
> > the hypervisor.
> 
> 
> +       nctrl.ncmd.u64 = 0;
> +       nctrl.ncmd.s.cmd = command;
> +       nctrl.ncmd.s.more = vxlan_cmd_bit;
> +       nctrl.ncmd.s.param1 = vxlan_port;
> +       nctrl.iq_no = lio->linfo.txpciq[0].s.q_no;
> +       nctrl.wait_time = 100;
> +       nctrl.netpndev = (u64)netdev;
> +       nctrl.cb_fn = liquidio_link_ctrl_cmd_completion;
> +
> +       ret = octnet_send_nic_ctrl_pkt(lio->oct_dev, &nctrl);
> 
> 1. What happens if more than one VF runs this code, each with a different
> port? Who wins? Is the result well defined?

There is neither a race nor contention; all VFs "win" (each gets what it
asks for) because the VxLAN UDP port can be set on a per-VF basis.  So in
the case above, for VFs running in the host (not in VMs), each VF interface
is a VTEP with its own distinct UDP port for VxLAN.
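
For illustration, below is a minimal sketch of how each VF netdev could wire
the quoted snippet into its UDP-tunnel ndo.  The helper name
liquidio_vxlan_port_command(), the OCTNET_CMD_* values, and the port byte
order are my assumptions based on the quoted code, not necessarily the exact
symbols in the patch:

#include <linux/netdevice.h>
#include <net/udp_tunnel.h>

/* Sketch only: each VF netdev gets its own .ndo_udp_tunnel_add callback,
 * so every VF issues its own control command and programs its own VxLAN
 * UDP port.
 */
static void liquidio_vf_add_vxlan_port(struct net_device *netdev,
				       struct udp_tunnel_info *ti)
{
	if (ti->type != UDP_TUNNEL_TYPE_VXLAN)
		return;

	/* Assumed helper that fills in and sends the nctrl command quoted
	 * above; the byte order the firmware expects is also an assumption.
	 */
	liquidio_vxlan_port_command(netdev,
				    OCTNET_CMD_VXLAN_PORT_CONFIG,
				    ntohs(ti->port),
				    OCTNET_CMD_VXLAN_PORT_ADD);
}

Each VF netdev would register this as its own callback, so the port
configuration happens independently per VF interface.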

> 2. Does octnet_send_nic_ctrl_pkt() go to sleep? That is disallowed here.

No, it does not go to sleep.
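
To make that concrete, here is a rough sketch of the fire-and-forget shape
implied by the quoted snippet: the command is posted with a completion
callback and the function returns without blocking, with
liquidio_link_ctrl_cmd_completion() running later when the firmware
responds.  This is not the exact patch code; it assumes the driver's
internal headers and types (struct lio, struct octnic_ctrl_pkt, GET_LIO),
and the error handling is illustrative only:

/* Sketch only -- assumes the liquidio internal headers are available. */
static int lio_vxlan_port_cmd_sketch(struct net_device *netdev,
				     u16 command, u16 vxlan_port,
				     u8 vxlan_cmd_bit)
{
	struct lio *lio = GET_LIO(netdev);
	struct octnic_ctrl_pkt nctrl;
	int ret;

	memset(&nctrl, 0, sizeof(nctrl));

	nctrl.ncmd.u64 = 0;
	nctrl.ncmd.s.cmd = command;
	nctrl.ncmd.s.more = vxlan_cmd_bit;
	nctrl.ncmd.s.param1 = vxlan_port;
	nctrl.iq_no = lio->linfo.txpciq[0].s.q_no;
	nctrl.wait_time = 100;		/* response timeout, handled async */
	nctrl.netpndev = (u64)netdev;
	nctrl.cb_fn = liquidio_link_ctrl_cmd_completion;	/* fires later */

	/* Enqueues the control packet and returns immediately; only the
	 * submit status is checked here, so nothing needs to sleep.
	 */
	ret = octnet_send_nic_ctrl_pkt(lio->oct_dev, &nctrl);
	if (ret < 0)
		netif_info(lio, probe, netdev,
			   "VxLAN port cmd submit failed (%d)\n", ret);

	return ret;
}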

Felix
