Re: [vpp-dev] 128 byte cache line support

2019-03-20 Thread Lijian Zhang
Adding Nitin.
From: vpp-dev@lists.fd.io  On Behalf Of Damjan Marion via 
Lists.Fd.Io
Sent: Monday, March 18, 2019 7:18 PM
To: Honnappa Nagarahalli 
Cc: vpp-dev@lists.fd.io
Subject: Re: [vpp-dev] 128 byte cache line support




On 15 Mar 2019, at 04:52, Honnappa Nagarahalli
<honnappa.nagaraha...@arm.com> wrote:



Related to change 18278[1], I was wondering if there is really a benefit to
dealing with 128-byte cachelines the way we do today.
Compiling VPP with the cacheline size set to 128 will basically just add 64
bytes of unused space at the end of each cacheline, so vlib_buffer_t, for
example, will grow from 128 bytes to 256 bytes, but we will still need to
prefetch 2 cachelines like we do by default.
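(For illustration: a minimal sketch of why the struct grows, assuming
vppinfra-style per-cacheline alignment marks. The macro below is a
simplified stand-in, not the actual header.)

typedef unsigned char u8;

#define CLIB_CACHE_LINE_BYTES 128	/* set at build time; 64 by default */
#define CLIB_CACHE_LINE_ALIGN_MARK(mark) \
  u8 mark[0] __attribute__ ((aligned (CLIB_CACHE_LINE_BYTES)))

typedef struct
{
  CLIB_CACHE_LINE_ALIGN_MARK (cacheline0);
  u8 metadata[64];		/* ~64B of fields actually used */
  CLIB_CACHE_LINE_ALIGN_MARK (cacheline1);
  u8 opaque[64];		/* second group of fields */
} example_buffer_t;

/* With 64B lines: sizeof (example_buffer_t) == 128.
 * With 128B lines: each mark rounds its group up to 128B, so the
 * struct grows to 256 with 64 bytes of pad after each group. */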

What will happen if we just leave it at 64?
[Honnappa] Currently, ThunderX1 and Octeon TX have a 128B cache line. What I
have heard from Marvell folks is that the 64B cache line setting in DPDK does
not work. I have not gone into the details of what exactly does not work.
Maybe Nitin can elaborate.
I’m curious to hear details…


1. sometimes (though not very frequently) we will issue 2 prefetch
instructions for the same cacheline, but I hope the hardware is smart enough
to just ignore the 2nd one

2. we may face false sharing issues if the first 64 bytes are touched by one
thread and the other 64 bytes are touched by another

The second one sounds to me like a real problem, but it can be solved by
aligning all per-thread data structures to 2 x the cacheline size.
[Honnappa] Sorry, I don’t understand you here. Even if the data structure is 
aligned on 128B (2 X 64B), 2 contiguous blocks of 64B data would be on a single 
cache line.
I wanted to say that we can align all per-thread data structures to 128
bytes, even on systems which have a 64-byte cacheline size.
Actually, if I remember correctly, even on x86 some of the hardware
prefetchers deal with blocks of 2 cachelines.

So unless I missed something, my proposal here is: instead of maintaining
special 128-byte images for some ARM64 machines, let’s just align all
per-thread data structures to 128 and have just one ARM image.
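
For illustration, a minimal sketch of that 2 x cacheline alignment, using a
made-up per-thread counter struct (none of these names are actual VPP
symbols):

#include <stdint.h>

#define MAX_CACHE_LINE_BYTES 128	/* worst case across targets */

typedef struct
{
  uint64_t rx_packets;	/* written only by the owning thread */
  uint64_t rx_bytes;
  /* pad to a full 128B so adjacent slots never share a line */
  uint8_t pad[MAX_CACHE_LINE_BYTES - 2 * sizeof (uint64_t)];
} __attribute__ ((aligned (MAX_CACHE_LINE_BYTES))) per_thread_counters_t;

/* One slot per worker: slot N and slot N+1 can never fall on the same
 * cache line, whether the hardware line is 64B or 128B. */
per_thread_counters_t counters[8];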
[Honnappa] When we run VPP compiled with 128B cache line size on platforms with 
64B cache line size, there is a performance degradation.
Yeah, sure, what I’m suggesting here is how to address that perf degradation.


Hence the proposal is to make sure the distro packages run on all platforms. 
But one can get the best performance when compiled for a particular target.
Thoughts?

--
Damjan


[1] https://gerrit.fd.io/r/#/c/18278/


Re: [vpp-dev] vpp build 19.01.1 IPSec crash

2019-03-20 Thread Jan Gelety via Lists.Fd.Io
Hello Kingwell,

Thanks for the info.

We triggered three runs of the csit-vpp-perf-verify-1901-3n-hsw job with vpp
build 19.01.1-8~g50a392f~b56 (builds 81, 82, 83) - all tests passed.

Regards,
Jan

From: vpp-dev@lists.fd.io  On Behalf Of Kingwel Xie
Sent: Tuesday, March 19, 2019 2:04 AM
To: Jan Gelety -X (jgelety - PANTHEON TECHNOLOGIES at Cisco) 
; Dave Barach (dbarach) ; 
yuwei1.zh...@intel.com
Cc: vpp-dev@lists.fd.io
Subject: Re: [vpp-dev] vpp build 19.01.1 IPSec crash

Well, I cannot open the xz file. It is always 32B…

Anyway, patch 17889 should always be included if you want to use IPSec
cryptodev.

From: vpp-dev@lists.fd.io <vpp-dev@lists.fd.io> On Behalf Of Jan Gelety via
Lists.Fd.Io
Sent: Tuesday, March 19, 2019 0:25
To: Dave Barach (dbarach) <dbar...@cisco.com>; Kingwel Xie
<kingwel@ericsson.com>; yuwei1.zh...@intel.com
Cc: vpp-dev@lists.fd.io
Subject: Re: [vpp-dev] vpp build 19.01.1 IPSec crash

+ Simon Zhang and Kingwel Xie

Hello Dave,

I apologize for not clearly stating that we used the 19.01.1-release_amd64
xenial vpp build for testing.

Anyway, I created Jira ticket VPP-1597 to cover this issue.

@Kingwel, Simon - could you check whether your commits [4] and [5] to the
19.01 vpp branch could be the fix for the reported issue?

Thanks,
Jan

[4] 
https://gerrit.fd.io/r/gitweb?p=vpp.git;a=commit;h=50a392f5a0981fb442449864c479511c54145a29
[5] 
https://gerrit.fd.io/r/gitweb?p=vpp.git;a=commit;h=3e59d2eb20e73e0216c12964159c0a708482c548

From: Dave Barach (dbarach) <dbar...@cisco.com>
Sent: Monday, March 18, 2019 2:56 PM
To: Jan Gelety -X (jgelety - PANTHEON TECHNOLOGIES at Cisco)
<jgel...@cisco.com>; vpp-dev@lists.fd.io
Subject: RE: vpp build 19.01.1 IPSec crash

Do these tarballs include the .deb packages which correspond to the coredumps?

Thanks... Dave

From: vpp-dev@lists.fd.io <vpp-dev@lists.fd.io> On Behalf Of Jan Gelety via
Lists.Fd.Io
Sent: Monday, March 18, 2019 9:45 AM
To: vpp-dev@lists.fd.io
Cc: vpp-dev@lists.fd.io
Subject: [vpp-dev] vpp build 19.01.1 IPSec crash

Hello,

During csit tests with vpp build 19.01.1 we found out that the first, and
sometimes also the second, IPSec scale test fails because of a VPP crash -
see the csit log [0].

The collected core-dump files are available here [1] and here [2].

Could you please let us know if this issue is already fixed by some later
commit to the vpp stable/1901 branch [3]?

Thanks,
Jan

PS: Instructions to extract the coredump file:
$ unxz 155255350764.tar.lzo.lrz.xz
$ lrunzip 155255350764.tar.lzo.lrz
$ lzop -d 155255350764.tar.lzo
$ tar xf 155255350764.tar

[0] 
https://logs.fd.io/production/vex-yul-rot-jenkins-1/csit-vpp-perf-verify-1901-3n-hsw/80/archives/log.html.gz#s1-s1-s1-s1-s1-t1
[1] 
https://logs.fd.io/production/vex-yul-rot-jenkins-1/csit-vpp-perf-verify-1901-3n-hsw/80/archives/155255350764.tar.lzo.lrz.xz
[2] 
https://logs.fd.io/production/vex-yul-rot-jenkins-1/csit-vpp-perf-verify-1901-3n-hsw/80/archives/155255351965.tar.lzo.lrz.xz
[3] 
https://gerrit.fd.io/r/gitweb?p=vpp.git;a=shortlog;h=refs%2Fheads%2Fstable%2F1901



Re: [vpp-dev] Status of PPPoE Plugin

2019-03-20 Thread Raj
I commented out the offending parts and now PPPoE is working fine.

diff --git a/src/vnet/ethernet/node.c b/src/vnet/ethernet/node.c
index 53d5b4eb0..7c5f673e0 100755
--- a/src/vnet/ethernet/node.c
+++ b/src/vnet/ethernet/node.c
@@ -429,14 +429,14 @@ ethernet_input_inline (vlib_main_t * vm,
         }
       else
         {
-          if (!ethernet_address_cast (e0->dst_address) &&
+          /*if (!ethernet_address_cast (e0->dst_address) &&
               (hi->hw_address != 0) &&
               !eth_mac_equal ((u8 *) e0, hi->hw_address))
             error0 = ETHERNET_ERROR_L3_MAC_MISMATCH;
           if (!ethernet_address_cast (e1->dst_address) &&
               (hi->hw_address != 0) &&
               !eth_mac_equal ((u8 *) e1, hi->hw_address))
-            error1 = ETHERNET_ERROR_L3_MAC_MISMATCH;
+            error1 = ETHERNET_ERROR_L3_MAC_MISMATCH;*/
           vlib_buffer_advance (b0, sizeof (ethernet_header_t));
           determine_next_node (em, variant, 0, type0, b0,
                                &error0, &next0);
@@ -653,10 +653,10 @@ ethernet_input_inline (vlib_main_t * vm,
         }
       else
         {
-          if (!ethernet_address_cast (e0->dst_address) &&
+          /*if (!ethernet_address_cast (e0->dst_address) &&
               (hi->hw_address != 0) &&
               !eth_mac_equal ((u8 *) e0, hi->hw_address))
-            error0 = ETHERNET_ERROR_L3_MAC_MISMATCH;
+            error0 = ETHERNET_ERROR_L3_MAC_MISMATCH;*/
           vlib_buffer_advance (b0, sizeof (ethernet_header_t));
           determine_next_node (em, variant, 0, type0, b0,
                                &error0, &next0);

I am sure this is not a patch that will be accepted upstream, and I am not
sure how to fix this _correctly_ so that it will be. If someone can give me
some pointers, I can submit a patch to get PPPoE working correctly again in
VPP.


Thanks and Regards,

Raj

On Tue, Mar 19, 2019 at 1:09 PM Raj via Lists.Fd.Io
 wrote:
>
> Another approach that can be used is to send all PPP/PPPoE control
> packets encapsulated in NSH; the control plane can process them and
> return the packets to VPP, which will decapsulate the NSH headers and
> pass the packets on to the PPPoE client.
>
> Thanks and Regards,
>
> Raj
>
> On Mon, Mar 18, 2019 at 7:44 PM Raj via Lists.Fd.Io
>  wrote:
> >
> > On Thu, Dec 20, 2018 at 6:17 AM Ni, Hongjun  wrote:
> >
> > > For the issue you mentioned, Alp Arslan in the To list will submit a 
> > > patch to fix it.
> >
> > I can take a stab at this and prepare a patch to get PPPoE working
> > again in VPP. I understand that this error happens because VPP will
> > only allow packets whose destination MAC is the same as the interface
> > MAC or a bcast/mcast MAC. How should I go about fixing this? Would
> > turning off this check for the TAP interface work? Any better way to
> > achieve this result?
> >
> > Thanks and Regards,
> >
> > Raj


Re: [vpp-dev] Status of PPPoE Plugin

2019-03-20 Thread Ni, Hongjun
Hi Raj,

I think you could check the Ethernet type: if it is a PPPoE packet, then the
MAC check will be skipped.
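
Roughly along these lines, as a sketch only: the constants are the standard
PPPoE ethertypes, the surrounding names come from the node.c snippet quoted
below, and the exact integration point is left for the patch to work out.

/* 0x8863 = PPPoE Discovery, 0x8864 = PPPoE Session */
static inline int
ethertype_is_pppoe (unsigned short type)
{
  return type == 0x8863 || type == 0x8864;
}

/* Then, in ethernet_input_inline (), guard the existing check so
 * PPPoE frames bypass the L3 MAC mismatch test: */
if (!ethertype_is_pppoe (type0)
    && !ethernet_address_cast (e0->dst_address)
    && (hi->hw_address != 0)
    && !eth_mac_equal ((u8 *) e0, hi->hw_address))
  error0 = ETHERNET_ERROR_L3_MAC_MISMATCH;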

Thanks,
Hongjun

-Original Message-
From: vpp-dev@lists.fd.io [mailto:vpp-dev@lists.fd.io] On Behalf Of Raj
Sent: Wednesday, March 20, 2019 9:48 PM
To: vpp-dev 
Cc: alp.ars...@xflowresearch.com; Ni, Hongjun 
Subject: Re: [vpp-dev] Status of PPPoE Plugin

I commented out the offending parts and now PPPoE is working fine.

diff --git a/src/vnet/ethernet/node.c b/src/vnet/ethernet/node.c
index 53d5b4eb0..7c5f673e0 100755
--- a/src/vnet/ethernet/node.c
+++ b/src/vnet/ethernet/node.c
@@ -429,14 +429,14 @@ ethernet_input_inline (vlib_main_t * vm,
         }
       else
         {
-          if (!ethernet_address_cast (e0->dst_address) &&
+          /*if (!ethernet_address_cast (e0->dst_address) &&
               (hi->hw_address != 0) &&
               !eth_mac_equal ((u8 *) e0, hi->hw_address))
             error0 = ETHERNET_ERROR_L3_MAC_MISMATCH;
           if (!ethernet_address_cast (e1->dst_address) &&
               (hi->hw_address != 0) &&
               !eth_mac_equal ((u8 *) e1, hi->hw_address))
-            error1 = ETHERNET_ERROR_L3_MAC_MISMATCH;
+            error1 = ETHERNET_ERROR_L3_MAC_MISMATCH;*/
           vlib_buffer_advance (b0, sizeof (ethernet_header_t));
           determine_next_node (em, variant, 0, type0, b0,
                                &error0, &next0);
@@ -653,10 +653,10 @@ ethernet_input_inline (vlib_main_t * vm,
         }
       else
         {
-          if (!ethernet_address_cast (e0->dst_address) &&
+          /*if (!ethernet_address_cast (e0->dst_address) &&
               (hi->hw_address != 0) &&
               !eth_mac_equal ((u8 *) e0, hi->hw_address))
-            error0 = ETHERNET_ERROR_L3_MAC_MISMATCH;
+            error0 = ETHERNET_ERROR_L3_MAC_MISMATCH;*/
           vlib_buffer_advance (b0, sizeof (ethernet_header_t));
           determine_next_node (em, variant, 0, type0, b0,
                                &error0, &next0);

I am sure this is not a patch that will be accepted upstream, and I am not
sure how to fix this _correctly_ so that it will be. If someone can give me
some pointers, I can submit a patch to get PPPoE working correctly again in
VPP.


Thanks and Regards,

Raj

On Tue, Mar 19, 2019 at 1:09 PM Raj via Lists.Fd.Io 
 wrote:
>
> Another approach that can be used is to send all PPP/PPPoE control
> packets encapsulated in NSH; the control plane can process them and
> return the packets to VPP, which will decapsulate the NSH headers and
> pass the packets on to the PPPoE client.
>
> Thanks and Regards,
>
> Raj
>
> On Mon, Mar 18, 2019 at 7:44 PM Raj via Lists.Fd.Io 
>  wrote:
> >
> > On Thu, Dec 20, 2018 at 6:17 AM Ni, Hongjun  wrote:
> >
> > > For the issue you mentioned, Alp Arslan in the To list will submit a 
> > > patch to fix it.
> >
> > I can take a stab at this and prepare a patch to get PPPoE working
> > again in VPP. I understand that this error happens because VPP will
> > only allow packets whose destination MAC is the same as the
> > interface MAC or a bcast/mcast MAC. How should I go about fixing this?
> > Would turning off this check for the TAP interface work? Any better
> > way to achieve this result?
> >
> > Thanks and Regards,
> >
> > Raj


Re: [EXT] Re: [vpp-dev] 128 byte cache line support

2019-03-20 Thread Nitin Saxena
Hi,


First of all, sorry for responding late to this mail chain. Please see my
answers inline, marked [Nitin].


Thanks,

Nitin



From: Damjan Marion 
Sent: Monday, March 18, 2019 4:48 PM
To: Honnappa Nagarahalli
Cc: vpp-dev; Nitin Saxena
Subject: [EXT] Re: [vpp-dev] 128 byte cache line support




On 15 Mar 2019, at 04:52, Honnappa Nagarahalli
<honnappa.nagaraha...@arm.com> wrote:



Related to change 18278[1], I was wondering if there is really a benefit to
dealing with 128-byte cachelines the way we do today.
Compiling VPP with the cacheline size set to 128 will basically just add 64
bytes of unused space at the end of each cacheline, so vlib_buffer_t, for
example, will grow from 128 bytes to 256 bytes, but we will still need to
prefetch 2 cachelines like we do by default.

[Nitin]: This is the existing model. In the forwarding case, mainly the
first vlib cache line is used. We are utilising the existing hole (in the
first vlib cache line) by putting the packet parsing info there (size ==
64B). This has many benefits; one of them is avoiding the
ipv4-input-no-chksum() software checks. It gives us a ~20 cycle benefit on
our platform, so I do not want to lose that gain.
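
(Schematically, the layout idea is something like the following; the fields
are illustrative only, not the actual vlib_buffer_t definition.)

#include <stdint.h>

/* With a 128B cache line, the first "line" of the buffer header has
 * ~64 unused bytes after the standard fields; a platform can stash
 * hardware parse results there so nodes such as ipv4 input can skip
 * the corresponding software checks. */
typedef struct
{
  /* first 64B: standard buffer metadata (stand-in fields) */
  uint32_t current_data;
  uint32_t current_length;
  uint8_t std_fields[56];

  /* remaining 64B of the same 128B line: HW parse results */
  uint16_t l3_offset;
  uint16_t l4_offset;
  uint8_t ip4_checksum_ok;	/* lets ipv4 input skip checksum work */
  uint8_t parse_flags;
  uint8_t reserved[58];
} __attribute__ ((aligned (128))) buffer_first_line_t;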

What will happen if we just leave it at 64?

[Nitin]: This will create L1D holes on 128B targets, right? Unutilised holes
are not acceptable, as they waste L1D space and thereby affect performance.
On the contrary, we want to pack structures from 2 x 64B into 1 x 128B cache
lines to reduce the number of pending prefetches in the core pipeline. VPP
heavily issues 64B LOAD/STORE prefetches, and our effort is to reduce them
for our target.

[Honnappa] Currently, ThunderX1 and Octeon TX have a 128B cache line. What I
have heard from Marvell folks is that the 64B cache line setting in DPDK does
not work. I have not gone into the details of what exactly does not work.
Maybe Nitin can elaborate.

I’m curious to hear details…

1. sometimes (though not very frequently) we will issue 2 prefetch
instructions for the same cacheline, but I hope the hardware is smart enough
to just ignore the 2nd one

2. we may face false sharing issues if the first 64 bytes are touched by one
thread and the other 64 bytes are touched by another

The second one sounds to me like a real problem, but it can be solved by
aligning all per-thread data structures to 2 x the cacheline size.

[Honnappa] Sorry, I don’t understand you here. Even if the data structure is 
aligned on 128B (2 X 64B), 2 contiguous blocks of 64B data would be on a single 
cache line.

I wanted to say that we can align all per-thread data structures to 128
bytes, even on systems which have a 64-byte cacheline size.

Actually, if I remember correctly, even on x86 some of the hardware
prefetchers deal with blocks of 2 cachelines.

So unless I missed something, my proposal here is: instead of maintaining
special 128-byte images for some ARM64 machines, let’s just align all
per-thread data structures to 128 and have just one ARM image.

[Honnappa] When we run VPP compiled with 128B cache line size on platforms with 
64B cache line size, there is a performance degradation.

Yeah, sure, what I’m suggesting here is how to address that perf degradation.
[Nitin]: Is this proposal for Intel as well? If yes, then I am fine with the
proposal, but I think it will decrease performance on 64B architectures with
the existing code.

Hence the proposal is to make sure the distro packages run on all platforms. 
But one can get the best performance when compiled for a particular target.

Thoughts?

--
Damjan


[1] https://gerrit.fd.io/r/#/c/18278/
