Re: [vpp-dev] VPLS VPWS problem

2017-06-28 Thread Neale Ranns (nranns)

Hi Xyxue,

I plan to get it committed into master, and so released in the next cycle; 
17.10.

Regards,
neale

From: 薛欣颖 
Date: Wednesday, 28 June 2017 at 02:06
To: "Neale Ranns (nranns)" , vpp-dev 
Subject: Re: Re: VPLS VPWS problem

Hi Neale,

Regarding https://gerrit.fd.io/r/#/c/6861/ - what is your plan for it going forward?

Thanks,
Xyxue




From: xyxue
Date: 2017-06-26 19:32
To: nranns; vpp-dev
Subject: Re: Re: VPLS VPWS problem
Hi Neale,

I got the code from the master branch. It's a bit different from the patch you 
uploaded earlier.
Does the master branch code support VPWS and VPLS?

Thanks,
Xyxue



From: Neale Ranns (nranns)
Date: 2017-05-31 21:02
To: 薛欣颖; vpp-dev
Subject: Re: VPLS VPWS problem
Hi Xyxue,

Works for me with:
DBGvpp# sh ver
vpp v17.07-rc0~312-g2062ca5 built by vagrant on localhost at Wed May 31 
03:29:47 PDT 2017

configs and setup attached.

Regards,
neale

From: 薛欣颖 
Date: Wednesday, 31 May 2017 at 10:08
To: "Neale Ranns (nranns)" , vpp-dev 
Subject: VPLS VPWS problem

Hi Neale,

We tested VPLS and VPWS. The results: in VPLS, the flow was dropped in L2; in 
VPWS, we are unable to send and receive traffic continuously - sometimes there 
is no flow received or sent at all.


MPLS L2VPN VPLS
The configuration and trace are shown below:

PE1
create host-interface name eth0
create host-interface name eth1
set int state host-eth1 up
set int state host-eth0 up
set interface mac address host-eth0 00:03:7F:FF:FF:FF
set interface mac address host-eth1 00:03:7F:FF:FF:FE
set int ip address host-eth1 2.1.1.1/24
set interface mpls  host-eth1   enable
mpls tunnel l2-only via 2.1.1.2 host-eth1 out-label 34  out-label 33
set int state mpls-tunnel0 up
set interface l2 bridge  mpls-tunnel0 1
set interface l2 bridge host-eth0  1
mpls local-label add eos 1023 l2-input-on mpls-tunnel0
mpls local-label add non-eos 1024 mpls-lookup-in-table 0

PE2
create host-interface name eth0
create host-interface name eth1
set int state host-eth1 up
set int state host-eth0 up
set interface mac address host-eth0 00:50:43:00:02:02
set interface mac address host-eth1 0E:1A:0D:00:50:43
set int ip address host-eth1 2.1.1.2/24
set interface mpls  host-eth1   enable
mpls tunnel l2-only via 2.1.1.1 host-eth1 out-label 1024 out-label 1023
set int state mpls-tunnel0 up
set interface l2 bridge mpls-tunnel0 1
set interface l2 bridge host-eth0  1
mpls local-label add eos 33 l2-input-on mpls-tunnel0
mpls local-label add non-eos 34 mpls-lookup-in-table 0



Is the configuration correct?
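
A few state checks that can help confirm the wiring before sending traffic (a 
sketch only; the command names are from 17.07-era CLI, so verify locally):

DBGvpp# show bridge-domain 1 detail
DBGvpp# show mpls fib
DBGvpp# show l2fib verbose

show bridge-domain should list both mpls-tunnel0 and host-eth0 as members of 
bridge domain 1; show l2fib shows which interface each learned MAC resolves to.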

DBGvpp# show trace
--- Start of thread 0 vpp_main ---
Packet 1

00:04:36:289191: af-packet-input
  af_packet: hw_if_index 1 next-index 4
tpacket2_hdr:
  status 0x9 len 76 snaplen 76 mac 66 net 80
  sec 0x2736 nsec 0xb5544e1 vlan 0
00:04:36:291446: ethernet-input
  IP4: 00:50:43:00:02:02 -> 2c:53:4a:02:91:db
00:04:36:291525: l2-input
  l2-input: sw_if_index 1 dst 2c:53:4a:02:91:db src 00:50:43:00:02:02
00:04:36:291582: l2-learn
  l2-learn: sw_if_index 1 dst 2c:53:4a:02:91:db src 00:50:43:00:02:02 bd_index 1
00:04:36:291643: l2-fwd
  l2-fwd:   sw_if_index 1 dst 2c:53:4a:02:91:db src 00:50:43:00:02:02 bd_index 1
00:04:36:291673: error-drop
  l2-fwd: Reflection Drop
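
A "Reflection Drop" in l2-fwd typically means the destination MAC resolved to 
the same interface the frame arrived on, so the bridge refuses to forward it 
back out. One way to confirm (command name per 17.07-era CLI) is to check where 
the MAC was learned:

DBGvpp# show l2fib verbose

If 2c:53:4a:02:91:db is listed against host-eth0 rather than mpls-tunnel0, the 
stale L2FIB entry, not the tunnel, is what triggers the drop.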



VPWS
The configuration and trace are shown below:
VPP1
create host-interface name eth0
create host-interface name eth1
set int state host-eth1 up
set int state host-eth0 up
set interface mac address host-eth0 00:03:7F:FF:FF:FF
set interface mac address host-eth1 00:03:7F:FF:FF:FE
set int ip address host-eth1 2.1.1.1/24
set interface mpls  host-eth1   enable
mpls tunnel l2-only via 2.1.1.2 host-eth1 out-label 34  out-label 33
set int state mpls-tunnel0 up
set interface l2 xconnect   host-eth0  mpls-tunnel0
set interface l2 xconnect mpls-tunnel0 host-eth0
mpls local-label add eos 1023 l2-input-on mpls-tunnel0
mpls local-label add non-eos 1024 mpls-lookup-in-table 0

VPP2
create host-interface name eth0
create host-interface name eth1
set int state host-eth1 up
set int state host-eth0 up
set interface mac address host-eth0 00:50:43:00:02:02
set interface mac address host-eth1 0E:1A:0D:00:50:43
set int ip address host-eth1 2.1.1.2/24
set interface mpls  host-eth1   enable
mpls tunnel l2-only via 2.1.1.1 host-eth1 out-label 1024 out-label 1023
set int state mpls-tunnel0 up
set interface l2 xconnect  host-eth0  mpls-tunnel0
set interface l2 xconnect mpls-tunnel0 host-eth0
mpls local-label add eos 33 l2-input-on mpls-tunnel0
mpls local-label add non-eos 34 mpls-lookup-in-table 0
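
For the xconnect case, the configured cross-connects can be checked with (name 
per 17.07-era CLI; verify locally):

DBGvpp# show mode

which should list host-eth0 and mpls-tunnel0 in l2 xconnect mode, each pointing 
at the other.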

show trace
VPP1
Packet 3

00:03:05:985952: af-packet-input
  af_packet: hw_if_index 1 next-index 4
tpacket2_hdr:
  status 0x1 len 124 snaplen 124 mac 66 n

[vpp-dev] fatal error: rte_config.h: No such file or directory

2017-06-28 Thread Samuel S
I need to include dpdk.h from plugins/dpdk/device/, but when I include this 
header the compiler gives this error:
fatal error: rte_config.h: No such file or directory
 #include <rte_config.h>

How can I fix this problem?
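
One way out (a sketch, not a tested recipe - both -I paths below are 
placeholders for your tree): dpdk.h pulls in DPDK's own headers, so the DPDK 
include directory must be on the compiler's search path, e.g.

gcc -c my_code.c \
  -I/path/to/vpp/src \
  -I/path/to/vpp/build-root/install-vpp-native/dpdk/include/dpdk

Point the second -I at the directory that actually contains rte_config.h.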

Re: [vpp-dev] [csit-dev] [FD.io Helpdesk #41921] connection interruptiones between jenkins executor and VIRL servers

2017-06-28 Thread Maciek Konstantynowicz via RT
Anton and Team,

The continued interruptions of IP connectivity to/from VIRL server simulations 
on the management subnet have been impacting both CSIT and VPP project 
operations. We decided to temporarily remove the VPP VIRL based verify jobs, 
job/vpp-csit-verify-virl-master/, from both the per-vpp-patch auto-trigger and 
the voting rights - Ed W. was kind enough to prepare the required ci-mgmt 
patches, but they are not merged yet (https://gerrit.fd.io/r/#/c/7319/, 
https://gerrit.fd.io/r/#/c/7320/).

Before we proceed with the above step, we want to do one more set of network 
infra focused tests, per yesterday's exchange on #fdio-infra irc with 
Vanessa/valderrv, Ed Kern/snergster and Mohammed/mnaser. Here is a quick recap:

Connectivity is affected between the following mgmt subnets, added a few weeks 
back as part of [FD.io Helpdesk #40733]:
10.30.52.0/24
10.30.53.0/24
10.30.54.0/24

The high packet drop rate (50..70%) occurs sporadically, but only when packets 
pass through the default gateway router that has the .1 address in each of the 
above subnets. It affects all connectivity to the jenkins slaves, and also 
connectivity between the tb4 virl hosts. The problem is never observed when 
packets are sent directly between the hosts; that works fine.

Test proposal:

Configure the router that acts as default gateway for these subnets with the 
following static routes:
10.30.52.0/24 at 10.30.51.28 // tb-4virl1 mgmt addr
10.30.53.0/24 at 10.30.51.29 // tb-4virl1 mgmt addr
10.30.54.0/24 at 10.30.51.30 // tb-4virl1 mgmt addr
Meaning all packets to the above subnets will be routed through the main 
management IP address on the respective tb4-virl host, per wiki [1].
This will remove the default gateway router from the problem domain under 
investigation.

Remove the following IP addresses from the default gateway router:
10.30.52.1/24
10.30.53.1/24
10.30.54.1/24

Continue to advertise the routes below into the WAN to ensure reachability from 
the Jenkins slaves and LF FD.io infra:
10.30.52.0/24
10.30.53.0/24
10.30.54.0/24
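
For step 1, a sketch of the statics on a Linux-based gateway (the actual router 
platform is not stated in this thread, so these commands are an assumption; 
adapt to the platform in use):

ip route add 10.30.52.0/24 via 10.30.51.28
ip route add 10.30.53.0/24 via 10.30.51.29
ip route add 10.30.54.0/24 via 10.30.51.30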

Could you please advise when these can be conducted?

-Maciek

[1] 
https://wiki.fd.io/view/CSIT/CSIT_LF_testbed#Management_VLAN_IP_Addresses_allocation

> On 21 Jun 2017, at 16:00, Jan Gelety -X (jgelety - PANTHEON TECHNOLOGIES at 
> Cisco)  wrote:
> 
> Hello Anton,
> 
> We did some checks and here are the results:
> 
> 1. ping simulated node from the host itself - ping is OK
> 
> 2. ping simulated node from other host (i.e. node simulated on virl2, 
> executing ping command on virl3) - discovered packet loss (see e-mail from 
> Peter below)
> - even for successful ping packets we see a wide range of round-trip times 
> - from approximately 0.6 ms to 45 ms...
> 
> We are still investigating VIRL settings but do you have some hints for us?
> 
> Thanks,
> Jan
> 
> -Original Message-
> From: Peter Mikus -X (pmikus - PANTHEON TECHNOLOGIES at Cisco) 
> Sent: Wednesday, June 21, 2017 15:20
> To: Jan Gelety -X (jgelety - PANTHEON TECHNOLOGIES at Cisco) 
> 
> Subject: RE: [vpp-dev] [FD.io Helpdesk #41921] connection interruptiones 
> between jenkins executor and VIRL servers
> 
> virl@t4-virl3:/home/testuser$ ping 10.30.51.127
> PING 10.30.51.127 (10.30.51.127) 56(84) bytes of data.
> 64 bytes from 10.30.51.127: icmp_seq=54 ttl=64 time=1.86 ms
> ...
> ^C
> --- 10.30.51.127 ping statistics ---
> 1202 packets transmitted, 193 received, 83% packet loss, time 1202345ms
> rtt min/avg/max/mdev = 0.369/0.736/3.271/0.509 ms
> virl@t4-virl3:/home/testuser$ ping 10.30.51.29
> 
> Peter Mikus
> Engineer - Software
> Cisco Systems Limited
> 
> -Original Message-
> From: vpp-dev-boun...@lists.fd.io [mailto:vpp-dev-boun...@lists.fd.io] On 
> Behalf Of Jan Gelety -X via RT
> Sent: Tuesday, June 20, 2017 5:20 PM
> Cc: csit-...@lists.fd.io; vpp-dev@lists.fd.io
> Subject: Re: [vpp-dev] [FD.io Helpdesk #41921] connection interruptiones 
> between jenkins executor and VIRL servers
> 
> Hello Anton,
> 
> Thanks for the fast response. We will check local firewall setting as you 
> proposed.
> 
> Regards,
> Jan
> 
> -Original Message-
> From: Anton Baranov via RT [mailto:fdio-helpd...@rt.linuxfoundation.org]
> Sent: Tuesday, June 20, 2017 17:13
> To: Jan Gelety -X (jgelety - PANTHEON TECHNOLOGIES at Cisco) 
> 
> Cc: csit-...@lists.fd.io; vpp-dev@lists.fd.io
> Subject: [FD.io Helpdesk #41921] connection interruptiones between jenkins 
> executor and VIRL servers
> 
> Jan: 
> 
> This is what I got from the fdio jenkins server (I did the tests with 
> 10.30.{52,53}.2 hosts):
> 
> $ ip ro get 10.30.52.2
> 10.30.52.2 via 10.30.48.1 dev eth0  src 10.30.48.5
>cache
> 
> The traffic is going directly through the neutron router, so we don't block 
> any traffic on our firewall.
> 
> $ ping -q -c4 10.30.52.2
> PING 10.30.52.2 (10.30.52.2) 56(84) bytes of data.
> 
> --- 10.30.52.2 ping statistics ---
> 4 packets transmitted, 4 received, 0% packet loss, time 3001ms rtt 
> min/avg/max/mdev = 0.496/0.789/1.509/0.419 ms
> 
> I was able to reac

Re: [vpp-dev] Building on Fedora 24

2017-06-28 Thread Tomas Brännström
We targeted CentOS instead, and there the build and install work fine.

However, there's some other issue now. When I start the VPP service, it
looks like it immediately dies:

$ sudo service vpp start
Redirecting to /bin/systemctl start  vpp.service
$ sudo service vpp status
Redirecting to /bin/systemctl status  vpp.service
● vpp.service - Vector Packet Processing Process
   Loaded: loaded (/usr/lib/systemd/system/vpp.service; disabled; vendor
preset: disabled)
   Active: inactive (dead)

This error message is printed:
vpp[5859]: clib_socket_init: bind: No such file or directory

This is in a Virtualbox VM. Not sure how to proceed since usually a better
error message is printed when it fails (for example missing drivers and
whatnot).

/Tomas

On 27 June 2017 at 21:17, Klement Sekera -X (ksekera - PANTHEON
TECHNOLOGIES at Cisco)  wrote:

> Unfortunately, I'm no rpm packaging expert, somebody else will have to
> chime in...
>
> Thanks,
> Klement
>
> Quoting Tomas Brännström (2017-06-27 17:18:40)
> >I got the same error :-(
> >make bootstrap and build works, but building rpm packages fails...
> >/Tomas
> >On 27 June 2017 at 16:25, Klement Sekera -X (ksekera - PANTHEON
> >TECHNOLOGIES at Cisco) <[1]ksek...@cisco.com> wrote:
> >
> >  Hi Tomas,
> >
> >  could you please take a look at the main Makefile:
> >
> >  57 ifeq ($(OS_ID)-$(OS_VERSION_ID),fedora-25)
> >  58 RPM_DEPENDS += python2-virtualenv
> >  59 RPM_DEPENDS_GROUPS = 'C Development Tools and Libraries'
> >  60 else
> >  61 RPM_DEPENDS += python-virtualenv
> >  62 RPM_DEPENDS_GROUPS = 'Development Tools'
> >  63 endif
> >
> >  see how the fedora-25 is the version which uses python2-virtualenv
> >  while all others use python-virtualenv? Could you please change
> >  fedora-25
> >  to fedora-24 on line 57 and let us know if this smooths things out?
> >  Maybe the fix is a simple one-liner...
> >
> >  Thanks,
> >  Klement
> >
> >  Quoting Tomas Brännström (2017-06-27 16:08:31)
> >  >Hello
> >  >I'm having some troubles building VPP (latest master) from
> source
> >  on
> >  >Fedora 24.
> >  >At first when doing `make bootstrap' it complained about not
> >  finding
> >  >python-virtualenv. I could get around this by changing the
> >  >Makefile to look for "python2-virtualenv" which was the version
> >  that got
> >  >installed.
> >  >But when doing `make pkg-rpm' I get the following errors:
> >  >make[2]: Entering directory
> >  '/home/fedora/git/vpp/extras/rpm/vpp-17.10'
> >  >Please install missing RPMs: \npackage python-virtualenv is not
> >  >installed\n
> >  >by executing "make install-dep"\n
> >  >Makefile:175: recipe for target
> >  >
> >  '/home/fedora/git/vpp/extras/rpm/vpp-17.10/build-root/.
> bootstrap.ok'
> >  >failed
> >  >make[2]: ***
> >  >
> >  [/home/fedora/git/vpp/extras/rpm/vpp-17.10/build-root/.
> bootstrap.ok]
> >  Error
> >  >1
> >  >make[2]: Leaving directory
> >  '/home/fedora/git/vpp/extras/rpm/vpp-17.10'
> >  >error: Bad exit status from /var/tmp/rpm-tmp.qSFuzD (%build)
> >  >RPM build errors:
> >  >Macro %python2_minor_version defined but not used within
> scope
> >  >Bad exit status from /var/tmp/rpm-tmp.qSFuzD (%build)
> >  >Makefile:22: recipe for target 'all' failed
> >  >make[1]: *** [all] Error 1
> >  >make[1]: Leaving directory '/home/fedora/git/vpp/extras/rpm'
> >  >Makefile:397: recipe for target 'pkg-rpm' failed
> >  >make: *** [pkg-rpm] Error 2
> >  >I tried changing to python2-virtualenv in that Makefile as
> well but
> >  it
> >  >seems to change back into python-virtualenv, and besides, there
> >  seems to
> >  > be other problems here as well.
> >  >Is there a workaround for this or is  Fedora 24 simply not
> >  supported?
> >  >/Tomas
> >
> > References
> >
> >Visible links
> >1. mailto:ksek...@cisco.com
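
For reference, a sketch of the one-liner Klement suggests above, matching both 
Fedora releases at once (untested against the tree, so treat it as a starting 
point rather than a verified fix):

ifneq ($(filter fedora-24 fedora-25,$(OS_ID)-$(OS_VERSION_ID)),)
RPM_DEPENDS += python2-virtualenv
RPM_DEPENDS_GROUPS = 'C Development Tools and Libraries'
else
RPM_DEPENDS += python-virtualenv
RPM_DEPENDS_GROUPS = 'Development Tools'
endif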

Re: [vpp-dev] Building on Fedora 24

2017-06-28 Thread Tomas Brännström
Sorry for spamming, but using strace I noticed that vppctl tried to connect
to a socket file in */run/vpp*

This folder didn't exist, but when I created it, vpp could start
successfully. There's no such folder in my Ubuntu install, so is this
exclusive to RHEL based distros?

/Tomas
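
If creating /run/vpp by hand works, a sketch that makes it survive reboots on 
systemd-based distros (the file name and ownership here are assumptions, not 
something the vpp package ships):

# /etc/tmpfiles.d/vpp.conf
d /run/vpp 0755 root root -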

On 28 June 2017 at 13:21, Tomas Brännström 
wrote:

> We targeted CentOS instead, and there the build and install work fine.
>
> However, there's some other issue now. When I start the VPP service, it
> looks like it immediately dies:
>
> $ sudo service vpp start
> Redirecting to /bin/systemctl start  vpp.service
> $ sudo service vpp status
> Redirecting to /bin/systemctl status  vpp.service
> ● vpp.service - Vector Packet Processing Process
>Loaded: loaded (/usr/lib/systemd/system/vpp.service; disabled; vendor
> preset: disabled)
>Active: inactive (dead)
>
> This error message is printed:
> vpp[5859]: clib_socket_init: bind: No such file or directory
>
> This is in a Virtualbox VM. Not sure how to proceed since usually a better
> error message is printed when it fails (for example missing drivers and
> whatnot).
>
> /Tomas
>
> On 27 June 2017 at 21:17, Klement Sekera -X (ksekera - PANTHEON
> TECHNOLOGIES at Cisco)  wrote:
>
>> Unfortunately, I'm no rpm packaging expert, somebody else will have to
>> chime in...
>>
>> Thanks,
>> Klement
>>
>> Quoting Tomas Brännström (2017-06-27 17:18:40)
>> >I got the same error :-(
>> >make bootstrap and build works, but building rpm packages fails...
>> >/Tomas
>> >On 27 June 2017 at 16:25, Klement Sekera -X (ksekera - PANTHEON
>> >TECHNOLOGIES at Cisco) <[1]ksek...@cisco.com> wrote:
>> >
>> >  Hi Tomas,
>> >
>> >  could you please take a look at the main Makefile:
>> >
>> >  57 ifeq ($(OS_ID)-$(OS_VERSION_ID),fedora-25)
>> >  58 RPM_DEPENDS += python2-virtualenv
>> >  59 RPM_DEPENDS_GROUPS = 'C Development Tools and Libraries'
>> >  60 else
>> >  61 RPM_DEPENDS += python-virtualenv
>> >  62 RPM_DEPENDS_GROUPS = 'Development Tools'
>> >  63 endif
>> >
>> >  see how the fedora-25 is the version which uses python2-virtualenv
>> >  while all others use python-virtualenv? Could you please change
>> >  fedora-25
>> >  to fedora-24 on line 57 and let us know if this smooths things out?
>> >  Maybe the fix is a simple one-liner...
>> >
>> >  Thanks,
>> >  Klement
>> >
>> >  Quoting Tomas Brännström (2017-06-27 16:08:31)
>> >  >Hello
>> >  >I'm having some troubles building VPP (latest master) from
>> source
>> >  on
>> >  >Fedora 24.
>> >  >At first when doing `make bootstrap' it complained about not
>> >  finding
>> >  >python-virtualenv. I could get around this by changing the
>> >  >Makefile to look for "python2-virtualenv" which was the
>> version
>> >  that got
>> >  >installed.
>> >  >But when doing `make pkg-rpm' I get the following errors:
>> >  >make[2]: Entering directory
>> >  '/home/fedora/git/vpp/extras/rpm/vpp-17.10'
>> >  >Please install missing RPMs: \npackage python-virtualenv is
>> not
>> >  >installed\n
>> >  >by executing "make install-dep"\n
>> >  >Makefile:175: recipe for target
>> >  >
>> >  '/home/fedora/git/vpp/extras/rpm/vpp-17.10/build-root/.boots
>> trap.ok'
>> >  >failed
>> >  >make[2]: ***
>> >  >
>> >  [/home/fedora/git/vpp/extras/rpm/vpp-17.10/build-root/.boots
>> trap.ok]
>> >  Error
>> >  >1
>> >  >make[2]: Leaving directory
>> >  '/home/fedora/git/vpp/extras/rpm/vpp-17.10'
>> >  >error: Bad exit status from /var/tmp/rpm-tmp.qSFuzD (%build)
>> >  >RPM build errors:
>> >  >Macro %python2_minor_version defined but not used within
>> scope
>> >  >Bad exit status from /var/tmp/rpm-tmp.qSFuzD (%build)
>> >  >Makefile:22: recipe for target 'all' failed
>> >  >make[1]: *** [all] Error 1
>> >  >make[1]: Leaving directory '/home/fedora/git/vpp/extras/rpm'
>> >  >Makefile:397: recipe for target 'pkg-rpm' failed
>> >  >make: *** [pkg-rpm] Error 2
>> >  >I tried changing to python2-virtualenv in that Makefile as
>> well but
>> >  it
>> >  >seems to change back into python-virtualenv, and besides,
>> there
>> >  seems to
>> >  > be other problems here as well.
>> >  >Is there a workaround for this or is  Fedora 24 simply not
>> >  supported?
>> >  >/Tomas
>> >
>> > References
>> >
>> >Visible links
>> >1. mailto:ksek...@cisco.com
>
>
>

Re: [vpp-dev] [csit-dev] [FD.io Helpdesk #41921] connection interruptiones between jenkins executor and VIRL servers

2017-06-28 Thread Ed Kern (ejk) via RT

Hey maciek,

We don't need to, and would prefer NOT to, remove the .1 addresses on those 
virtual routers.

Just the addition of the static routes is what's needed right now.

thanks,

Ed



> On Jun 28, 2017, at 5:07 AM, Maciek Konstantynowicz (mkonstan) 
>  wrote:
> 
> Anton and Team,
> 
> The continued interruptions of IP connectivity to/from VIRL server 
> simulations on the management subnet have been impacting both CSIT and VPP 
> project operations. We decided to temporarily remove the VPP VIRL based 
> verify jobs, job/vpp-csit-verify-virl-master/, from both the per-vpp-patch 
> auto-trigger and the voting rights - Ed W. was kind enough to prepare the 
> required ci-mgmt patches, but they are not merged yet 
> (https://gerrit.fd.io/r/#/c/7319/, https://gerrit.fd.io/r/#/c/7320/).
> 
> Before we proceed with the above step, we want to do one more set of network 
> infra focused tests, per yesterday's exchange on #fdio-infra irc with 
> Vanessa/valderrv, Ed Kern/snergster and Mohammed/mnaser. Here is a quick recap:
> 
> Connectivity is affected between the following mgmt subnets, added a few 
> weeks back as part of [FD.io Helpdesk #40733]:
>10.30.52.0/24
>10.30.53.0/24
>10.30.54.0/24
> 
> The high packet drop rate (50..70%) occurs sporadically, but only when 
> packets pass through the default gateway router that has the .1 address in 
> each of the above subnets. It affects all connectivity to the jenkins slaves, 
> and also connectivity between the tb4 virl hosts. The problem is never 
> observed when packets are sent directly between the hosts; that works fine.
> 
> Test proposal:
> 
> Configure the router that acts as default gateway for these subnets with the 
> following static routes:
>10.30.52.0/24 at 10.30.51.28 // tb-4virl1 mgmt addr
>10.30.53.0/24 at 10.30.51.29 // tb-4virl1 mgmt addr
>10.30.54.0/24 at 10.30.51.30 // tb-4virl1 mgmt addr
>Meaning all packets to the above subnets will be routed through the main 
> management IP address on the respective tb4-virl host, per wiki [1].
>This will remove the default gateway router from the problem domain under 
> investigation.
> 

this is all well and good


> Remove the following IP addresses
> from the default gateway router:
>10.30.52.1/24
>10.30.53.1/24
>10.30.54.1/24
> 

Not sure how this got in there… we want to keep these right where they are 
unless there is some reason to remove them.


> Continue to advertise the routes below into the WAN to ensure reachability 
> from the Jenkins slaves and LF FD.io infra:
>10.30.52.0/24
>10.30.53.0/24
>10.30.54.0/24
> 


this is also correct...

> Could you please advise when these can be conducted?
> 
> -Maciek
> 
> [1] 
> https://wiki.fd.io/view/CSIT/CSIT_LF_testbed#Management_VLAN_IP_Addresses_allocation
> 
>> On 21 Jun 2017, at 16:00, Jan Gelety -X (jgelety - PANTHEON TECHNOLOGIES at 
>> Cisco)  wrote:
>> 
>> Hello Anton,
>> 
>> We did some checks and here are the results:
>> 
>> 1. ping simulated node from the host itself - ping is OK
>> 
>> 2. ping simulated node from other host (i.e. node simulated on virl2, 
>> executing ping command on virl3) - discovered packet loss (see e-mail from 
>> Peter below)
>> - even for successful ping packets we see a wide range of round-trip times 
>> - from approximately 0.6 ms to 45 ms...
>> 
>> We are still investigating VIRL settings but do you have some hints for us?
>> 
>> Thanks,
>> Jan
>> 
>> -Original Message-
>> From: Peter Mikus -X (pmikus - PANTHEON TECHNOLOGIES at Cisco) 
>> Sent: Wednesday, June 21, 2017 15:20
>> To: Jan Gelety -X (jgelety - PANTHEON TECHNOLOGIES at Cisco) 
>> 
>> Subject: RE: [vpp-dev] [FD.io Helpdesk #41921] connection interruptiones 
>> between jenkins executor and VIRL servers
>> 
>> virl@t4-virl3:/home/testuser$ ping 10.30.51.127
>> PING 10.30.51.127 (10.30.51.127) 56(84) bytes of data.
>> 64 bytes from 10.30.51.127: icmp_seq=54 ttl=64 time=1.86 ms
>> ...
>> ^C
>> --- 10.30.51.127 ping statistics ---
>> 1202 packets transmitted, 193 received, 83% packet loss, time 1202345ms
>> rtt min/avg/max/mdev = 0.369/0.736/3.271/0.509 ms
>> virl@t4-virl3:/home/testuser$ ping 10.30.51.29
>> 
>> Peter Mikus
>> Engineer - Software
>> Cisco Systems Limited
>> 
>> -Original Message-
>> From: vpp-dev-boun...@lists.fd.io [mailto:vpp-dev-boun...@lists.fd.io] On 
>> Behalf Of Jan Gelety -X via RT
>> Sent: Tuesday, June 20, 2017 5:20 PM
>> Cc: csit-...@lists.fd.io; vpp-dev@lists.fd.io
>> Subject: Re: [vpp-dev] [FD.io Helpdesk #41921] connection interruptiones 
>> between jenkins executor and VIRL servers
>> 
>> Hello Anton,
>> 
>> Thanks for the fast response. We will check local firewall setting as you 
>> proposed.
>> 
>> Regards,
>> Jan
>> 
>> -Original Message-
>> From: Anton Baranov via RT [mailto:fdio-helpd...@rt.linuxfoundation.org]
>> Sent: Tuesday, June 20, 2017 17:13
>> To: Jan Gelety -X (jgelety - PANTHEON TECHNOLOGIES at Cisco) 
>> 
>> Cc: csit-...@lists.fd.io; vpp-dev@lists.fd.io
>> Subject: [FD.io Helpdesk #41921] connection interruptiones between jenkins 
>

Re: [vpp-dev] Building on Fedora 24

2017-06-28 Thread Luke, Chris
No, it's an artifact of a recent merge that enables the console socket as a 
unix-domain socket by default. I will propose a patch to remedy this shortly.
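
Until then, the socket path can be overridden in startup.conf; a minimal sketch 
(the path is an example only - point it at a directory that already exists):

unix {
  cli-listen /tmp/vpp-cli.sock
}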

From: vpp-dev-boun...@lists.fd.io [mailto:vpp-dev-boun...@lists.fd.io] On 
Behalf Of Tomas Brännström
Sent: Wednesday, June 28, 2017 8:19
To: Klement Sekera -X (ksekera - PANTHEON TECHNOLOGIES at Cisco) 
; Burt Silverman 
Cc: vpp-dev 
Subject: Re: [vpp-dev] Building on Fedora 24

Sorry for spamming, but using strace I noticed that vppctl tried to connect to 
a socket file in /run/vpp

This folder didn't exist, but when I created it, vpp could start successfully. 
There's no such folder in my Ubuntu install, so is this exclusive to RHEL based 
distros?

/Tomas

On 28 June 2017 at 13:21, Tomas Brännström 
mailto:tomas.a.brannst...@tieto.com>> wrote:
We targeted CentOS instead, and there the build and install work fine.

However, there's some other issue now. When I start the VPP service, it looks 
like it immediately dies:

$ sudo service vpp start
Redirecting to /bin/systemctl start  vpp.service
$ sudo service vpp status
Redirecting to /bin/systemctl status  vpp.service
● vpp.service - Vector Packet Processing Process
   Loaded: loaded (/usr/lib/systemd/system/vpp.service; disabled; vendor 
preset: disabled)
   Active: inactive (dead)

This error message is printed:
vpp[5859]: clib_socket_init: bind: No such file or directory

This is in a Virtualbox VM. Not sure how to proceed since usually a better 
error message is printed when it fails (for example missing drivers and 
whatnot).

/Tomas

On 27 June 2017 at 21:17, Klement Sekera -X (ksekera - PANTHEON TECHNOLOGIES at 
Cisco) mailto:ksek...@cisco.com>> wrote:
Unfortunately, I'm no rpm packaging expert, somebody else will have to
chime in...

Thanks,
Klement

Quoting Tomas Brännström (2017-06-27 17:18:40)
>I got the same error :-(
>make bootstrap and build works, but building rpm packages fails...
>/Tomas
>On 27 June 2017 at 16:25, Klement Sekera -X (ksekera - PANTHEON
>TECHNOLOGIES at Cisco) <[1]ksek...@cisco.com> 
> wrote:
>
>  Hi Tomas,
>
>  could you please take a look at the main Makefile:
>
>  57 ifeq ($(OS_ID)-$(OS_VERSION_ID),fedora-25)
>  58 RPM_DEPENDS += python2-virtualenv
>  59 RPM_DEPENDS_GROUPS = 'C Development Tools and Libraries'
>  60 else
>  61 RPM_DEPENDS += python-virtualenv
>  62 RPM_DEPENDS_GROUPS = 'Development Tools'
>  63 endif
>
>  see how the fedora-25 is the version which uses python2-virtualenv
>  while all others use python-virtualenv? Could you please change
>  fedora-25
>  to fedora-24 on line 57 and let us know if this smooths things out?
>  Maybe the fix is a simple one-liner...
>
>  Thanks,
>  Klement
>
>  Quoting Tomas Brännström (2017-06-27 16:08:31)
>  >Hello
>  >I'm having some troubles building VPP (latest master) from source
>  on
>  >Fedora 24.
>  >At first when doing `make bootstrap' it complained about not
>  finding
>  >python-virtualenv. I could get around this by changing the
>  >Makefile to look for "python2-virtualenv" which was the version
>  that got
>  >installed.
>  >But when doing `make pkg-rpm' I get the following errors:
>  >make[2]: Entering directory
>  '/home/fedora/git/vpp/extras/rpm/vpp-17.10'
>  >Please install missing RPMs: \npackage python-virtualenv is not
>  >installed\n
>  >by executing "make install-dep"\n
>  >Makefile:175: recipe for target
>  >
>  '/home/fedora/git/vpp/extras/rpm/vpp-17.10/build-root/.bootstrap.ok'
>  >failed
>  >make[2]: ***
>  >
>  [/home/fedora/git/vpp/extras/rpm/vpp-17.10/build-root/.bootstrap.ok]
>  Error
>  >1
>  >make[2]: Leaving directory
>  '/home/fedora/git/vpp/extras/rpm/vpp-17.10'
>  >error: Bad exit status from /var/tmp/rpm-tmp.qSFuzD (%build)
>  >RPM build errors:
>  >Macro %python2_minor_version defined but not used within scope
>  >Bad exit status from /var/tmp/rpm-tmp.qSFuzD (%build)
>  >Makefile:22: recipe for target 'all' failed
>  >make[1]: *** [all] Error 1
>  >make[1]: Leaving directory '/home/fedora/git/vpp/extras/rpm'
>  >Makefile:397: recipe for target 'pkg-rpm' failed
>  >make: *** [pkg-rpm] Error 2
>  >I tried changing to python2-virtualenv in that Makefile as well but
>  it
>  >seems to change back into python-virtualenv, and besides, there
>  seems to
>  > be other problems here as well.
>  >Is there a workaround for this or is  Fedora 24 simply not
>  supported?
>  >/Tomas
>
> References
>
>Visible links
>1. mailto:ksek...@cisco.com



Re: [vpp-dev] [csit-dev] [FD.io Helpdesk #41921] connection interruptiones between jenkins executor and VIRL servers

2017-06-28 Thread Maciek Konstantynowicz via RT
Ah well, I assumed the default gateway router is running an IP routing stack 
compliant with RFC 791 and the associated best practices for selecting the best 
routes for forwarding.
If this is not the case, then I would like the administrator of the 
aforementioned default gateway router to confirm the implementation and ensure 
that the configuration matches the test behaviour described in my email below.

-Maciek

> On 28 Jun 2017, at 14:03, Ed Kern (ejk)  wrote:
> 
> 
> Hey maciek,
> 
> We don't need to, and would prefer NOT to, remove the .1 addresses on those 
> virtual routers.
> 
> Just the addition of the static routes is what's needed right now.
> 
> thanks,
> 
> Ed
> 
> 
> 
>> On Jun 28, 2017, at 5:07 AM, Maciek Konstantynowicz (mkonstan) 
>>  wrote:
>> 
>> Anton and Team,
>> 
>> The continued interruptions of IP connectivity to/from VIRL server 
>> simulations on the management subnet have been impacting both CSIT and VPP 
>> project operations. We decided to temporarily remove the VPP VIRL based 
>> verify jobs, job/vpp-csit-verify-virl-master/, from both the per-vpp-patch 
>> auto-trigger and the voting rights - Ed W. was kind enough to prepare the 
>> required ci-mgmt patches, but they are not merged yet 
>> (https://gerrit.fd.io/r/#/c/7319/, https://gerrit.fd.io/r/#/c/7320/).
>> 
>> Before we proceed with the above step, we want to do one more set of network 
>> infra focused tests, per yesterday's exchange on #fdio-infra irc with 
>> Vanessa/valderrv, Ed Kern/snergster and Mohammed/mnaser. Here is a quick recap:
>> 
>> Connectivity is affected between the following mgmt subnets, added a few 
>> weeks back as part of [FD.io Helpdesk #40733]:
>>   10.30.52.0/24
>>   10.30.53.0/24
>>   10.30.54.0/24
>> 
>> The high packet drop rate (50..70%) occurs sporadically, but only when 
>> packets pass through the default gateway router that has the .1 address in 
>> each of the above subnets. It affects all connectivity to the jenkins 
>> slaves, and also connectivity between the tb4 virl hosts. The problem is 
>> never observed when packets are sent directly between the hosts; that works 
>> fine.
>> 
>> Test proposal:
>> 
>> Configure the router that acts as default gateway for these subnets with the 
>> following static routes:
>>   10.30.52.0/24 at 10.30.51.28 // tb-4virl1 mgmt addr
>>   10.30.53.0/24 at 10.30.51.29 // tb-4virl1 mgmt addr
>>   10.30.54.0/24 at 10.30.51.30 // tb-4virl1 mgmt addr
>>   Meaning all packets to the above subnets will be routed through the main 
>> management IP address on the respective tb4-virl host, per wiki [1].
>>   This will remove the default gateway router from the problem domain under 
>> investigation.
>> 
> 
> this is all well and good
> 
> 
>> Remove the following IP addresses
>> from the default gateway router:
>>   10.30.52.1/24
>>   10.30.53.1/24
>>   10.30.54.1/24
>> 
> 
> Not sure how this got in there… we want to keep these right where they are 
> unless there is some reason to remove them.
> 
> 
>> Continue to advertise the routes below into the WAN to ensure reachability 
>> from the Jenkins slaves and LF FD.io infra:
>>   10.30.52.0/24
>>   10.30.53.0/24
>>   10.30.54.0/24
>> 
> 
> 
> this is also correct...
> 
>> Could you please advise when these can be conducted?
>> 
>> -Maciek
>> 
>> [1] 
>> https://wiki.fd.io/view/CSIT/CSIT_LF_testbed#Management_VLAN_IP_Addresses_allocation
>> 
>>> On 21 Jun 2017, at 16:00, Jan Gelety -X (jgelety - PANTHEON TECHNOLOGIES at 
>>> Cisco)  wrote:
>>> 
>>> Hello Anton,
>>> 
>>> We did some checks and here are the results:
>>> 
>>> 1. ping simulated node from the host itself - ping is OK
>>> 
>>> 2. ping simulated node from other host (i.e. node simulated on virl2, 
>>> executing ping command on virl3) - discovered packet loss (see e-mail from 
>>> Peter below)
>>> - even for successful ping packets we see a wide range of round-trip times 
>>> - from approximately 0.6 ms to 45 ms...
>>> 
>>> We are still investigating VIRL settings but do you have some hints for us?
>>> 
>>> Thanks,
>>> Jan
>>> 
>>> -Original Message-
>>> From: Peter Mikus -X (pmikus - PANTHEON TECHNOLOGIES at Cisco) 
>>> Sent: Wednesday, June 21, 2017 15:20
>>> To: Jan Gelety -X (jgelety - PANTHEON TECHNOLOGIES at Cisco) 
>>> 
>>> Subject: RE: [vpp-dev] [FD.io Helpdesk #41921] connection interruptiones 
>>> between jenkins executor and VIRL servers
>>> 
>>> virl@t4-virl3:/home/testuser$ ping 10.30.51.127
>>> PING 10.30.51.127 (10.30.51.127) 56(84) bytes of data.
>>> 64 bytes from 10.30.51.127: icmp_seq=54 ttl=64 time=1.86 ms
>>> ...
>>> ^C
>>> --- 10.30.51.127 ping statistics ---
>>> 1202 packets transmitted, 193 received, 83% packet loss, time 1202345ms
>>> rtt min/avg/max/mdev = 0.369/0.736/3.271/0.509 ms
>>> virl@t4-virl3:/home/testuser$ ping 10.30.51.29
>>> 
>>> Peter Mikus
>>> Engineer - Software
>>> Cisco Systems Limited
>>> 
>>> -Original Message-
>>> From: vpp-dev-boun...@lists.fd.io [mailto:vpp-dev-boun...@lists.fd.io] On 
>>> Behalf Of Jan Gelety -X via RT
>>> Sent: Tuesday, June 20, 2017 5:20 PM
>>> Cc: csit-...@lists.fd.io; vpp-dev@lists.fd.io
>>> 

[vpp-dev] Packet Trace API?

2017-06-28 Thread Jon Loeliger
Hey vpp-dev,

Is there no API to enable packet tracing, and then get
the trace buffer(s)?  Is there any plan to do so?

Or is this the whole point of the ioam code and I just
don't see the connection yet?  They seem different to me.
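
For reference, the CLI path that exists today, which such an API would 
presumably mirror (the argument to "trace add" is the input graph node, e.g. 
dpdk-input or af-packet-input):

vpp# trace add dpdk-input 50
vpp# show trace
vpp# clear trace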

Thanks,
jdl


[vpp-dev] VPP interface parsing error?

2017-06-28 Thread Yichen Wang (yicwang)
Hi, VPP folks,

I am on the latest VPP 17.07-rc1, and I saw an interface parsing error.

vpp# show int
              Name               Idx       State          Counter          Count
BondEthernet0                     3         up       rx packets          1955066091
                                                     rx bytes          125124243370
                                                     tx packets          1508472227
                                                     tx bytes               9654800
                                                     drops                      317
                                                     rx-miss             1328405166
BondEthernet0.1527                4         up       rx packets          1422663050
                                                     rx bytes           91050435200
                                                     tx packets           532269842
                                                     tx bytes           34065269888
                                                     drops                        1
BondEthernet0.1514                6         up       rx packets           532402727
                                                     rx bytes           34073774528
                                                     tx packets           976202368
                                                     tx bytes           62476951552
                                                     drops                        2
TenGigabitEthernet81/0/1          2     bond-slave   rx-miss             1109342837
TenGigabitEthernete/0/1           1     bond-slave   rx-miss              219062329
VirtualEthernet0/0/0              5         up       rx packets           532269844
                                                     rx bytes           31936190640
                                                     tx packets          1422663049
                                                     tx bytes           85359782940
                                                     drops                430668918
VirtualEthernet0/0/1              7         up       rx packets           976202370
                                                     rx bytes           58572142200
                                                     tx packets           532402725
                                                     tx bytes           31944163500
                                                     drops                    27063
local0                            0        down

vpp# show interface rx-pl
Thread 1 (vpp_wk_0):
  node dpdk-input:
TenGigabitEthernete/0/1 queue 0 (polling)
TenGigabitEthernet81/0/1 queue 0 (polling)
BondEthernet0 queue 0 (polling)
Thread 2 (vpp_wk_1):
  node dpdk-input:
TenGigabitEthernete/0/1 queue 1 (polling)
TenGigabitEthernet81/0/1 queue 1 (polling)
BondEthernet0 queue 1 (polling)
Thread 3 (vpp_wk_2):
  node vhost-user-input:
BondEthernet0.1527 queue 0 (polling)
Thread 4 (vpp_wk_3):
  node vhost-user-input:
VirtualEthernet0/0/0 queue 0 (polling)

The CLI output above is clearly wrong: BondEthernet0.1527 cannot be in the 
vhost-user-input node. "show vhost" also confirms that Thread 3 should have 
VirtualEthernet0/0/0 and Thread 4 should have VirtualEthernet0/0/1. So I 
believe it is just the names shown here that are wrong, but this will be really 
confusing… ☺

Thanks very much!

Regards,
Yichen