[vpp-dev] wrong when i want to change vlib_buffer_t->data to my pointer

2017-02-28 Thread Li, Rujun
Hi ,all

Now I am working on VPP and I have met a problem. I have a pointer to 
packets received from my modified DPDK, and I want to change 
vlib_buffer_t->data to my pointer to avoid a memory copy, but the compiler 
reports that vlib_buffer_t->data cannot be used as an lvalue. My code 
looks like:

uint8_t *pktbuf;
/* do assignment to pktbuf */
vlib_buffer_t *b = vlib_get_buffer (vm, tm->rx_buffers[i_rx]);
b->data = pktbuf;

and the error is: error: lvalue required as left operand of assignment
Is there something wrong with my code, or is vlib_buffer_t protected from 
assignment?


Best Wishes,
Rujun, Li

___
vpp-dev mailing list
vpp-dev@lists.fd.io
https://lists.fd.io/mailman/listinfo/vpp-dev

Re: [vpp-dev] [csit-dev] reset_fib API issue in case of IPv6 FIB

2017-02-28 Thread Jan Gelety -X (jgelety - PANTHEON TECHNOLOGIES at Cisco)
Hello Neale,

Thank you very much for the clarification. I’ll update VRF tests based on this 
information.

Regards,
Jan

From: Neale Ranns (nranns)
Sent: Monday, February 27, 2017 16:49
To: Jan Gelety -X (jgelety - PANTHEON TECHNOLOGIES at Cisco) 
; vpp-dev@lists.fd.io
Cc: csit-...@lists.fd.io
Subject: Re: [csit-dev] [vpp-dev] reset_fib API issue in case of IPv6 FIB

Hi Jan,

The remaining prefixes are those added via ‘set int ip addr XXX’.
These will not be removed by the FIB flush; they need to be removed via 
‘set int del ip addr …’.
The same interface del should happen for v6 too. Otherwise VPP holds stale 
state (even though it may not be visible in the FIB).

Regards,
neale

From: <csit-dev-boun...@lists.fd.io> on behalf of "Jan Gelety -X (jgelety - 
PANTHEON TECHNOLOGIES at Cisco)" <jgel...@cisco.com>
Date: Monday, 27 February 2017 at 15:38
To: "vpp-dev@lists.fd.io" <vpp-dev@lists.fd.io>
Cc: "csit-...@lists.fd.io" <csit-...@lists.fd.io>
Subject: Re: [csit-dev] [vpp-dev] reset_fib API issue in case of IPv6 FIB

Hello vpp developers,

Thanks to Neale’s patch the reset of IPv6 FIB works well. Unfortunately I still 
have doubts regarding the reset of IPv4 FIB – some routes for unicast IPv4 
addresses still remain in the VRF after its reset (tested on an ubuntu16.04 VM 
with vpp_lite). Could one of the vpp developers let me know if the current 
behaviour is correct, please?

Thanks,
Jan

IPv4 VRF1 before reset:
ipv4-VRF:1, fib_index 1, flow hash: src dst sport dport proto
0.0.0.0/0
  unicast-ip4-chain
  [@0]: dpo-load-balance: [index:15 buckets:1 uRPF:13 to:[0:0]]
[0] [@0]: dpo-drop ip4
0.0.0.0/32
  unicast-ip4-chain
  [@0]: dpo-load-balance: [index:16 buckets:1 uRPF:14 to:[0:0]]
[0] [@0]: dpo-drop ip4
172.16.1.1/32
  unicast-ip4-chain
  [@0]: dpo-load-balance: [index:22 buckets:1 uRPF:20 to:[0:0]]
[0] [@2]: dpo-receive: 172.16.1.1 on pg0
172.16.1.0/24
  unicast-ip4-chain
  [@0]: dpo-load-balance: [index:21 buckets:1 uRPF:19 to:[0:0]]
[0] [@4]: ipv4-glean: pg0
172.16.1.2/32
  unicast-ip4-chain
  [@0]: dpo-load-balance: [index:23 buckets:1 uRPF:21 to:[0:0]]
[0] [@5]: ipv4 via 172.16.1.2 pg0: IP4: 02:fe:5e:14:60:d7 -> 
02:01:00:00:ff:02
172.16.1.3/32
  unicast-ip4-chain
  [@0]: dpo-load-balance: [index:24 buckets:1 uRPF:22 to:[0:0]]
[0] [@5]: ipv4 via 172.16.1.3 pg0: IP4: 02:fe:5e:14:60:d7 -> 
02:01:00:00:ff:03
172.16.1.4/32
  unicast-ip4-chain
  [@0]: dpo-load-balance: [index:25 buckets:1 uRPF:23 to:[0:0]]
[0] [@5]: ipv4 via 172.16.1.4 pg0: IP4: 02:fe:5e:14:60:d7 -> 
02:01:00:00:ff:04
172.16.1.5/32
  unicast-ip4-chain
  [@0]: dpo-load-balance: [index:26 buckets:1 uRPF:24 to:[0:0]]
[0] [@5]: ipv4 via 172.16.1.5 pg0: IP4: 02:fe:5e:14:60:d7 -> 
02:01:00:00:ff:05
172.16.1.6/32
  unicast-ip4-chain
  [@0]: dpo-load-balance: [index:27 buckets:1 uRPF:25 to:[0:0]]
[0] [@5]: ipv4 via 172.16.1.6 pg0: IP4: 02:fe:5e:14:60:d7 -> 
02:01:00:00:ff:06
172.16.2.0/24
  unicast-ip4-chain
  [@0]: dpo-load-balance: [index:28 buckets:1 uRPF:26 to:[0:0]]
[0] [@4]: ipv4-glean: pg1
172.16.2.1/32
  unicast-ip4-chain
  [@0]: dpo-load-balance: [index:29 buckets:1 uRPF:27 to:[0:0]]
[0] [@2]: dpo-receive: 172.16.2.1 on pg1
172.16.2.2/32
  unicast-ip4-chain
  [@0]: dpo-load-balance: [index:30 buckets:1 uRPF:28 to:[0:0]]
[0] [@5]: ipv4 via 172.16.2.2 pg1: IP4: 02:fe:f6:5c:24:b7 -> 
02:02:00:00:ff:02
172.16.2.3/32
  unicast-ip4-chain
  [@0]: dpo-load-balance: [index:31 buckets:1 uRPF:29 to:[0:0]]
[0] [@5]: ipv4 via 172.16.2.3 pg1: IP4: 02:fe:f6:5c:24:b7 -> 
02:02:00:00:ff:03
172.16.2.4/32
  unicast-ip4-chain
  [@0]: dpo-load-balance: [index:32 buckets:1 uRPF:30 to:[0:0]]
[0] [@5]: ipv4 via 172.16.2.4 pg1: IP4: 02:fe:f6:5c:24:b7 -> 
02:02:00:00:ff:04
172.16.2.5/32
  unicast-ip4-chain
  [@0]: dpo-load-balance: [index:33 buckets:1 uRPF:31 to:[0:0]]
[0] [@5]: ipv4 via 172.16.2.5 pg1: IP4: 02:fe:f6:5c:24:b7 -> 
02:02:00:00:ff:05
172.16.2.6/32
  unicast-ip4-chain
  [@0]: dpo-load-balance: [index:34 buckets:1 uRPF:32 to:[0:0]]
[0] [@5]: ipv4 via 172.16.2.6 pg1: IP4: 02:fe:f6:5c:24:b7 -> 
02:02:00:00:ff:06
172.16.3.0/24
  unicast-ip4-chain
  [@0]: dpo-load-balance: [index:35 buckets:1 uRPF:33 to:[0:0]]
[0] [@4]: ipv4-glean: pg2
172.16.3.1/32
  unicast-ip4-chain
  [@0]: dpo-load-balance: [index:36 buckets:1 uRPF:34 to:[0:0]]
[0] [@2]: dpo-receive: 172.16.3.1 on pg2
172.16.3.2/32
  unicast-ip4-chain
  [@0]: dpo-load-balance: [index:37 buckets:1 uRPF:35 to:[0:0]]
[0] [@5]: ipv4 via 172.16.3.2 pg2: IP4: 02:fe:98:fe:c8:77 -> 
02:03:00:00:ff:02
172.16.3.3/32
  unicast-ip4-chain
  [@0]: dpo-load-balance: [index:38 buckets:1 uRPF:36 to:[0:0]]
[0] [@5]: ipv4 via 172.16.3.3 pg2: IP4: 02:fe:98:fe:c8:77 -> 
02:03:00:00:ff:03
172.16.3.4/32
  unicast-ip4-chain
  [@0]: dpo-load-balance: [index:39 buckets:1 uRPF:37 to:[0:0]]
[0] [@5]: ipv4 via 172.16.3.4 pg2: IP4: 02:fe:98:fe:c8:77 ->

[vpp-dev] VPP 17.04: Week 9 status

2017-02-28 Thread otroan
Hi,

As the designated release manager for 17.04, here is a summary of what happened 
over the last two weeks.

Please remember the API freeze date on March 8th.

JIRA:
  Jong has created a new dashboard:
 https://jira.fd.io/secure/Dashboard.jspa?selectPageId=10700
  The idea is to map the JIRA components to the components in the MAINTAINERS 
file.
  (I believe Damjan will post it for review on gerrit very soon.)

Coverity:
  https://scan9.coverity.com/reports.htm#v30797/p12999
 - 18 outstanding issues
   LISP, API, L2 VTR, SR, IPsec
 I'll run git blame on these and nag people accordingly (if you don't mind too 
much).


67 commits.

f61bc52  BFD: disable debug prints (Klement Sekera)
ff54270  vlib: add VLIB_BUFFER_EXT_HDR_VALID flag (Neale Ranns)
39f9d8b  [Proxy] ARP tests (Damjan Marion)
3e7b569  Add GPE CLI/API for setting encap mode (Florin Coras)
646ba9b  fix:vxlan mcast adj - added as ucast dpo adj (Neale Ranns)
a83bc30  Load plugins in alphabetical order (Florin Coras)
239790f  BFD: echo function (Damjan Marion)
263440e  Add NSH to GPE decap path (Florin Coras)
5a72c1c  MFIB: changes to improve route add/delete performance (Damjan Marion)
b5b2ef5  Enable tests with VRF reset (Neale Ranns)
08a70f1  FIB: 1) fix pool realloc during prefix export. 2) don't walk off the 
end of the path-extension vector (Neale Ranns)
26cd8c1  VPP-650: handle buffer failure in vlib_buffer_copy(...) (Dave Barach)
02767e9  Fixed QAT device binding and device unbinding when vpp package is 
removed (Dave Barach)
bcc6aa4  MFIB memory leak. free the per-source interface hash (Florin Coras)
04197ee  VPP-279: Document changes for vnet/vnet/devices (Damjan Marion)
4a3f69c  Fix vpp built-in version of api_unformat_sw_if_index(...) (Florin 
Coras)
22dc9df  Remove prints from LISP test (Florin Coras)
def19da  Clean up "binary-api" help string, arg parse bugs (Dave Barach)
974cdc6  Fix LISP and ONE crc marcos (Florin Coras)
04f8d3f  Support multiple plugin build in the sample-plugin (Damjan Marion)
a9a20e7  VPP-635: CLI Memory leak with invalid parameter (Dave Barach)
2291a36  jvpp: remove unnecessary msg_id_base caching (Damjan Marion)
2dd6852  Consolidate DHCP v4 and V6 implementation. No functional change 
intended (Neale Ranns)
c8c5335  Add ref to test framework docs in doxygen output. (Dave Barach)
2d41f16  fix trace frame-queue unformat of index (Dave Barach)
954898f  Fix last run time update for timer wheel (Dave Barach)
d456020  Repair SNAT's IPFIX and IF-add-del test functions. (Dave Barach)
694396d  Add Overlay Network Engine API (Florin Coras)
20a175a  dhcp: multiple additions (Damjan Marion)
2e3677b  cryptodev:  Automatically download and build ISA-L Crypto library 
(Damjan Marion)
770e89e  Add basic 4o4 LISP unit test (Damjan Marion)
057bb8c  test: ip6 vrf instances multi-context test (CSIT-497) (Damjan Marion)
82786c4  Rename LISP GPE API to GPE (Florin Coras)
65e8457  VPP-540 : pbb tag rewrite details (John Lo)
9745ace  Update CSIT tests 170213 -> 170220 (Dave Wallace)
a8d9f30  FIB reset leaves residual routes. Wrong API used to remove the routes 
meant the lock count on the entry did not drop to zero (Florin Coras)
38206ee  LISP: don't show PITR generated mapping in dump call (Filip Tehlar)
6ca42d3  dpdk: updated build to automatically download Intel(R) Multi-Buffer 
Crypto for IPsec Library (Damjan Marion)
52456fc  CLI extension to add multiple (S,G)s at once and time it (Florin Coras)
5a8123b  Python test IP and MPLS objects conform to infra. Add IP[46] MFIB 
dump. (Florin Coras)
90c5572  make test: save + dump VPP api trace log; VPP-640 (Damjan Marion)
6cfc39c  Remove duplicate ip6 get interface address code (Damjan Marion)
c48829b  BFD: put session admin-up/admin-down (Damjan Marion)
cb33dc2  Implemented IKEv2 initiator features: - IKE_SA_INIT and IKE_AUTH 
initial exchanges - Delete IKA SA - Rekey and delete Child SA - Child SAs 
lifetime policy (Damjan Marion)
665e482  Fix handling of ping to SNAT out interface (Damjan Marion)
52a047a  ipsec: changed ipsec-input-ip6 node to be a sibling of 
ipsec-input-ip4, fixes a problem that occurs with cryptodev ipv6 input. (Damjan 
Marion)
681abe4  l2 input: avoid per-packet trace checks in the fast path (Damjan 
Marion)
6e9cf3b  Fix comment for num-mbufs default in startup.conf (Dave Barach)
5d81f45  dpdk: quad loop and prefetch in fill_free_list (Dave Barach)
4983e47  dpdk: bump to DPDK 17.02 (Dave Barach)
7be8a88  ioam: declare export_node instead of defining it in header file (Dave 
Barach)
d65c18d  api: remove debug print in api_main_init (Dave Barach)
83ed1f4  tw_timer_expire_timers() - add a maximum to the number of expiration 
per call (Damjan Marion)
b69111e  Add NSH load-balance and drop DPO (Damjan Marion)
6085e16  Fix NSH-LISP interface addition (Damjan Marion)
7bc7310  Fix crash on deleting previously activated IPv6 interface - VPP-636 
(John Lo)
a4cb12b  Fix sample plugin breakage. (Damjan Marion)
b33f413  Add handling of ICMP error packets in SNA

Re: [vpp-dev] vpp-merge-master-{os} Jenkins Jobs are failing

2017-02-28 Thread Marek Gradzki -X (mgradzki - PANTHEON TECHNOLOGIES at Cisco)
Dear all,

after a quick look at the logs,
it looks like the issue is caused by the vpp-dpdk-dev-17.02-vpp1_amd64.pom 
deploy failure.

As a consequence, hc2vpp integration jobs are not triggered, which causes 
honeycomb packages to be out of sync much more often.

The issue was also reported by our team as #37115 some time ago...

Regards,
Marek

From: vpp-dev-boun...@lists.fd.io [mailto:vpp-dev-boun...@lists.fd.io] On 
Behalf Of Dave Wallace
Sent: 16 lutego 2017 04:17
To: helpd...@fd.io; Ed Warnicke 
Cc: csit-...@lists.fd.io; vpp-dev@lists.fd.io
Subject: [vpp-dev] vpp-merge-master-{os} Jenkins Jobs are failing

Dear helpd...@fd.io,

All of the vpp-merge-master-{os} jobs have been returning 'failure' for the 
past 8 days:


This is causing the downstream vpp-docs-merge-master and 
vpp-make-test-docs-merge-master to NOT be scheduled.

Why are the docs merge jobs triggered off of the successful completion of 
'vpp-merge-master-ubuntu1404' instead of being triggered by 
'gerrit-trigger-patch-merged' and run in parallel with the other merge jobs?

Also, why are the docs merge jobs being triggered off of the vpp ubuntu1404 
merge job when the os-parameter is being set to 'ubuntu1604'?

Thanks,
-daw-

[vpp-dev] [FD.io Helpdesk #36808] [linuxfoundation.org #36808] RE: vpp-merge-master-{os} Jenkins Jobs are failing

2017-02-28 Thread mgrad...@cisco.com via RT
Dear all,

after a quick look at the logs,
it looks like the issue is caused by the vpp-dpdk-dev-17.02-vpp1_amd64.pom 
deploy failure.

As a consequence, hc2vpp integration jobs are not triggered, which causes 
honeycomb packages to be out of sync much more often.

The issue was also reported by our team as #37115 some time ago...

Regards,
Marek

From: vpp-dev-boun...@lists.fd.io [mailto:vpp-dev-boun...@lists.fd.io] On 
Behalf Of Dave Wallace
Sent: 16 lutego 2017 04:17
To: helpd...@fd.io; Ed Warnicke 
Cc: csit-...@lists.fd.io; vpp-dev@lists.fd.io
Subject: [vpp-dev] vpp-merge-master-{os} Jenkins Jobs are failing

Dear helpd...@fd.io,

All of the vpp-merge-master-{os} jobs have been returning 'failure' for the 
past 8 days:


This is causing the downstream vpp-docs-merge-master and 
vpp-make-test-docs-merge-master to NOT be scheduled.

Why are the docs merge jobs triggered off of the successful completion of 
'vpp-merge-master-ubuntu1404' instead of being triggered by 
'gerrit-trigger-patch-merged' and run in parallel with the other merge jobs?

Also, why are the docs merge jobs being triggered off of the vpp ubuntu1404 
merge job when the os-parameter is being set to 'ubuntu1604'?

Thanks,
-daw-


Re: [vpp-dev] wrong when i want to change vlib_buffer_t->data to my pointer

2017-02-28 Thread Luke, Chris
Look at the definition of 'data' in the .h or in the docs: 
https://docs.fd.io/vpp/17.04/dd/d10/structvlib__buffer__t.html

It's not a pointer, ergo you cannot do this. That is not how the buffer 
mechanism works.

Chris.

From: vpp-dev-boun...@lists.fd.io [mailto:vpp-dev-boun...@lists.fd.io] On 
Behalf Of Li, Rujun
Sent: Tuesday, February 28, 2017 3:44
To: vpp-dev@lists.fd.io
Subject: [vpp-dev] wrong when i want to change vlib_buffer_t->data to my pointer

Hi ,all

Now I am working on VPP and I have met a problem. I have a pointer to 
packets received from my modified DPDK, and I want to change 
vlib_buffer_t->data to my pointer to avoid a memory copy, but the compiler 
reports that vlib_buffer_t->data cannot be used as an lvalue. My code 
looks like:

uint8_t *pktbuf;
/* do assignment to pktbuf */
vlib_buffer_t *b = vlib_get_buffer (vm, tm->rx_buffers[i_rx]);
b->data = pktbuf;

and the error is: error: lvalue required as left operand of assignment
Is there something wrong with my code, or is vlib_buffer_t protected from 
assignment?


Best Wishes,
Rujun, Li


Re: [vpp-dev] system func in vpp

2017-02-28 Thread Luke, Chris
Using any syscall from inside VPP needs thinking through. Using system() or 
similar is simply ill-advised; what you’re doing is best done outside of VPP, 
where the penalty of executing fork() is much diminished.

If you really must create these interfaces from inside VPP then I suggest you 
look up the netlink ABI in Linux, which is what the ‘ip’ program uses, and 
program it directly.

Chris.

From: vpp-dev-boun...@lists.fd.io [mailto:vpp-dev-boun...@lists.fd.io] On 
Behalf Of yug...@telincn.com
Sent: Monday, February 27, 2017 22:56
To: Ed Warnicke 
Cc: vpp-dev 
Subject: Re: [vpp-dev] system func in vpp

I would like to add a veth interface in vpp for every phy interface: " system("ip 
link add vpp type veth peer name kernel-vpp") ".
It's convenient to do this from vpp.

Regards,
Ewan.

yug...@telincn.com

From: Ed Warnicke
Date: 2017-02-28 11:44
To: yug...@telincn.com
CC: vpp-dev
Subject: Re: Re: [vpp-dev] system func in vpp
Why would you do that from within vpp?

Ed

On Mon, Feb 27, 2017 at 8:36 PM, yug...@telincn.com 
<yug...@telincn.com> wrote:
"int system(const char *command)"

I would like to use this function to start another daemon, such as " 
system("zebra -d") ".

Regards,
Ewan

yug...@telincn.com

From: Ed Warnicke
Date: 2017-02-28 11:15
To: yug...@telincn.com
CC: vpp-dev
Subject: Re: [vpp-dev] system func in vpp
I'm not quite sure what you mean by the 'func system'...

Ed

On Mon, Feb 27, 2017 at 7:26 PM, yug...@telincn.com 
<yug...@telincn.com> wrote:
Hi, all

Can VPP not use the system() function? Is there a reason for this, or what can 
I do if I really need it?


yug...@telincn.com


Re: [vpp-dev] system func in vpp

2017-02-28 Thread Dave Barach (dbarach)
See also src/vnet/unix/tuntap.c, which seems to already do pretty much what 
you’re describing. It’s disabled by default. Use the command-line argument 
“tuntap { enable }” to kick the tires.
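For example, a minimal startup.conf fragment might look like the following (a sketch; the `unix` stanza shown is only illustrative, and the exact section layout depends on your existing config):

```
unix {
  interactive
}

tuntap {
  enable
}
```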

In addition to the concerns which Chris mentioned, adding ‘system(“foo”)’ calls 
to vpp is not a security best practice.

Thanks… Dave

From: vpp-dev-boun...@lists.fd.io [mailto:vpp-dev-boun...@lists.fd.io] On 
Behalf Of Luke, Chris
Sent: Tuesday, February 28, 2017 7:18 AM
To: yug...@telincn.com; Ed Warnicke 
Cc: vpp-dev 
Subject: Re: [vpp-dev] system func in vpp

Using any syscall from inside VPP needs thinking through. Using system() or 
similar is simply ill-advised; what you’re doing is best done outside of VPP, 
where the penalty of executing fork() is much diminished.

If you really must create these interfaces from inside VPP then I suggest you 
look up the netlink ABI in Linux, which is what the ‘ip’ program uses, and 
program it directly.

Chris.

From: vpp-dev-boun...@lists.fd.io [mailto:vpp-dev-boun...@lists.fd.io] On 
Behalf Of yug...@telincn.com
Sent: Monday, February 27, 2017 22:56
To: Ed Warnicke <hagb...@gmail.com>
Cc: vpp-dev <vpp-dev@lists.fd.io>
Subject: Re: [vpp-dev] system func in vpp

I would like to add a veth interface in vpp for every phy interface: " system("ip 
link add vpp type veth peer name kernel-vpp") ".
It's convenient to do this from vpp.

Regards,
Ewan.

yug...@telincn.com

From: Ed Warnicke
Date: 2017-02-28 11:44
To: yug...@telincn.com
CC: vpp-dev
Subject: Re: Re: [vpp-dev] system func in vpp
Why would you do that from within vpp?

Ed

On Mon, Feb 27, 2017 at 8:36 PM, yug...@telincn.com 
<yug...@telincn.com> wrote:
"int system(const char *command)"

I would like to use this function to start another daemon, such as " 
system("zebra -d") ".

Regards,
Ewan

yug...@telincn.com

From: Ed Warnicke
Date: 2017-02-28 11:15
To: yug...@telincn.com
CC: vpp-dev
Subject: Re: [vpp-dev] system func in vpp
I'm not quite sure what you mean by the 'func system'...

Ed

On Mon, Feb 27, 2017 at 7:26 PM, yug...@telincn.com 
<yug...@telincn.com> wrote:
Hi, all

Can VPP not use the system() function? Is there a reason for this, or what can 
I do if I really need it?


yug...@telincn.com


Re: [vpp-dev] wrong when i want to change vlib_buffer_t->data to my pointer

2017-02-28 Thread Dave Barach (dbarach)
Please explain what you're trying to do in detail. There should be no need to 
ever do something like that.

vlib_buffer_t's are embedded in private headroom space, adjacent to the packet 
DMA target [which starts at (struct mbuf *) mb->data_off].

Here's how we create the mempools:

  rmp = rte_pktmbuf_pool_create ((char *) pool_name,      /* pool name */
                                 num_mbufs,               /* number of mbufs */
                                 512,                     /* cache size */
                                 VLIB_BUFFER_HDR_SIZE,    /* priv size */
                                 VLIB_BUFFER_PRE_DATA_SIZE
                                 + VLIB_BUFFER_DATA_SIZE, /* dataroom size */
                                 socket_id);              /* cpu socket */

As Chris wrote, b0->data is an array, not a pointer:

  u8 data[0]; /**< Packet data. Hardware DMA here */

Thanks... Dave

From: vpp-dev-boun...@lists.fd.io [mailto:vpp-dev-boun...@lists.fd.io] On 
Behalf Of Luke, Chris
Sent: Tuesday, February 28, 2017 7:15 AM
To: Li, Rujun ; vpp-dev@lists.fd.io
Subject: Re: [vpp-dev] wrong when i want to change vlib_buffer_t->data to my 
pointer

Look at the definition of 'data' in the .h or in the docs: 
https://docs.fd.io/vpp/17.04/dd/d10/structvlib__buffer__t.html

It's not a pointer, ergo you cannot do this. That is not how the buffer 
mechanism works.

Chris.

From: vpp-dev-boun...@lists.fd.io [mailto:vpp-dev-boun...@lists.fd.io] On 
Behalf Of Li, Rujun
Sent: Tuesday, February 28, 2017 3:44
To: vpp-dev@lists.fd.io
Subject: [vpp-dev] wrong when i want to change vlib_buffer_t->data to my pointer

Hi ,all

Now I am working on VPP and I have met a problem. I have a pointer to 
packets received from my modified DPDK, and I want to change 
vlib_buffer_t->data to my pointer to avoid a memory copy, but the compiler 
reports that vlib_buffer_t->data cannot be used as an lvalue. My code 
looks like:

uint8_t *pktbuf;
/* do assignment to pktbuf */
vlib_buffer_t *b = vlib_get_buffer (vm, tm->rx_buffers[i_rx]);
b->data = pktbuf;

and the error is: error: lvalue required as left operand of assignment
Is there something wrong with my code, or is vlib_buffer_t protected from 
assignment?


Best Wishes,
Rujun, Li


Re: [vpp-dev] Interesting perf test results from Red Hat's test team

2017-02-28 Thread Karl Rister
This thread topology is one of the highest priority requests that our
product team is placing on our test efforts.  We would like to be able
to rely on the numbers that CSIT is generating but this concept in
particular is a sticking point.  We want to know the best per-core
throughput and our testing so far has shown that HT is a significant
piece of achieving that.

I agree that the results are surprising; my working theory, which I have
not yet investigated, comprises two factors:

1) that there is a lot of hardware/memory induced instruction latency
that allows the siblings to share the core very efficiently

2) processing the hardware and virtual queues hits different and/or
replicated functional units allowing more efficient sharing

Karl


On 02/27/2017 11:58 AM, Alec Hothan (ahothan) wrote:
> It is actually pretty surprising and amazing that you see an almost
> linear scale when using hyper threads the way you do (pairing the 2
> sibling threads on 1 phys interface queue and 1 vhost queue). It is more
> common to see a factor of 1.2 to 1.4 when combining 2 sibling threads
> (as compared to using 1 core without hyper-threading). I think even
> Intel could not hope to see such a good efficiency ;-)
> 
> It will be difficult to replicate this in a real openstack node given
> that the number of vhost interfaces will likely be larger than the
> number of cores assigned to the vswitch.
> 
> I’m not sure how the CSIT test configures the cores and I suspect it is
> as you describe.
> 
> Thanks
>    Alec
> 
> From: Karl Rister
> Organization: Red Hat
> Reply-To: "kris...@redhat.com"
> Date: Monday, February 20, 2017 at 11:29 AM
> To: Thomas F Herbert, "Alec Hothan (ahothan)", "Maciek Konstantynowicz 
> (mkonstan)"
> Cc: Andrew Theurer, Douglas Shakshober, "csit-...@lists.fd.io", vpp-dev, 
> "Michael Pedersen -X (michaped - Intel at Cisco)"
> Subject: Re: [vpp-dev] Interesting perf test results from Red Hat's test team
> 
> On 02/20/2017 08:43 AM, Thomas F Herbert wrote:
> On 02/17/2017 06:18 PM, Alec Hothan (ahothan) wrote:
> 
> Hi Karl
> 
> Can you also tell which version of DPDK you were using for OVS and for
> VPP (for VPP is it the one bundled with 17.01?).
> 
> DPDK 1611 and VPP 1701.
> 
> “The pps is the bi-directional sum of the packets received back at the
> traffic generator.”
> 
> Just to make sure….
> 
> If your traffic gen sends 1 Mpps to each of the 2 interfaces and you
> get no drop (meaning you receive 1 Mpps from each interface), what do
> you report? 2 Mpps or 4 Mpps?
> 
> 2 Mpps
> 
> You seem to say 2 Mpps (sum of all RX).
> 
> The CSIT perf numbers report the sum(TX) = in the above example CSIT
> reports 2 Mpps.
> 
> The CSIT numbers for 1 vhost/1 VM (practically similar to yours) are
> at about half of what you report:
> https://docs.fd.io/csit/rls1701/report/vpp_performance_results_hw/performance_results_hw.html#ge2p1x520-dot1q-l2xcbase-eth-2vhost-1vm-ndrpdrdisc
> 
> Scroll down the table to tc13/tc14: 4t4c (4 threads) L2XC, 64B NDR,
> 5.95 Mpps (aggregated TX of the 2 interfaces), PDR 7.47 Mpps,
> while the results in your slides put it at around 11 Mpps.
> 
> So either your testbed really switches 2 times more packets than the
> CSIT one, or you’re actually reporting double the amount compared to
> how CSIT reports it…
> 
> tc13 and tc14 both say "4 threads, 4 phy cores, 2 receive queues per NIC
> port".
> 
> In our configuration when doing 2 queue we are actually using 8
> CPU threads on 4 cores -- a dpdk thread on one core thread and a
> vhost-user thread on the other core thread.  Our comparison of 1 thread
> per core versus 2 threads per core

Re: [vpp-dev] FW: Re: Interesting perf test results from Red Hat's test team

2017-02-28 Thread Karl Rister
We actually collected latency data on all of the test runs that we
performed.  There are a couple of sticking points however.

1) Our current packet generator (MoonGen+lua-trafficgen [1]) relies on
the Intel 82599ES PTP hardware in order to measure packet latency.  In
our testing that works very well up to a certain point (somewhere in the
neighborhood of 9.3 Mpps per port) and then it starts to fall apart with
fewer and fewer latency packets being successfully sent out and even
fewer being properly processed on the receive side.  Since our testing
involved multi-queue and VPP scaled so well in many of the different
flow configurations we feel our latency data is incomplete -- for lower
performing configurations we have nice, complete sets of data but for
higher performing configurations (the majority) the data sets are far
from complete.  I believe the trends in the data collected are fairly
representative, but I would prefer to do a real comparison where this
problem does not exist.  This is one reason that we are very interested
in using TRex and/or Intel XL710 adapters.

2) We feel that latency comparisons are best made at like throughput
levels.  If we were truly going to do a latency comparison between two
stacks (in this case OVS and VPP) we would most likely pick a fixed
packet rate.

Karl


[1] https://github.com/atheurer/lua-trafficgen


On 02/27/2017 05:34 PM, Zhou, Danny wrote:
> Tom, more comments inline.
> 
> From: Thomas F Herbert [mailto:therb...@redhat.com]
> Sent: Tuesday, February 28, 2017 12:41 AM
> To: Zhou, Danny; vpp-dev@lists.fd.io
> Cc: Karl Rister; csit-...@lists.fd.io
> Subject: Re: [vpp-dev] FW: Re: Interesting perf test results from Red Hat's 
> test team
> 
> Hi Danny,
> 
> My response is inline below.
> 
> On 02/27/2017 03:01 AM, Zhou, Danny wrote:
> 
> Hi Tom, Karl,
> 
> In the last “Todo List” page, there is an RFC2544 0% packet loss test
> item; do you want to measure the min/max latency as well?
> 
> Is your question particularly about vhost-user testing?
> Also, I am interested in why you brought up latency. Are you doing your
> own latency testing in conjunction with your nsh-sfc work, and do you
> think we need to do latency in CSIT as well when testing nsh-sfc perf on
> vhosts?
> 
> [Zhou, Danny] No, I am not particularly interested in latency for
> vhost_user. Instead, as you mentioned, I am interested in VPP as well as
> NSH_SFC latency :-). During our internal P2P NSH_SFC RFC2544 test
> without vhost, we observed maximum latency about 100x more than
> minimum latency, which looks like a critical issue for some use cases
> that require a real-time environment. I would like to know if VPP or
> DPDK has similar characteristics, or whether it is just because I did
> not use a real-time kernel as you did. Anyway, we look forward to your
> results; in the meantime we will share ours once we figure out the root
> cause.
> 
> I will let Karl answer as to the testing he has done and is doing.
> However, in fd.io CSIT, the latency results for vhost-user are here:
> https://docs.fd.io/csit/rls1701/report/vpp_performance_tests/packet_latency_graphs/vm_vhost.html
> 
> --TFH
> 
> -Danny
> 
> From: vpp-dev-boun...@lists.fd.io [mailto:vpp-dev-boun...@lists.fd.io] On 
> Behalf Of Thomas F Herbert
> Sent: Friday, February 17, 2017 1:55 AM
> To: vpp-dev@lists.fd.io
> Cc: Karl Rister
> Subject: Re: [vpp-dev] FW: Re: Interesting perf test results from Red Hat's 
> test team
> 
> Jan,
> I have answered below but am forwarding this to Karl, who performed
> the testing, to get the exact answers.
> 
> --TFH
> 
> On 02/16/2017 08:59 AM, Jan Scheurich wrote:
> 
> From: Jan Scheurich
> Sent: Thursday, 16 February, 2017 14:41
> To: therb...@redhat.com
> Cc: vpp-...@fd.io
> Subject: Re: [vpp-dev] Interesting perf test results from Red Hat's test team
> 
> Hi Thomas,
> 
> Thanks for these interesting measurements. I am not quite sure I
> fully understand the different configurations and traffic cases
> you have been testing:
> 
> · Do you vary the number of vhost-user queues to the guest and/or the
> number of RX queues for the phy port?
> 
> These are vhost user queues, because OVS and/or VPP is running in
> the host.
> 
> · Did you add cores at the same time you added queues?
> 
> Yes
> 
> · When you say flows, do you mean L3/L4 packet flows
> (5-tuples) or forwarding rules

[vpp-dev] MAINTAINERS file

2017-02-28 Thread Damjan Marion (damarion)

I submitted MAINTAINERS file proposal to gerrit:

https://gerrit.fd.io/r/#/c/5547/

It is not complete, more additions are expected when people self-nominate.

All people on the list were selected based on their contributions, and they 
have accepted to take this role.

Let me know of any issues before we merge…

Thanks,

Damjan

Re: [vpp-dev] fd.io CSIT vhost-user test scenario implementation priorities

2017-02-28 Thread Liew, Irene
Here are my thoughts and comments on the topologies/tests and workloads for CSIT 
vhost-user test scenarios. Please refer to my comments inline below.

-Original Message-
From: Thomas F Herbert [mailto:therb...@redhat.com] 
Sent: Monday, February 27, 2017 10:04 AM
To: csit-...@lists.fd.io; Maciek Konstantynowicz (mkonstan) 
Cc: vpp-dev ; Liew, Irene ; Pierre 
Pfister (ppfister) ; Alec Hothan (ahothan) 
; Karl Rister ; Douglas Shakshober 
; Andrew Theurer 
Subject: fd.io CSIT vhost-user test scenario implementation priorities

Please weigh in:

We are starting to plan fd.io CSIT vhost-user test scenario priorities for 
implementation in the 17.04 and 17.07 CSIT releases.

Vhost-user performance is critical for VNF acceptance in potential use cases 
for VPP/fd.io adoption.

We had a previous email thread here: 
https://lists.fd.io/pipermail/csit-dev/2016-November/001192.html along with 
TWS (https://wiki.fd.io/view/TWS) meetings on 12/02/16 and 12/07/16, 
summarized in this wiki: 
https://wiki.fd.io/view/CSIT/vhostuser_test_scenarios

Topologies and tests

Current in 17.01:

10ge2p1x520-dot1q-l2bdbasemaclrn-eth-2vhost-1vm
10ge2p1x520-dot1q-l2xcbase-eth-2vhost-1vm
10ge2p1x520-ethip4-ip4base-eth-2vhost-1vm
10ge2p1x520-ethip4vxlan-l2bdbasemaclrn-eth-2vhost-1vm
10ge2p1x520-eth-l2bdbasemaclrn-eth-2vhost-1vm
10ge2p1x520-eth-l2xcbase-eth-2vhost-1vm
10ge2p1x710-eth-l2bdbasemaclrn-eth-2vhost-1vm
40ge2p1xl710-eth-l2bdbasemaclrn-eth-2vhost-1vm

single and multi-queue

testing of pmd baseline
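As an aside, the test IDs above follow a compact naming convention. The sketch 
below illustrates one way to split such an ID into fields; my reading of the 
convention (NIC speed / port count / NIC count / model, encapsulation, 
forwarding mode, vhost and VM counts) is an assumption, not an official 
specification.

```python
import re

# Hypothetical parser for CSIT test IDs such as
# "10ge2p1x520-dot1q-l2bdbasemaclrn-eth-2vhost-1vm".
# Field meanings are my reading of the naming convention, not a spec.
def parse_test_id(test_id):
    parts = test_id.split("-")
    m = re.match(r"(\d+ge)(\d+)p(\d+)(\w+)$", parts[0])
    result = {
        "speed": m.group(1),        # link speed, e.g. "10ge"
        "ports": int(m.group(2)),   # physical ports used
        "nics": int(m.group(3)),    # number of NICs
        "nic_model": m.group(4),    # e.g. "x520", "xl710"
        "encap": parts[1],          # e.g. "dot1q", "ethip4vxlan", "eth"
        "forwarding": parts[2],     # e.g. "l2bdbasemaclrn", "l2xcbase"
    }
    # trailing "<N>vhost" / "<M>vm" counters, if present
    for part in parts[3:]:
        m2 = re.match(r"(\d+)(vhost|vm)$", part)
        if m2:
            result[m2.group(2)] = int(m2.group(1))
    return result

print(parse_test_id("10ge2p1x520-dot1q-l2bdbasemaclrn-eth-2vhost-1vm"))
```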

Proposed in links above
 1p1nic-dot1q-l2bdbase-eth-2vhost-1vm
 1p1nic-ethip4vxlan-l2bdbase-eth-2vhost-1vm
 1p1nic-dot1q-l2bdbase-eth-4vhost-2vm-chain
 1p1nic-ethip4vxlan-l2bdbase-eth-4vhost-2vm-chain
 1p1nic-dot1q-l2bdbase-eth-2vhost-1vm-chain-2nodes
 1p1nic-ethip4vxlan-l2bdbase-eth-2vhost-1vm-2nodes

[Irene] For baseline testing on vhost-user, I would recommend running core 
scaling from 1 core up to the maximum number of cores for 1 VM Phy-VM-Phy and 
2 VMs PVVP. I know the current VPP v17.01 does not support manually assigning 
the vhost-user port RX queues to specific cores to ensure load balancing across 
the cores. From our experience in the lab, when I ran 3 cores of worker threads 
in the 4vhost-2vm PVVP configuration, I observed that the ports were unevenly 
distributed across the 3 worker threads and VPP vNet suffered in performance 
scalability. If the manual RXQ assignment feature for vhost-user ports is made 
available in the 17.04 or 17.07 release, I strongly propose including core 
scaling of worker threads in order to evaluate the vhost-user RXQ 
core-assignment feature. For example, we can pick 1 test case of 2vhost-1vm and 
run with configurations of 1, 2 and 4 cores of worker threads. We then pick 1 
test case of 4vhost-2vm-chain and run with configurations of 1, 2, 3, 4 and 6 
cores of worker threads. 
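[Editorial note: the manual RX-queue placement discussed above was not in VPP 
17.01; in later VPP releases a CLI along these lines exists. The interface 
name and queue/worker numbers below are purely illustrative, and the exact 
syntax should be checked against the release in use:]

```
vpp# set interface rx-placement VirtualEthernet0/0/0 queue 0 worker 0
vpp# set interface rx-placement VirtualEthernet0/0/0 queue 1 worker 1
vpp# show interface rx-placement
```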

Proposed topologies for OpenStack from links above:

2p1nic-dot1q-l2bdscale-flows-eth-vhost-vm-chain
2p1nic-ethip4vxlan-l2bdscale-flows-eth-vhost-vm-chain

New scenarios Proposed:

Primary Overlay vxlan and VTEP

 2p1nic-ethip4vxlan-l2bdbase-eth-2vhost-1vm
 2p1nic-ethip4vxlan-l2bdbase-eth-20vhost-10vm

[Irene] There is a trend in the industry toward using IPv6 over VXLAN. Shall we 
include an IPv6 VXLAN scenario too?


MPLS over Ethernet

Scaling Multiple VMs

 2p1nic-dot1q-l2bdbase-eth-20vhost-10vm


Workloads:

 VNF based on Linux relying on the kernel virtio driver - kernel Linux bridge, 
kernel L3/IPv4 routing

 IPv4/v6 VPP vRouter

[Irene] For the VNF workloads, we need to brainstorm and include real workload 
applications in the tests to provide a better understanding of performance for 
real NFV/SDN deployments. Yes, the workloads listed above would be a good 
baseline. I suggest we start to brainstorm and discuss other representative 
workloads for Telco/datacenter deployments which we can later incorporate into 
CSIT. For example, some workloads that would be good candidates are IPsec, 
firewall, webserver SSL, etc.

Did I leave out anything?
...
--
*Thomas F Herbert*
SDN Group
Office of Technology
*Red Hat*
___
vpp-dev mailing list
vpp-dev@lists.fd.io
https://lists.fd.io/mailman/listinfo/vpp-dev


[vpp-dev] [FD.io Helpdesk #36808] vpp-merge-master-{os} Jenkins Jobs are failing

2017-02-28 Thread Vanessa Valderrama via RT
I reviewed this issue with Ed.  It is caused by repeatedly pushing vpp-dpdk-dev 
with the same version.  Please let me know if any other action is needed on 
this ticket.

Thank you,
Vanessa

On Tue Feb 28 06:29:30 2017, mgrad...@cisco.com wrote:
> Dear all,
> 
> after a quick look at the logs,
> it looks like the issue is caused by the vpp-dpdk-dev-17.02-vpp1_amd64.pom
> deploy failure.
> 
> As a consequence, hc2vpp integration jobs are not triggered, which causes
> honeycomb packages to get out of sync much more often.
> 
> The issue was also reported by our team as #37115 some time ago...
> 
> Regards,
> Marek
> 
> From: vpp-dev-boun...@lists.fd.io [mailto:vpp-dev-boun...@lists.fd.io]
> On Behalf Of Dave Wallace
> Sent: 16 February 2017 04:17
> To: helpd...@fd.io; Ed Warnicke 
> Cc: csit-...@lists.fd.io; vpp-dev@lists.fd.io
> Subject: [vpp-dev] vpp-merge-master-{os} Jenkins Jobs are failing
> 
> Dear helpd...@fd.io,
> 
> All of the vpp-merge-master-{os} jobs have been returning 'failure'
> for the past 8 days:
> 
> [cid:image001.png@01D291BC.8904FB90]
> 
> This is causing the downstream vpp-docs-merge-master and vpp-make-
> test-docs-merge-master to NOT be scheduled.
> 
> Why are the docs merge jobs triggered off of the successful completion
> of 'vpp-merge-master-ubuntu1404' instead of being triggered by
> 'gerrit-trigger-patch-merged' and run in parallel with the other merge
> jobs?
> 
> Also, why are the docs merge jobs being triggered off of the vpp
> ubuntu1404 merge job when the os-parameter is being set to
> 'ubuntu1604'?
> 
> Thanks,
> -daw-
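[Editorial note: the triggering question above could in principle be addressed 
by having the docs jobs listen for the Gerrit merge event directly instead of 
chaining off vpp-merge-master-ubuntu1404. A hypothetical Jenkins Job Builder 
sketch; the job name and patterns are illustrative, not the project's actual 
configuration:]

```yaml
# Hypothetical JJB fragment: trigger the docs job on patch merge
# rather than on completion of another merge job.
- job:
    name: vpp-docs-merge-master
    triggers:
      - gerrit:
          trigger-on:
            - change-merged-event
          projects:
            - project-compare-type: 'ANT'
              project-pattern: 'vpp'
              branches:
                - branch-compare-type: 'ANT'
                  branch-pattern: 'master'
```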



___
vpp-dev mailing list
vpp-dev@lists.fd.io
https://lists.fd.io/mailman/listinfo/vpp-dev


Re: [vpp-dev] fd.io CSIT vhost-user test scenario implementation priorities

2017-02-28 Thread Alec Hothan (ahothan)

Comments inline…


From: "Liew, Irene" 
Date: Tuesday, February 28, 2017 at 10:25 AM
To: Thomas F Herbert , "csit-...@lists.fd.io" 
, "Maciek Konstantynowicz (mkonstan)" 
Cc: vpp-dev , "Pierre Pfister (ppfister)" 
, "Alec Hothan (ahothan)" , Karl Rister 
, Douglas Shakshober , Andrew Theurer 
, "Liew, Irene" 
Subject: RE: fd.io CSIT vhost-user test scenario implementation priorities

Here are my thoughts and comments on the topologies/tests and workloads for CSIT 
vhost-user test scenarios. Please refer to my comments inline below.

-Original Message-
From: Thomas F Herbert [mailto:therb...@redhat.com]
Sent: Monday, February 27, 2017 10:04 AM
To: csit-...@lists.fd.io; Maciek Konstantynowicz 
(mkonstan) mailto:mkons...@cisco.com>>
Cc: vpp-dev mailto:vpp-dev@lists.fd.io>>; Liew, Irene 
mailto:irene.l...@intel.com>>; Pierre Pfister (ppfister) 
mailto:ppfis...@cisco.com>>; Alec Hothan (ahothan) 
mailto:ahot...@cisco.com>>; Karl Rister 
mailto:kris...@redhat.com>>; Douglas Shakshober 
mailto:dsh...@redhat.com>>; Andrew Theurer 
mailto:atheu...@redhat.com>>
Subject: fd.io CSIT vhost-user test scenario implementation priorities

Please weigh in:

We are starting to plan fd.io CSIT Vhost-user test scenario priorities for 
implementation in 17.04 and  in 17.07 CSIT releases.

Vhost-user performance is critical for VNF acceptance in potential use cases 
for VPP/fd.io adoption.

We had a previous email thread here:
https://lists.fd.io/pipermail/csit-dev/2016-November/001192.html along with 
TWS (https://wiki.fd.io/view/TWS) meetings on 12/02/16 and 12/07/16, 
summarized in this wiki:
https://wiki.fd.io/view/CSIT/vhostuser_test_scenarios

Topologies and tests

Current in 17.01:

10ge2p1x520-dot1q-l2bdbasemaclrn-eth-2vhost-1vm
10ge2p1x520-dot1q-l2xcbase-eth-2vhost-1vm
10ge2p1x520-ethip4-ip4base-eth-2vhost-1vm
10ge2p1x520-ethip4vxlan-l2bdbasemaclrn-eth-2vhost-1vm
10ge2p1x520-eth-l2bdbasemaclrn-eth-2vhost-1vm
10ge2p1x520-eth-l2xcbase-eth-2vhost-1vm
10ge2p1x710-eth-l2bdbasemaclrn-eth-2vhost-1vm
40ge2p1xl710-eth-l2bdbasemaclrn-eth-2vhost-1vm

single and multi-queue

testing of pmd baseline

Proposed in links above
 1p1nic-dot1q-l2bdbase-eth-2vhost-1vm
 1p1nic-ethip4vxlan-l2bdbase-eth-2vhost-1vm
 1p1nic-dot1q-l2bdbase-eth-4vhost-2vm-chain
 1p1nic-ethip4vxlan-l2bdbase-eth-4vhost-2vm-chain
 1p1nic-dot1q-l2bdbase-eth-2vhost-1vm-chain-2nodes
 1p1nic-ethip4vxlan-l2bdbase-eth-2vhost-1vm-2nodes

[Irene] For baseline testing on vhost-user, I would recommend running core 
scaling from 1 core up to the maximum number of cores for 1 VM Phy-VM-Phy and 
2 VMs PVVP. I know the current VPP v17.01 does not support manually assigning 
the vhost-user port RX queues to specific cores to ensure load balancing across 
the cores. From our experience in the lab, when I ran 3 cores of worker threads 
in the 4vhost-2vm PVVP configuration, I observed that the ports were unevenly 
distributed across the 3 worker threads and VPP vNet suffered in performance 
scalability. If the manual RXQ assignment feature for vhost-user ports is made 
available in the 17.04 or 17.07 release, I strongly propose including core 
scaling of worker threads in order to evaluate the vhost-user RXQ 
core-assignment feature. For example, we can pick 1 test case of 2vhost-1vm and 
run with configurations of 1, 2 and 4 cores of worker threads. We then pick 1 
test case of 4vhost-2vm-chain and run with configurations of 1, 2, 3, 4 and 6 
cores of worker threads.

To limit the number of tests I suggest we use 1, 2 and 4 physical cores. I 
don’t think there will be many deployments with 6 or more physical cores for 
the vswitch (but I’m only talking about OpenStack-NFV deployments).
One interesting variation is to test with and without hyper-threading: no 
hyper-threading = 1, 2, 4 VPP worker threads (mapped onto as many full physical 
cores); with hyper-threading = 2, 4, 8 worker threads (using sibling logical 
threads). We could then check whether we find the same kind of linearity as 
Karl Rister reported (see the other email thread if you missed it).
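[Editorial note: the worker-thread counts discussed above map to the cpu stanza 
of VPP's startup.conf. A sketch, with core numbers that are illustrative and 
machine-dependent:]

```
cpu {
  main-core 1
  corelist-workers 2-3        # 2 worker threads on full physical cores
  # with hyper-threading enabled, add the sibling logical cores,
  # e.g. corelist-workers 2-3,26-27 for 4 worker threads
}
```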




Proposed topologies for OpenStack from links above:

2p1nic-dot1q-l2bdscale-flows-eth-vhost-vm-chain
2p1nic-ethip4vxlan-l2bdscale-flows-eth-vhost-vm-chain

New scenarios Proposed:

Primary Overlay vxlan and VTEP

 2p1nic-ethip4vxlan-l2bdbase-eth-2vhost-1vm
 2p1nic-ethip4vxlan-l2bdbase-eth-20vhost-10vm

[Irene] There is a trend in the industry toward using IPv6 over VXLAN. Shall we 
include an IPv6 VXLAN scenario too?

Do you mean IPv6 inside VxLAN tunnels or VxLAN tunnels using IPv6 UDP addresses?
The term ethip4vxlan means VxLAN tunnels using IPv4 UDP addresses.
I’m not sure many people are using IPv6 for the VxLAN overlay itself.
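[Editorial note: both readings can be expressed in VPP CLI along these lines. 
The addresses and VNI are illustrative, and IPv6 underlay support depends on 
the VPP release in use:]

```
vpp# create vxlan tunnel src 10.0.0.1 dst 10.0.0.2 vni 13
vpp# create vxlan tunnel src 2001:db8::1 dst 2001:db8::2 vni 13
```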



MPLS over Ethernet

Scaling Multiple VMs

 2p1nic-dot1q-l2bdbase-eth-20vhost-10vm


Workloads:

 VNF based on Linux relying on the kernel virtio driver - kernel Linux bridge, 
kernel L3/IPv4 routing

 IPv4/v6 VPP vRouter

[Irene] For the VNF workloads, we need to brainstorm and

[vpp-dev] Tracking Starter Tasks

2017-02-28 Thread Ed Warnicke
I've had a number of people ask "What can I pick up as a starter task to
start contributing".

I've started using the label "StarterTask" (and leaving them unassigned) for
ideas for starter tasks.

You can see them on a filter here:

https://jira.fd.io/issues/?filter=11008

Please contribute Jiras for starter tasks you might have in mind, adding the
label 'StarterTask' to them and leaving them unassigned; then we will have a
filter of tasks folks can pick up.

Ed
___
vpp-dev mailing list
vpp-dev@lists.fd.io
https://lists.fd.io/mailman/listinfo/vpp-dev

Re: [vpp-dev] [FD.io Helpdesk #36808] vpp-merge-master-{os} Jenkins Jobs are failing

2017-02-28 Thread mgrad...@cisco.com via RT
Current hc2vpp jobs expect vpp jobs to be successful (vpp jobs are triggers).

Ed: would it be possible to ignore vpp-dpdk-dev deploy failure?

If not - this is something we can live with (perhaps hc2vpp jobs need some 
hardening).

Found one more issue (it looks like it might be related):
The most current vpp ub16 package is 12 days old:
https://nexus.fd.io/content/repositories/fd.io.master.ubuntu.xenial.main/io/fd/vpp/vpp/

Ub14 and centos7 are OK. Any idea why that is happening?

This is a bigger problem for our integration jobs (we are discussing workaround 
now).

Regards,
Marek

-Original Message-
From: Vanessa Valderrama via RT [mailto:fdio-helpd...@rt.linuxfoundation.org] 
Sent: 28 February 2017 20:49
To: dwallac...@gmail.com
Cc: csit-...@lists.fd.io; hagb...@gmail.com; Jan Srnicek -X (jsrnicek - 
PANTHEON TECHNOLOGIES at Cisco) ; Marek Gradzki -X 
(mgradzki - PANTHEON TECHNOLOGIES at Cisco) ; Peter Lapos 
-X (plapos - PANTHEON TECHNOLOGIES at Cisco) ; 
vpp-dev@lists.fd.io
Subject: [FD.io Helpdesk #36808] vpp-merge-master-{os} Jenkins Jobs are failing

Review this issue with Ed.  This is caused by repeatedly pushing vpp-dpdk-dev 
with the same version.  Please let me know if any other action is needed on 
this ticket.

Thank you,
Vanessa

On Tue Feb 28 06:29:30 2017, mgrad...@cisco.com wrote:
> Dear all,
> 
> after a quick look at the logs,
> it looks like the issue is caused by the vpp-dpdk-dev-17.02-vpp1_amd64.pom
> deploy failure.
> 
> As a consequence, hc2vpp integration jobs are not triggered, which causes
> honeycomb packages to get out of sync much more often.
> 
> The issue was also reported by our team as #37115 some time ago...
> 
> Regards,
> Marek
> 
> From: vpp-dev-boun...@lists.fd.io [mailto:vpp-dev-boun...@lists.fd.io]
> On Behalf Of Dave Wallace
> Sent: 16 February 2017 04:17
> To: helpd...@fd.io; Ed Warnicke 
> Cc: csit-...@lists.fd.io; vpp-dev@lists.fd.io
> Subject: [vpp-dev] vpp-merge-master-{os} Jenkins Jobs are failing
> 
> Dear helpd...@fd.io,
> 
> All of the vpp-merge-master-{os} jobs have been returning 'failure'
> for the past 8 days:
> 
> [cid:image001.png@01D291BC.8904FB90]
> 
> This is causing the downstream vpp-docs-merge-master and vpp-make- 
> test-docs-merge-master to NOT be scheduled.
> 
> Why are the docs merge jobs triggered off of the successful completion 
> of 'vpp-merge-master-ubuntu1404' instead of being triggered by 
> 'gerrit-trigger-patch-merged' and run in parallel with the other merge 
> jobs?
> 
> Also, why are the docs merge jobs being triggered off of the vpp
> ubuntu1404 merge job when the os-parameter is being set to 
> 'ubuntu1604'?
> 
> Thanks,
> -daw-




___
vpp-dev mailing list
vpp-dev@lists.fd.io
https://lists.fd.io/mailman/listinfo/vpp-dev




Re: [vpp-dev] [FD.io Helpdesk #36808] vpp-merge-master-{os} Jenkins Jobs are failing

2017-02-28 Thread mgrad...@cisco.com via RT
+ Ed

-Original Message-
From: vpp-dev-boun...@lists.fd.io [mailto:vpp-dev-boun...@lists.fd.io] On 
Behalf Of Marek Gradzki -X (mgradzki - PANTHEON TECHNOLOGIES at Cisco)
Sent: 1 March 2017 08:07
To: fdio-helpd...@rt.linuxfoundation.org; dwallac...@gmail.com
Cc: Peter Lapos -X (plapos - PANTHEON TECHNOLOGIES at Cisco) 
; csit-...@lists.fd.io; vpp-dev@lists.fd.io
Subject: Re: [vpp-dev] [FD.io Helpdesk #36808] vpp-merge-master-{os} Jenkins 
Jobs are failing

Current hc2vpp jobs expect vpp jobs to be successful (vpp jobs are triggers).

Ed: would it be possible to ignore vpp-dpdk-dev deploy failure?

If not - this is something we can live with (perhaps hc2vpp jobs need some 
hardening).

Found one more issue (it looks like it might be related):
The most current vpp ub16 package is 12 days old:
https://nexus.fd.io/content/repositories/fd.io.master.ubuntu.xenial.main/io/fd/vpp/vpp/

Ub14 and centos7 are OK. Any idea why that is happening?

This is a bigger problem for our integration jobs (we are discussing workaround 
now).

Regards,
Marek

-Original Message-
From: Vanessa Valderrama via RT [mailto:fdio-helpd...@rt.linuxfoundation.org]
Sent: 28 February 2017 20:49
To: dwallac...@gmail.com
Cc: csit-...@lists.fd.io; hagb...@gmail.com; Jan Srnicek -X (jsrnicek - 
PANTHEON TECHNOLOGIES at Cisco) ; Marek Gradzki -X 
(mgradzki - PANTHEON TECHNOLOGIES at Cisco) ; Peter Lapos 
-X (plapos - PANTHEON TECHNOLOGIES at Cisco) ; 
vpp-dev@lists.fd.io
Subject: [FD.io Helpdesk #36808] vpp-merge-master-{os} Jenkins Jobs are failing

Review this issue with Ed.  This is caused by repeatedly pushing vpp-dpdk-dev 
with the same version.  Please let me know if any other action is needed on 
this ticket.

Thank you,
Vanessa

On Tue Feb 28 06:29:30 2017, mgrad...@cisco.com wrote:
> Dear all,
> 
> after a quick look at the logs,
> it looks like the issue is caused by the vpp-dpdk-dev-17.02-vpp1_amd64.pom
> deploy failure.
> 
> As a consequence, hc2vpp integration jobs are not triggered, which causes
> honeycomb packages to get out of sync much more often.
> 
> The issue was also reported by our team as #37115 some time ago...
> 
> Regards,
> Marek
> 
> From: vpp-dev-boun...@lists.fd.io [mailto:vpp-dev-boun...@lists.fd.io]
> On Behalf Of Dave Wallace
> Sent: 16 February 2017 04:17
> To: helpd...@fd.io; Ed Warnicke 
> Cc: csit-...@lists.fd.io; vpp-dev@lists.fd.io
> Subject: [vpp-dev] vpp-merge-master-{os} Jenkins Jobs are failing
> 
> Dear helpd...@fd.io,
> 
> All of the vpp-merge-master-{os} jobs have been returning 'failure'
> for the past 8 days:
> 
> [cid:image001.png@01D291BC.8904FB90]
> 
> This is causing the downstream vpp-docs-merge-master and vpp-make- 
> test-docs-merge-master to NOT be scheduled.
> 
> Why are the docs merge jobs triggered off of the successful completion 
> of 'vpp-merge-master-ubuntu1404' instead of being triggered by 
> 'gerrit-trigger-patch-merged' and run in parallel with the other merge 
> jobs?
> 
> Also, why are the docs merge jobs being triggered off of the vpp
> ubuntu1404 merge job when the os-parameter is being set to 
> 'ubuntu1604'?
> 
> Thanks,
> -daw-



___
vpp-dev mailing list
vpp-dev@lists.fd.io
https://lists.fd.io/mailman/listinfo/vpp-dev

___
vpp-dev mailing list
vpp-dev@lists.fd.io
https://lists.fd.io/mailman/listinfo/vpp-dev

