Hi,

We spotted a small packet throughput regression on the vpp master branch
affecting all perf test cases trended in CSIT by job [1]. All tests
reported results lower by 0.3..0.5 Mpps.

job_id  vpp_version             job_date    job_start_time  job_duration
#83     17.10-rc0~42-g28160f3   10-Jul      19:43 UTC       12hr
#84     17.10-rc0~48-g2c25a62   11-Jul      19:45 UTC       12hr

Related Jenkins plugin plotted graphs are at [2].
It could be just an outlier, but we want to give everyone a heads-up.
We will monitor the situation and provide updates on this thread.
vpp master git log for 17.10-rc0~42..17.10-rc0~48 is provided at [3].

We welcome any ideas about what could have caused the performance diff
between rc0~42 and rc0~48.

On a side note: the last vpp master trending job #85 yielded slightly
better results compared to #84. This is due to a single patch,
17.10-rc0~48..17.10-rc0~49-g690d26c, done by Damjan to address CPU stalls
by prefetching the 2nd cacheline of rte_mbuf during tx with dpdk [4].

-Maciek

---
[1] jenkins job page:
    - https://jenkins.fd.io/view/csit/job/csit-vpp-perf-trend-daily-master/

---
[2] jenkins plugin trending graphs (yes we know they're ugly - need to invest 
time to code better ones):
    - https://jenkins.fd.io/view/csit/job/csit-vpp-perf-trend-daily-master/plot/RFC2544%3A%20IPv4%20base,%20scale,%20feature/
    - https://jenkins.fd.io/view/csit/job/csit-vpp-perf-trend-daily-master/plot/RFC2544%3A%20IPv6%20base,%20scale,%20feature/
    - https://jenkins.fd.io/view/csit/job/csit-vpp-perf-trend-daily-master/plot/RFC2544%3A%20Overlay%20tunnels%20-%20LISP,%20VXLAN,%20GPE,%20GRE/
    - https://jenkins.fd.io/view/csit/job/csit-vpp-perf-trend-daily-master/plot/RFC2544%3A%20Xconnect%20and%20Bridge%20Domain/

---
[3] git log 28160f3..2c25a62

commit 2c25a62cc1cc4937165de740a3b32d78429c72d6
Author: Dave Barach <dbar...@cisco.com>
Date:   Mon Jun 26 11:35:07 2017 -0400
 
    Horizontal (nSessions) scaling draft
    
    - Data structure preallocation.
    - Input state machine fixes for mid-stream 3-way handshake retries.
    - Batch connections in the builtin_client
    - Multiple private fifo segment support
    - Fix elog simultaneous event type registration
    - Fix sacks when segment hole is added after highest sacked
    - Add "accepting" session state for sessions pending accept
    - Add ssvm non-recursive locking
    - Estimate RTT for syn-ack
    - Don't init fifo pointers. We're using relative offsets for ooo
      segments
    - CLI to dump individual session
    
    Change-Id: Ie0598563fd246537bafba4feed7985478ea1d415
    Signed-off-by: Dave Barach <dbar...@cisco.com>
    Signed-off-by: Florin Coras <fco...@cisco.com>
 
commit 8af1b2fdecc883eadfec6b91434adc6044e24cb2
Author: Eyal Bari <eb...@cisco.com>
Date:   Tue Jul 11 14:24:37 2017 +0300
 
    L2INPUT:fix features mask cailculation
    
    Change-Id: I84cea7530b01302a0adeef95b4924f54dc2e41ec
    Signed-off-by: Eyal Bari <eb...@cisco.com>
 
commit e1f08898aed2dbc91115205959821f93bb821d34
Author: Damjan Marion <damar...@cisco.com>
Date:   Tue Jul 11 12:05:06 2017 +0200
 
    memif: avoid double buffer free
    
    Change-Id: I902f54618c4e1f649af11497c1cb10922e43755a
    Signed-off-by: Damjan Marion <damar...@cisco.com>
 
commit 3bb5baf9f25cbb7726f003e4c1f419dadadcab96
Author: Matus Fabian <matfa...@cisco.com>
Date:   Sun Jul 9 23:31:41 2017 -0700
 
    SNAT: fixed bug in fallback to 3-tuple key for non TCP/UDP sessions
    
    Change-Id: I1c4d5f92ec841b1cfe1a33eab4bb94e4001d0411
    Signed-off-by: Matus Fabian <matfa...@cisco.com>
 
commit 75e2f2ac39871554c05a9a240ac26a6028ee3e99
Author: Eyal Bari <eb...@cisco.com>
Date:   Mon Jul 10 10:12:13 2017 +0300
 
    API:fix arp/ND event messages - remove context
    
    context causes the message to be treated as a reply by the python API
    
    Change-Id: Icf4d051a69f5a2cb9be5879accfe030ebcd650a8
    Signed-off-by: Eyal Bari <eb...@cisco.com>
 
commit 04a7f05e91e919f51eaecaee476435484076655b
Author: Damjan Marion <damar...@cisco.com>
Date:   Mon Jul 10 15:06:17 2017 +0200
 
    vlib: store buffer memory information in the buffer_main
    
    Currently, buffer index is calculated as a offset to the physmem
    region shifted by log2_cacheline size.
    
    When DPDK is used we "hack" physmem data with information taken from
    dpdk mempool. This makes physmem code not usable with DPDK.
    
    This change makes buffer memory start and size independent of physmem
    basically allowing physmem to be used when DPDK plugin is loaded.
    
    Change-Id: Ieb399d398f147583b9baab467152a352d58c9c31
    Signed-off-by: Damjan Marion <damar...@cisco.com>

---
[4] git log 2c25a62..690d26c
 
commit 690d26c6b9ddbd1a252e0eff61a28a62fc740432
Author: Damjan Marion <damar...@cisco.com>
Date:   Tue Jul 11 17:13:37 2017 +0200
 
    dpdk: prefetch 2nd cacheline of rte_mbuf during tx
    
    Change-Id: I0db02dd0147dbd47d4296fdb84280d0e7d321f3c
    Signed-off-by: Damjan Marion <damar...@cisco.com>

---
_______________________________________________
vpp-dev mailing list
vpp-dev@lists.fd.io
https://lists.fd.io/mailman/listinfo/vpp-dev
