> is distinguishing performance differences caused by environment

The main contribution has been fixed; start clicking here [2] if interested.
> We will probably create a separate page for the new detection,

It took a while, as I believe some tests are still not reliable enough, but it is done [3]. (You need the "index.html" part, otherwise you will be redirected to the regular trending graphs.)

Let me know whether the new graphs are useful, or whether I should try to find a middle ground sensitivity-wise. Personally, I think we should investigate the less reliable tests, while keeping the sensitivity high. (But I am not the one examining each possible regression.)

Vratko.

[2] https://lists.fd.io/g/csit-dev/message/3247
[3] https://docs.fd.io/csit/master/trending/new/index.html

From: csit-...@lists.fd.io <csit-...@lists.fd.io> On Behalf Of Vratko Polak -X (vrpolak - PANTHEON TECHNOLOGIES at Cisco) via Lists.Fd.Io
Sent: Friday, 2018-September-21 13:02
To: vpp-dev@lists.fd.io
Cc: csit-...@lists.fd.io
Subject: Re: [csit-dev] anomaly detection changes

Quick update, we are stuck.

> the test is now executing 10 trials of 1 second
> so in theory nothing should change

No big changes were visible, so we have kept this change.

> to take standard deviations into account
> to detect smaller regressions and progressions.

But we are not going ahead with the second change. The reason is that, with it, the anomaly detection algorithm is distinguishing performance differences caused by environment (as opposed to VPP code effectiveness) in most tests. The attached part of a graph shows the changed behavior. Most graphs are affected by this, but most of the time it is less obvious whether the frequent anomalies are related to VPP. We will probably create a separate page for the new detection, but we will keep the old page as the default for now.

Vratko.

From: Vratko Polak -X (vrpolak - PANTHEON TECHNOLOGIES at Cisco)
Sent: Monday, 2018-September-10 14:06
To: vpp-dev@lists.fd.io
Cc: csit-dev <csit-...@lists.fd.io>
Subject: anomaly detection changes

Hello people watching the trending pages [0].

We have just merged a Change [1] which affects how MRR tests work. Instead of a single trial of 10-second duration, the test is now executing 10 trials of 1 second. PAL is told to aggregate the results, so in theory nothing should change, but in practice the anomaly detection could spot some regressions or progressions. If the anomalies are big, we will revert. If they are small (or none), we will tell PAL (next week) to take standard deviations into account, which will definitely make the anomaly detection mark many regressions and progressions (just because it is not easy to keep the detection algorithm compatible with several trial durations at once). After that, the anomaly detection should be able to detect smaller regressions and progressions. If it starts detecting more noise than signal, we will revert (and try something else).

Vratko.

[0] https://docs.fd.io/csit/master/trending/introduction/dashboard.html
[1] https://gerrit.fd.io/r/14596