On Sun, Jun 9, 2019 at 6:01 PM Mihir Luthra <1999mihir.lut...@gmail.com> wrote:
>
> Hi jan,
>
>> How exactly were these numbers obtained? Is it one run?
>> An average of ten runs? All following a complete distclean?
>> Is it the "real" time as reported by time(1) or somethin else?
>> What are the other times reported by time(1), as in
>>
>> $ time sleep 5
>>     0m05.01s real     0m00.00s user    0m00.00s system
>
> The time I sent that mail, I was mainly testing same ports repeatedly to rule
> out the bugs. So I just took an average of them.
I would like to see the time for each run (if 10 runs, then 10 columns, i.e. xx_run1, xx_run2, ...), rather than only the average. Let's collect as many insights as we can; maybe we will find some pattern or something that helps us (not sure what, though). I have a feeling we might get better performance in one run and not so good in another. I notice that some ports take less time with the modification and some ports take more time; I just want to make sure that holds across multiple runs.

In an ideal scenario, I would prefer a clean environment for each test run (a fresh install of the OS, bare-minimum apps running, a reboot before each run, etc.), so the measurements are not affected by other processes running in the background. Containers maybe? Or some CI like Travis?

--
Jackson Isaac
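P.S. Something along these lines is what I have in mind for collecting the per-run numbers. Just a rough sh sketch, assuming MacPorts' `port` command is installed; the port names and RUNS count are only examples, substitute the ports actually being tested:

  #!/bin/sh
  # Rough sketch: time N from-scratch builds of each port and print one
  # row per port with one comma-separated column per run.

  RUNS=10
  PORTS="gettext libiconv"   # example ports only

  for p in $PORTS; do
      printf '%s' "$p"
      i=1
      while [ "$i" -le "$RUNS" ]; do
          # start from a clean state so every run rebuilds from scratch
          sudo port clean --all "$p" >/dev/null 2>&1
          start=$(date +%s)
          sudo port -N install "$p" >/dev/null 2>&1
          end=$(date +%s)
          sudo port uninstall "$p" >/dev/null 2>&1
          # elapsed wall-clock ("real") seconds for this run
          printf ',%s' "$((end - start))"
          i=$((i + 1))
      done
      printf '\n'
  done

Each row could then go straight into the comparison table (one column per run). Whole-second granularity from date(1) should be enough for builds that take minutes; /usr/bin/time could be used instead if finer resolution is needed.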