Hi,

> With modern macOS and modern hardware there are performance effects that
> you can't control, perhaps most significantly thermal throttling, but
> also background jobs like Time Machine backups and iCloud sync.
>
> You need to apply a little bit of statistical theory. Standard error
> increases with variance, but can be decreased by increasing the sample
> size. Calculate what sample size is needed to bring the standard error
> down to an acceptable level.
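For concreteness, that calculation can be sketched in a few lines: the standard error of the mean is s / sqrt(n), so solving for n gives n = (s / target)^2, with s estimated from a small pilot run. The pilot numbers below are made up, not real measurements:

```python
import math
import statistics

def required_sample_size(pilot_times, target_se):
    """Estimate how many runs are needed so that the standard error
    of the mean, s / sqrt(n), drops to target_se, using the sample
    standard deviation s from a small pilot run."""
    s = statistics.stdev(pilot_times)
    return math.ceil((s / target_se) ** 2)

# Hypothetical pilot: five real-time measurements in seconds.
pilot = [12.1, 11.8, 13.0, 12.4, 14.2]
print(required_sample_size(pilot, target_se=0.2))  # -> 23
```

Since s is itself only an estimate from the pilot, the result is a rough lower bound rather than an exact requirement.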
I will try to get more samples this week [1]. So far I have made so many
changes to the code that I had to redo the tests. It also seems right that
many ports need to be tested: even if I test a port 5 times in a row
(everything else closed) and then run the same port another 5 times in a
row on a different occasion, the results vary quite a bit. That applies
mostly to real time; sys and user time show differences too, but not as
big. Surprisingly, while testing this week I noticed that
__darwintrace_setup() took more time than __darwintrace_is_in_sandbox().
I had assumed registry querying would be the slowest part of all, but
according to the times I printed, that wasn't the case.

> I don't think there is much value in going to so much effort to ensure a
> completely cold cache before starting. Close other apps yes, but don't
> reboot or reinstall everything. Since we don't have control over disk
> cache misses, and a warm cache is more likely in the real world anyway,
> it's common practice to pre-warm the cache by doing one run before you
> start recording results.

[1] https://docs.google.com/spreadsheets/d/1ksj3Fex-AnTEU4f4IRzwUkTpN4XfUye-HqSdZwXOsKs/edit#gid=0
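The pre-warming practice quoted above can be sketched as a small harness: run the command once without recording, then time the next n runs. This assumes the build step can be invoked as a subprocess; `["true"]` here is only a stand-in for the real `port install ...` invocation:

```python
import subprocess
import time

def time_runs(cmd, n):
    """Run cmd once untimed to warm the cache, then record n
    wall-clock samples. cmd is an argument list, e.g.
    ["port", "install", "someport"]."""
    subprocess.run(cmd, check=True)  # warm-up run, result discarded
    samples = []
    for _ in range(n):
        start = time.perf_counter()
        subprocess.run(cmd, check=True)
        samples.append(time.perf_counter() - start)
    return samples

# Placeholder command; substitute the actual build being measured.
print(time_runs(["true"], 3))
```

For install tests the port would of course have to be uninstalled between runs, which this sketch does not handle.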