Re: [gdal-dev] Performance regression testing/benchmarking for CI

2023-10-15 Thread Even Rouault via gdal-dev
On 15/10/2023 at 13:34, Javier Jimenez Shaw via gdal-dev wrote: Hi Even. Thanks, it sounds good. However, I see a potential problem. I see that you use "SetCacheMax" once. We should not forget about that in the future for sensitive tests. The GDAL cache is usually a percentage of the total

Re: [gdal-dev] Performance regression testing/benchmarking for CI

2023-10-15 Thread Javier Jimenez Shaw via gdal-dev
Hi Even. Thanks, it sounds good. However, I see a potential problem. I see that you use "SetCacheMax" once. We should not forget about that in the future for sensitive tests. The GDAL cache is usually a percentage of the total memory, which may change between environments and over time. On Wed, 11 Oc
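
As a side note, pinning the cache inside the benchmark setup might look like the following minimal sketch (the 256 MB figure is only an illustrative value, not something agreed on in this thread):

    # Fix the GDAL raster block cache to an absolute size so benchmark timings
    # do not depend on how much RAM the CI host happens to have.
    from osgeo import gdal

    gdal.SetCacheMax(256 * 1024 * 1024)   # 256 MB, in bytes
    print(gdal.GetCacheMax())             # confirm the value actually in effect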

Re: [gdal-dev] Performance regression testing/benchmarking for CI

2023-10-10 Thread Laurențiu Nicola via gdal-dev
Hi, No experience with pytest-benchmark, but I maintain an unrelated project that runs some benchmarks on CI, and here are some things worth mentioning: - we store the results as a newline-delimited JSON file in a different GitHub repository (https://raw.githubusercontent.com/rust-analyzer/me
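
For illustration only (this is not the rust-analyzer setup itself), appending one run as a line of newline-delimited JSON could be as simple as the sketch below; the field names and file path are hypothetical:

    # Append a single benchmark record as one JSON line; the resulting file can be
    # committed by CI to a separate results repository and charted over time.
    import json, time

    record = {
        "timestamp": time.time(),
        "commit": "abc1234",        # hypothetical: revision being benchmarked
        "benchmark": "gtiff_read",  # hypothetical benchmark name
        "seconds": 1.42,            # hypothetical measured wall time
    }

    with open("results.ndjson", "a") as f:
        f.write(json.dumps(record) + "\n")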

Re: [gdal-dev] Performance regression testing/benchmarking for CI

2023-10-10 Thread Daniel Evans via gdal-dev
Hi Even, > With virtualization, it is hard to guarantee that other things happening on the host running the VM might not interfere. Even locally on my own machine, I initially saw strong variations in timings. The advice I've come across for benchmarking is to use the minimum time from the set of r
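
To illustrate the idea (a rough sketch, not the thread's actual harness): the minimum over repeated runs is the observation least disturbed by other activity on the machine, so a helper like this, with a placeholder repeat count, captures it:

    # Run a callable several times and report the fastest wall-clock time.
    import time

    def best_of(fn, repeats=5):
        timings = []
        for _ in range(repeats):
            start = time.perf_counter()
            fn()
            timings.append(time.perf_counter() - start)
        return min(timings)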