https://gcc.gnu.org/bugzilla/show_bug.cgi?id=98678
--- Comment #8 from ro at CeBiTec dot Uni-Bielefeld.DE <ro at CeBiTec dot Uni-Bielefeld.DE> ---
> --- Comment #1 from Jonathan Wakely <redi at gcc dot gnu.org> ---
> This test is a bit tricky. The whole point is to check that performance of
> one operation is acceptable compared to a baseline. But the definition of
> "acceptable" and the relative difference between the speed of the different
> operations varies with arch. We could just increase the tolerances, but then
> we allow worse performance on the targets that don't need it. Maybe we want
> to change the 30 and 100 magic numbers to depend on the target.

I've made some more checks on Solaris now.  The test consistently PASSes
on Solaris/SPARC, both 32-bit and 64-bit.  However, on Solaris/x86 the
failure is just as reliable, e.g.

/vol/gcc/src/hg/master/local/libstdc++-v3/testsuite/30_threads/future/members/poll.cc:132: int main(): Assertion 'wait_until_sys_min < (ready * 100)' failed.

wait_for(0s): 3674ns for 200 calls, avg 18.37ns per call
wait_until(system_clock minimum): 419918ns for 200 calls, avg 2099.59ns per call
wait_until(steady_clock minimum): 459775ns for 200 calls, avg 2298.88ns per call
wait_until(system_clock epoch): 1117280ns for 200 calls, avg 5586.4ns per call
wait_until(steady_clock epoch: 956073ns for 200 calls, avg 4780.36ns per call
wait_for when ready: 3194ns for 200 calls, avg 15.97ns per call

It also makes no difference whether the system is under full load or
completely idle.

I've also checked a wider range of systems/CPUs:

host          32-bit  64-bit  CPU
nahe          1.31    1.40    2.60 GHz Xeon Gold 6132
lokon         1.43    1.66    3.10 GHz Core i5-2400
itzacchiuatl  0.69    1.53    3.20 GHz Core i7-8700
manam         0.89    2.22    3.50 GHz Xeon E3-1245
lucy          0.54    0.59    2.00 GHz Xeon E7-4850

The attached patch uses a scale factor of 2.5 to accommodate this.
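
For context, the failing check compares the time for repeated wait_until
calls on a not-yet-ready future against the time for wait_for calls on an
already-ready future, with a fixed multiplier as the tolerance.  The
following is only a rough sketch of that measurement pattern, not the
actual poll.cc source and not the attached patch; the iteration count,
variable names, and the way a 2.5 scale factor is folded in are
illustrative assumptions:

#include <cassert>
#include <chrono>
#include <future>

int main()
{
  std::promise<int> p;
  std::future<int> f = p.get_future();

  // Time 200 polling calls with steady_clock and return the elapsed time.
  auto time_calls = [&f](auto&& call) {
    auto start = std::chrono::steady_clock::now();
    for (int i = 0; i < 200; ++i)
      call(f);
    return std::chrono::steady_clock::now() - start;
  };

  // Poll a not-yet-ready future at the system_clock minimum time point.
  auto wait_until_sys_min = time_calls([](std::future<int>& fut) {
    fut.wait_until(std::chrono::system_clock::time_point::min());
  });

  p.set_value(1);

  // Baseline: poll a future that is already ready.
  auto ready = time_calls([](std::future<int>& fut) {
    fut.wait_for(std::chrono::seconds(0));
  });

  // The real test uses a fixed tolerance of 100; a scale factor
  // (2.5 here, the value mentioned above) would relax it.
  // Sketch only, not the attached patch.
  assert(wait_until_sys_min < ready * 100 * 2.5);
}

With the x86 numbers above, the system_clock-minimum case is roughly 131x
the ready baseline (419918ns vs. 3194ns), so the fixed 100x limit fails,
while 100 * 2.5 would leave some headroom.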