Hi!

On Tue, Jun 24, 2025 at 09:32:58AM +0100, David Laight wrote:
> > So GCC uses the 'unlikely' variant of the branch instruction to force
> > the correct prediction, doesn't it ?
>
> Nope...
> Most architectures don't have likely/unlikely variants of branches.
In GCC, "likely" means 80%. "Very likely" means 99.95%. Most branches get something more fine-grained than such coarse values predicted.

Most of the time GCC uses these predicted branch probabilities to lay out code in such a way that the fall-through path is the expected one. Target backends can do special things with the probabilities as well, but usually that isn't necessary.

There are many different predictors. GCC usually can predict things reasonably well just by looking at the shape of the code, using various heuristics. Things like profile-guided optimisation let you feed a profile from an actual execution back in, to optimise the code so that it runs faster (assuming that future executions of the code behave similarly!)

You also can use __builtin_expect() in the source code, to put coarse static predictions in. That is what the kernel "{un,}likely" macros do.

If the compiler knows some branch is not very predictable, it can optimise the code with that knowledge. For example, it could use strategies other than conditional branches, such as conditional moves.

On old CPUs, something like "this branch is taken 50% of the time" makes it a totally unpredictable branch. But if, say, it branches exactly every second time, it is predicted 100% correctly by more advanced predictors, not just 50%. To properly model modern branch predictors we would need to record a "how predictable is this branch" score for every branch as well, not just a "how often does it branch instead of falling through" score. We're not there yet.


Segher