So it appears to me that what commit 6bf0bc842 did in this area was not just wrong, but disastrously so. Before that, we had a macro that evaluated isinf(val) before it evaluated the inf_is_valid condition. Now we have check_float[48]_val, which do it the other way around. That would be okay if the inf_is_valid condition were cheap to evaluate, but in common code paths it's actually twice as expensive as isinf().
Andres seems to be of the opinion that the compiler should be willing to ignore the semantic requirements of the C standard in order to rearrange the code back into the cheaper order. That sounds like wishful thinking to me ... even if it actually works on his compiler, it certainly isn't going to work for everyone.

The patch looks unduly invasive to me, but I think it may be right that we should go back to a macro-based implementation, because otherwise we don't have a good way to be certain that the function parameter won't get evaluated first. (Another reason to do so is that the file/line numbers generated for the error reports would go back to being at least a little bit useful.) We could use local variables within the macro to avoid double evaluation, if anyone thinks that's actually important --- I don't.

I also think the current code is probably misusing unlikely(), and that the right way would be more like

    if (unlikely(isinf(val) && !(inf_is_valid)))

			regards, tom lane