Hi!

On Thu, Mar 03, 2022 at 10:11:32AM +0800, Kewen.Lin wrote:
> As PR103623 shows, this is a regression caused by the new built-in
> function framework.  Previously we guarded the __builtin_{un,}pack_{longdouble,
> ibm128} built-in functions under hard float, so they were unavailable
> with the given configuration.  With the new bif infrastructure they
> become available and cause an ICE due to incomplete support.
>
> Segher and Peter pointed out that we should make them available with
> soft float, and I agree we can extend them to cover both soft and hard
> float.  But considering it is stage 4 now, this regression is
> classified as P1, and the previous behavior of requiring hard float
> aligns with what the documentation [1] says, it may be a good idea to
> fix it with the attached small patch, staying consistent with the
> previous behavior.  Then we can extend the functionality in the
> upcoming stage 1.
Or you could just not take away the existing functionality.  What makes
it ICE on (at least some configurations of) 32-bit now?  Can you exclude
just 32-bit soft float?


Segher
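
For reference, a minimal sketch (not from the patch or this thread) of
how the affected built-ins are typically used, with signatures as
documented in the GCC manual; per PR103623, code along these lines
reportedly hits the ICE once the built-ins become visible under a
soft-float configuration:

/* __builtin_pack_longdouble combines two doubles into the high and low
   halves of an IBM double-double long double; __builtin_unpack_longdouble
   extracts one half (the second argument must be a constant 0 or 1).  */

long double
pack (double hi, double lo)
{
  return __builtin_pack_longdouble (hi, lo);
}

double
high_part (long double x)
{
  return __builtin_unpack_longdouble (x, 0);
}

The __builtin_{un,}pack_ibm128 variants are analogous but operate on the
__ibm128 type instead of long double.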