On Thu, Dec 3, 2020 at 6:33 PM Richard Sandiford <richard.sandif...@arm.com> wrote:
>
> Richard Biener via Gcc-patches <gcc-patches@gcc.gnu.org> writes:
> > On Tue, Nov 24, 2020 at 4:47 PM Qing Zhao <qing.z...@oracle.com> wrote:
> >> Another issue is, in order to check whether an auto-variable has an
> >> initializer, I plan to add a new bit in "decl_common" as:
> >>
> >>   /* In a VAR_DECL, this is DECL_IS_INITIALIZED.  */
> >>   unsigned decl_is_initialized :1;
> >>
> >>   /* In a VAR_DECL, set when the decl is initialized at the declaration.  */
> >>   #define DECL_IS_INITIALIZED(NODE) \
> >>     (DECL_COMMON_CHECK (NODE)->decl_common.decl_is_initialized)
> >>
> >> Set this bit when setting DECL_INITIAL for the variables in the FE, then
> >> keep it even though DECL_INITIAL might be NULLed.
> >
> > For locals it would be more reliable to set this flag during gimplification.
> >
> >> Do you have any comments and suggestions?
> >
> > As said above - do you want to cover registers as well as locals?  I'd do
> > the actual zeroing during RTL expansion instead, since otherwise you
> > have to figure out yourself whether a local is actually used (see
> > expand_stack_vars).
> >
> > Note that optimization will already have made use of the "uninitialized"
> > state of locals, so depending on what the actual goal is here, "late" may
> > be too late.
>
> Haven't thought about this much, so it might be a daft idea, but would a
> compromise be to use a const internal function:
>
>   X1 = .DEFERRED_INIT (X0, INIT)
>
> where the X0 argument is an uninitialised value and the INIT argument
> describes the initialisation pattern?  So for a decl we'd have:
>
>   X = .DEFERRED_INIT (X, INIT)
>
> and for an SSA name we'd have:
>
>   X_2 = .DEFERRED_INIT (X_1(D), INIT)
>
> with all other uses of X_1(D) being replaced by X_2.  The idea is that:
>
> * Having the X0 argument would keep the uninitialised use of the
>   variable around for the later warning passes.
> * Using a const function should still allow the UB to be deleted as dead
>   if X1 isn't needed.
>
> * Having a function in the way should stop passes from taking advantage
>   of direct uninitialised uses for optimisation.
>
> This means we won't be able to optimise based on the actual init
> value at the gimple level, but that seems like a fair trade-off.
> AIUI this is really a security feature or anti-UB hardening feature
> (in the sense that users are more likely to see predictable behaviour
> "in the field" even if the program has UB).
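[For concreteness, the wrapping Richard sketches above might look like the
following on a toy function.  This is GIMPLE-style pseudocode only: the exact
dump syntax and the spelling of the INIT argument ("ZERO") are guesses, not a
committed design.]

```
int f (int c)
{
  int x;                       /* deliberately uninitialised */
  if (c)
    x = 5;
  return x;                    /* maybe-uninitialised use */
}

/* SSA form today: the default def x_1(D) flows straight into the PHI,
   so optimizers can exploit the undefinedness directly:  */

  # x_3 = PHI <x_1(D), x_2>

/* With the proposal: the default def is first funnelled through the
   const internal call, so the uninitialised operand stays visible to
   the -Wuninitialized passes while direct uses are blocked:  */

  x_4 = .DEFERRED_INIT (x_1(D), ZERO)
  # x_3 = PHI <x_4, x_2>
```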
The question is whether it's in line with people's expectations that
explicitly zero-initialized code behaves differently from implicitly
zero-initialized code with respect to optimization and secondary
side-effects (late diagnostics, latent bugs, etc.).  Introducing a new
concept like .DEFERRED_INIT is much more heavyweight than an explicit
zero initializer.

As for optimization, I fear you'll get a load of redundant zero-init
actually emitted if you just rely on RTL DSE/DCE to remove it.

Btw, I don't think there's any reason to cling onto clang's semantics
for a particular switch.  We'll never be able to emulate 1:1 behavior
and our -Wuninit behavior is probably vastly different already.

Richard.

> Thanks,
> Richard