> On Dec 7, 2020, at 12:05 PM, Richard Sandiford <richard.sandif...@arm.com>
> wrote:
>
> Qing Zhao <qing.z...@oracle.com> writes:
>>> On Dec 7, 2020, at 11:10 AM, Richard Sandiford <richard.sandif...@arm.com>
>>> wrote:
>>>>>>
>>>>>> Another issue is, in order to check whether an auto-variable has
>>>>>> initializer, I plan to add a new bit in “decl_common” as:
>>>>>> /* In a VAR_DECL, this is DECL_IS_INITIALIZED. */
>>>>>> unsigned decl_is_initialized :1;
>>>>>>
>>>>>> /* In a VAR_DECL, set when the decl is initialized at the declaration.  */
>>>>>> #define DECL_IS_INITIALIZED(NODE) \
>>>>>> (DECL_COMMON_CHECK (NODE)->decl_common.decl_is_initialized)
>>>>>>
>>>>>> Set this bit when setting DECL_INITIAL for the variables in the FE,
>>>>>> then keep it even though DECL_INITIAL might be NULLed later.
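>>>>>>
>>>>>> For illustration only (a rough sketch, not actual patch code), the FE
>>>>>> side of this would be something like:
>>>>>>
>>>>>>   /* Record the initializer and remember that this auto variable had
>>>>>>      one, so the information survives even if DECL_INITIAL is later
>>>>>>      cleared.  */
>>>>>>   DECL_INITIAL (decl) = init;
>>>>>>   if (VAR_P (decl) && !TREE_STATIC (decl))
>>>>>>     DECL_IS_INITIALIZED (decl) = 1;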
>>>>>>
>>>>>>
>>>>>> For locals it would be more reliable to set this flag during
>>>>>> gimplification.
>>>>>>
>>>>>> Do you have any comment and suggestions?
>>>>>>
>>>>>>
>>>>>> As said above - do you want to cover registers as well as locals? I'd do
>>>>>> the actual zeroing during RTL expansion instead, since otherwise you
>>>>>> have to figure out yourself whether a local is actually used (see
>>>>>> expand_stack_vars).
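>>>>>>
>>>>>> (Sketch only: for a scalar local that does end up with a stack slot or
>>>>>> register, the zeroing at expansion time would be roughly
>>>>>>
>>>>>>   /* Zero the variable's rtx once it has actually been allocated.  */
>>>>>>   emit_move_insn (DECL_RTL (decl), CONST0_RTX (DECL_MODE (decl)));
>>>>>>
>>>>>> done only for the variables that really get allocated, so unused locals
>>>>>> never get a redundant store.)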
>>>>>>
>>>>>> Note that optimization will already have made use of the "uninitialized"
>>>>>> state of locals, so depending on what the actual goal is here, "late"
>>>>>> may be too late.
>>>>>>
>>>>>>
>>>>>> Haven't thought about this much, so it might be a daft idea, but would a
>>>>>> compromise be to use a const internal function:
>>>>>>
>>>>>> X1 = .DEFERRED_INIT (X0, INIT)
>>>>>>
>>>>>> where the X0 argument is an uninitialised value and the INIT argument
>>>>>> describes the initialisation pattern? So for a decl we'd have:
>>>>>>
>>>>>> X = .DEFERRED_INIT (X, INIT)
>>>>>>
>>>>>> and for an SSA name we'd have:
>>>>>>
>>>>>> X_2 = .DEFERRED_INIT (X_1(D), INIT)
>>>>>>
>>>>>> with all other uses of X_1(D) being replaced by X_2. The idea is that:
>>>>>>
>>>>>> * Having the X0 argument would keep the uninitialised use of the
>>>>>> variable around for the later warning passes.
>>>>>>
>>>>>> * Using a const function should still allow the UB to be deleted as dead
>>>>>> if X1 isn't needed.
>>>>>>
>>>>>> * Having a function in the way should stop passes from taking advantage
>>>>>> of direct uninitialised uses for optimisation.
>>>>>>
>>>>>> This means we won't be able to optimise based on the actual init
>>>>>> value at the gimple level, but that seems like a fair trade-off.
>>>>>> AIUI this is really a security feature or anti-UB hardening feature
>>>>>> (in the sense that users are more likely to see predictable behaviour
>>>>>> “in the field” even if the program has UB).
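>>>>>>
>>>>>> Concretely, to illustrate the scheme above: for
>>>>>>
>>>>>>   int x;
>>>>>>   foo (x);
>>>>>>
>>>>>> the gimplified code would contain something like
>>>>>>
>>>>>>   x = .DEFERRED_INIT (x, INIT);
>>>>>>   foo (x);
>>>>>>
>>>>>> and after going into SSA form
>>>>>>
>>>>>>   x_2 = .DEFERRED_INIT (x_1(D), INIT);
>>>>>>   foo (x_2);
>>>>>>
>>>>>> so the only remaining use of the uninitialised x_1(D) is as the argument
>>>>>> of the internal call itself.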
>>>>>>
>>>>>>
>>>>>> The question is whether it's in line with people's expectations that
>>>>>> explicitly zero-initialized code behaves differently from
>>>>>> implicitly zero-initialized code with respect to optimization
>>>>>> and secondary side-effects (late diagnostics, latent bugs, etc.).
>>>>>>
>>>>>> Introducing a new concept like .DEFERRED_INIT is much more
>>>>>> heavy-weight than an explicit zero initializer.
>>>>>>
>>>>>>
>>>>>> What exactly do you mean by “heavy-weight”? More difficult to implement,
>>>>>> much more run-time overhead, or both? Or something else?
>>>>>>
>>>>>> The major benefit of the “.DEFERRED_INIT” approach is that it enables us
>>>>>> to keep the current -Wuninitialized analysis untouched and also to pass
>>>>>> the “uninitialized” info from the source-code level down to “pass_expand”.
>>>>>
>>>>> Well, "untouched" is a bit oversimplified. You do need to handle
>>>>> .DEFERRED_INIT as not being an initialization, which will definitely
>>>>> get interesting.
>>>>
>>>> Yes, during the uninitialized variable analysis pass, we should handle
>>>> defs from “.DEFERRED_INIT” specially and treat them as uninitialized.
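>>>>
>>>> Roughly (a sketch only, and assuming the internal function ends up being
>>>> called IFN_DEFERRED_INIT), the extra check in the uninit pass would look
>>>> something like:
>>>>
>>>>   gimple *def = SSA_NAME_DEF_STMT (name);
>>>>   /* A def coming from the artificial initialization does not count as
>>>>      a real initialization; keep treating NAME as uninitialized.  */
>>>>   if (is_gimple_call (def)
>>>>       && gimple_call_internal_p (def, IFN_DEFERRED_INIT))
>>>>     ...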
>>>
>>> Are you sure we need to do that? The point of having the first argument
>>> to .DEFERRED_INIT was that that argument would still provide an
>>> uninitialised use of the variable. And the values are passed and
>>> returned by value, so the lack of initialisation is explicit in
>>> the gcall itself, without knowing what the target function does.
>>>
>>> The idea is that we can essentially treat .DEFERRED_INIT as a normal
>>> (const) function call. I'd be surprised if many passes needed to
>>> handle it specially.
>>>
>>
>> I just checked with a small test case (to emulate the .DEFERRED_INIT
>> approach):
>>
>> qinzhao@gcc10:~/Bugs/auto-init$ cat t.c
>> extern int DEFFERED_INIT (int, int) __attribute__ ((const));
>>
>> int foo (int n, int r)
>> {
>>   int v;
>>
>>   v = DEFFERED_INIT (v, 0);
>>   if (n < 10)
>>     v = r;
>>
>>   return v;
>> }
>> qinzhao@gcc10:~/Bugs/auto-init$ sh t
>> /home/qinzhao/Install/latest_write/bin/gcc -O -Wuninitialized
>> -fdump-tree-all -S t.c
>> t.c: In function ‘foo’:
>> t.c:7:7: warning: ‘v’ is used uninitialized [-Wuninitialized]
>>     7 |   v = DEFFERED_INIT (v, 0);
>>       |       ^~~~~~~~~~~~~~~~~~~~
>>
>> We can see that the current uninitialized variable analysis treats the
>> newly added artificial initialization as the first use of the uninitialized
>> variable, and therefore reports the warning there.
>> However, the warning should be reported at “return v” instead.
>
> Ah, OK, so this is about the quality of the warning, rather than about
> whether we report a warning or not?
>
>> So, I think that we still need to handle the newly added artificial
>> initialization specially during the uninitialized analysis phase.
>
> Yeah, that sounds like one approach. But if we're adding .DEFERRED_INIT
> in response to known uninitialised uses, two other approaches might be:
>
> (1) Give the call the same source location as one of the uninitialised uses.
>
> (2) Pass the locations of all uninitialised uses as additional arguments.
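>
> For the testcase above, (2) might look something like (sketch only - how
> the locations would actually be encoded is an open question):
>
>   v = .DEFERRED_INIT (v, 0, <location of "return v">)
>
> with the uninit pass warning at the locations passed in rather than at
> the location of the call itself.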
If we add .DEFERRED_INIT during the gimplification phase, is the “uninitialized
uses” information available at that time?
Qing
>
> The uninit pass would then be picking the source location differently
> from normal, but I don't know what effect it would have on the quality
> of diagnostics. One obvious problem is that if there are multiple
> uninitialised uses, some of them might get optimised away later.
> On the other hand, using early source locations might give better
> results in some cases. I guess it will depend.
>
> Thanks,
> Richard