Labelling of regressions in Bugzilla

2021-12-15 Thread Jonathan Wakely via Gcc
On IRC we've been discussing some changes to Bugzilla that would give
a bit more structure to how we label and process regressions.

Currently we add something like "[9/10/11/12 Regression]" to the start
of the summary, and then edit that when it's fixed on a branch, when
forking a new release branch from trunk, and when the oldest branch is
closed and becomes unmaintained. Finding active regressions means a
free-text search in the summary field.

On IRC we discussed adding a new custom field to record which branches
a regression affects. This could be like the
known-to-work/known-to-fail fields, so only allow predefined values,
and allow searching by "all of" and by "any of". The possible values
for the field would only be the major releases, not all the minor
releases and snapshots and vendor branches that are in the
known-to-work field. So just 4.9, 5, 6, 7 etc. not 4.9.4, 5.0, 5.1.0,
5.1.1 etc.

When a new branch is forked from trunk before a release, all bugs that
have "trunk" in the regression field would automatically get "12"
added (or if we already used "12" instead of "trunk" they'd get "13"
added, either way would work).

Unlike the current system, we wouldn't need to remove closed branches
from the regressions field. We do that today so the Summary field
doesn't get cluttered with old branch info, but if it's in a separate
field keeping the old data present is valuable. We would only remove a
branch from that field when the regression is fixed on the branch. We
would still be able to search for regressions in active branches
easily, but it would also be possible to see at a glance that a given
regression was present on the gcc-8 branch and never fixed there. This
would also help vendors who maintain older branches, as the
information that a regression affected an old branch would not be
wiped out of the summary when the branch reaches EOL.

Jakub also suggested it would be nice to be able to enter a revision
into a "regressed at" field and have it auto-populate the regressions
list with all branches that the commit is present in. (Ideally any of
SVN r numbers, or git revisions, or gcc-descr rNN-NNN strings
could be used there). That would be useful when we bisect the
regression and find where it started.
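
As a rough sketch of the branch-containment part (assuming a local clone of
the GCC git repository, and a plain commit hash as input; an SVN r number
or gcc-descr rNN-NNN string would first have to be resolved to a hash):

import re
import subprocess

def branches_containing(rev, repo="."):
    # List the remote branches that already contain the given revision and
    # map them to the values the regression field would use.
    out = subprocess.run(
        ["git", "-C", repo, "branch", "-r", "--contains", rev],
        capture_output=True, text=True, check=True).stdout
    releases = set()
    for line in out.splitlines():
        name = line.strip()
        if name.endswith("/master"):
            releases.add("trunk")
        m = re.search(r"releases/gcc-(\d+)$", name)
        if m:
            releases.add(m.group(1))
    return sorted(releases)

# branches_containing() accepts any revision git understands, e.g. a full
# or abbreviated commit hash.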

Iain pointed out a drawback of not having the regression info in the
Summary. Currently it does draw your attention when looking at the
results of a bugzilla search. Andrew noted that bug aliases are
automatically added to the summary, e.g. https://gcc.gnu.org/PR94404
shows its alias "(c++core-issues)". Maybe we could do that for
regressions (for the active branches only, so the result would be
similar to what we have today).

Thoughts? Objections? Better ideas?


Re: Labelling of regressions in Bugzilla

2021-12-15 Thread Richard Earnshaw via Gcc




On 15/12/2021 11:39, Jonathan Wakely via Gcc wrote:

On IRC we've been discussing some changes to Bugzilla that would give
a bit more structure to how we label and process regressions.

Currently we add something like "[9/10/11/12 Regression]" to the start
of the summary, and then edit that when it's fixed on a branch, when
forking a new release branch from trunk, and when the oldest branch is
closed and becomes unmaintained. Finding active regressions means a
free-text search in the summary field.

On IRC we discussed adding a new custom field to record which branches
a regression affects. This could be like the
known-to-work/known-to-fail fields, so only allow predefined values,
and allow searching by "all of" and by "any of". The possible values
for the field would only be the major releases, not all the minor
releases and snapshots and vendor branches that are in the
known-to-work field. So just 4.9, 5, 6, 7 etc. not 4.9.4, 5.0, 5.1.0,
5.1.1 etc.

When a new branch is forked from trunk before a release all bugs that
have "trunk" in the regression field would automatically get "12"
added (or if we already used "12" instead of "trunk" they'd get "13"
added, either way would work).

Unlike the current system, we wouldn't need to remove closed branches
from the regressions field. We do that today so the Summary field
doesn't get cluttered with old branch info, but if it's in a separate
field keeping the old data present is valuable. We would only remove a
branch from that field when the regression is fixed on the branch. We
would still be able to search for regressions in active branches
easily, but it would also be possible to see at a glance that a given
regression was present on the gcc-8 branch and never fixed there. This
would also help vendors who maintain older branches, as the
information that a regression affected an old branch would not be
wiped out of the summary when the branch reaches EOL.

Jakub also suggested it would be nice to be able to enter a revision
into a "regressed at" field and have it auto-populate the regressions
list with all branches that the commit is present in. (Ideally any of
SVN r numbers, or git revisions, or gcc-descr rNN-NNN strings
could be used there). That would be useful when we bisect the
regression and find where it started.

Iain pointed out a drawback of not having the regression info in the
Summary. Currently it does draw your attention when looking at the
results of a bugzilla search. Andrew noted that bug aliases are
automatically added to the summary, e.g. https://gcc.gnu.org/PR94404
shows its alias "(c++core-issues)". Maybe we could do that for
regressions (for the active branches only, so the result would be
similar to what we have today).

Thoughts? Objections? Better ideas?



My immediate thought (since I tend to dislike deleting history) is why 
not have two fields?  One listing all the release branches where this 
has occurred and another for where it has now been fixed.  That way you 
can see quickly whether the regression has ever affected some versions 
of a release.  Something we lack today with the single "fixed in" field is 
the ability to track exactly which dot releases of each branch contained 
the fix for a regression.


Other than that, I have no other concerns at the moment.


Re: gccadmin hooks: make it a public git repo

2021-12-15 Thread Martin Liška

On 12/14/21 17:14, Jonathan Wakely wrote:

git clone gcc.gnu.org:/home/gccadmin/hooks-bin


Works for me as well, thanks!

So as Joseph mentioned, we are really prepared for the transition,
as there's only the email_to.py script left to be ported.

Cheers,
Martin


Re: Labelling of regressions in Bugzilla

2021-12-15 Thread Tobias Burnus

On 15.12.21 12:39, Jonathan Wakely via Gcc wrote:


Iain pointed out a drawback of not having the regression info in the
Summary. Currently it does draw your attention when looking at the
results of a bugzilla search. Andrew noted that bug aliases are
automatically added to the summary, e.g. https://gcc.gnu.org/PR94404
shows its alias "(c++core-issues)".

Wouldn't it be easier to click on the "[Change Columns]" button at the
bottom of the search result page and add the new field to the "Selected
Columns"? The known-to-(work/fail) columns are available, i.e. this
feature also works with custom fields.

Tobias



Re: Labelling of regressions in Bugzilla

2021-12-15 Thread Jonathan Wakely via Gcc
On Wed, 15 Dec 2021 at 12:15, Richard Earnshaw wrote:
>
>
>
> On 15/12/2021 11:39, Jonathan Wakely via Gcc wrote:
> > On IRC we've been discussing some changes to Bugzilla that would give
> > a bit more structure to how we label and process regressions.
> >
> > Currently we add something like "[9/10/11/12 Regression]" to the start
> > of the summary, and then edit that when it's fixed on a branch, when
> > forking a new release branch from trunk, and when the oldest branch is
> > closed and becomes unmaintained. Finding active regressions means a
> > free-text search in the summary field.
> >
> > On IRC we discussed adding a new custom field to record which branches
> > a regression affects. This could be like the
> > known-to-work/known-to-fail fields, so only allow predefined values,
> > and allow searching by "all of" and by "any of". The possible values
> > for the field would only be the major releases, not all the minor
> > releases and snapshots and vendor branches that are in the
> > known-to-work field. So just 4.9, 5, 6, 7 etc. not 4.9.4, 5.0, 5.1.0,
> > 5.1.1 etc.
> >
> > When a new branch is forked from trunk before a release all bugs that
> > have "trunk" in the regression field would automatically get "12"
> > added (or if we already used "12" instead of "trunk" they'd get "13"
> > added, either way would work).
> >
> > Unlike the current system, we wouldn't need to remove closed branches
> > from the regressions field. We do that today so the Summary field
> > doesn't get cluttered with old branch info, but if it's in a separate
> > field keeping the old data present is valuable. We would only remove a
> > branch from that field when the regression is fixed on the branch. We
> > would still be able to search for regressions in active branches
> > easily, but it would also be possible to see at a glance that a given
> > regression was present on the gcc-8 branch and never fixed there. This
> > would also help vendors who maintain older branches, as the
> > information that a regression affected an old branch would not be
> > wiped out of the summary when the branch reaches EOL.
> >
> > Jakub also suggested it would be nice to be able to enter a revision
> > into a "regressed at" field and have it auto-populate the regressions
> > list with all branches that the commit is present in. (Ideally any of
> > SVN r numbers, or git revisions, or gcc-descr rNN-NNN strings
> > could be used there). That would be useful when we bisect the
> > regression and find where it started.
> >
> > Iain pointed out a drawback of not having the regression info in the
> > Summary. Currently it does draw your attention when looking at the
> > results of a bugzilla search. Andrew noted that bug aliases are
> > automatically added to the summary, e.g. https://gcc.gnu.org/PR94404
> > shows its alias "(c++core-issues)". Maybe we could do that for
> > regressions (for the active branches only, so the result would be
> > similar to what we have today).
> >
> > Thoughts? Objections? Better ideas?
> >
>
> My immediate thought (since I tend to dislike deleting history) is why
> not have two fields?  One listing all the release branches where this
> has occurred and another for where it has now been fixed.  That way you
> can see quickly whether the regression has ever affected some versions
> of a release.

The Known To Fail field could be used for that, if we were more
diligent about using it when fixes are backported to a branch.

> Something we lack today with the single fixed in field is
> the ability to track exactly which dot releases of each branch contained
> the fix for a regression.

Yes, I use Target Milestone for the oldest dot release and then add
something like "Fixed for 9.4, 10.3, 11.2 and trunk" in the comments.
That could be more structured.

>
> Other than that, I have no other concerns at the moment.

Thanks for the feedback!

I think my concern with the idea is that when we make this info more
structured, there's a danger of giving ourselves more work to do
pushing buttons just for the sake of it. I think it's OK to make it
optional to use these fields, so that those who want to record the
rich info (e.g. me, maybe you, and Andrew Pinski when he goes on a
bugzilla tidying mission :-) can do so. The danger then is that using
them for search isn't reliable because the fields aren't used
consistently.

I think I'd still rather have the option to use them. At least for my
little corner of the project, I would aim to fill them in
consistently.


Re: Labelling of regressions in Bugzilla

2021-12-15 Thread Jonathan Wakely via Gcc
On Wed, 15 Dec 2021 at 12:22, Tobias Burnus wrote:
>
> On 15.12.21 12:39, Jonathan Wakely via Gcc wrote:
>
> > Iain pointed out a drawback of not having the regression info in the
> > Summary. Currently it does draw your attention when looking at the
> > results of a bugzilla search. Andrew noted that bug aliases are
> > automatically added to the summary, e.g. https://gcc.gnu.org/PR94404
> > shows its alias "(c++core-issues)".
> Wouldn't it be easier to click on the "[Change Columns]" button at the
> bottom of the search result page and add the new field to the "Selected
> Columns"? The known-to-(work/fail) columns are available, i.e. this
> feature also works with custom fields.

Yes, I'd be fine with that solution (thanks for the reminder; I
should have mentioned that option in my initial mail).

If you reorder the "known to fail" column so it comes right before the
Summary column you would get a clear list of regressions shown before
the rest of the summary (and nothing in that column for
non-regressions).

A possible downside is that it would show all the branches the regression
was on, including closed ones. Again, I'd be fine with that, but it's
a change from the info visible at a glance in the Summary today.


Re: Labelling of regressions in Bugzilla

2021-12-15 Thread Iain Sandoe via Gcc



> On 15 Dec 2021, at 12:29, Jonathan Wakely via Gcc  wrote:
> 
> On Wed, 15 Dec 2021 at 12:22, Tobias Burnus wrote:
>> 
>> On 15.12.21 12:39, Jonathan Wakely via Gcc wrote:
>> 
>>> Iain pointed out a drawback of not having the regression info in the
>>> Summary. Currently it does draw your attention when looking at the
>>> results of a bugzilla search. Andrew noted that bug aliases are
>>> automatically added to the summary, e.g. https://gcc.gnu.org/PR94404
>>> shows its alias "(c++core-issues)".
>> Wouldn't it be easier to click on the "[Change Columns]" button at the
>> bottom of the search result page and add the new field to the "Selected
>> Columns"? The known-to-(work/fail) columns are available, i.e. this
>> feature also works with custom fields.
> 
> Yes, I'd be fine with that solution (thanks, for the reminder, I
> should have mentioned that option in my initial mail).
> 
> If you reorder the "known to fail" column so it comes right before the
> Summary column you would get a clear list of regressions shown before
> the rest of the summary (and nothing in that column for
> non-regressions).
> 
> A possible downside is that would show all the branches the regression
> was on, including closed ones. Again, I'd be fine with that, but it's
> a change from the info visible at a glance in the Summary today.

I just tried this with my local search and it line-wraps the list, so the
number of branches reported does not matter too much.

However "known to fail” is not currently “regressed for” they have distinct
meanings (both of which are useful IMO).

Iain



Re: Labelling of regressions in Bugzilla

2021-12-15 Thread Jonathan Wakely via Gcc
On Wed, 15 Dec 2021 at 12:43, Iain Sandoe wrote:
>
>
>
> > On 15 Dec 2021, at 12:29, Jonathan Wakely via Gcc  wrote:
> >
> > On Wed, 15 Dec 2021 at 12:22, Tobias Burnus wrote:
> >>
> >> On 15.12.21 12:39, Jonathan Wakely via Gcc wrote:
> >>
> >>> Iain pointed out a drawback of not having the regression info in the
> >>> Summary. Currently it does draw your attention when looking at the
> >>> results of a bugzilla search. Andrew noted that bug aliases are
> >>> automatically added to the summary, e.g. https://gcc.gnu.org/PR94404
> >>> shows its alias "(c++core-issues)".
> >> Wouldn't it be easier to click on the "[Change Columns]" button at the
> >> bottom of the search result page and add the new field to the "Selected
> >> Columns"? The known-to-(work/fail) columns are available, i.e. this
> >> feature also works with custom fields.
> >
> > Yes, I'd be fine with that solution (thanks, for the reminder, I
> > should have mentioned that option in my initial mail).
> >
> > If you reorder the "known to fail" column so it comes right before the
> > Summary column you would get a clear list of regressions shown before
> > the rest of the summary (and nothing in that column for
> > non-regressions).
> >
> > A possible downside is that would show all the branches the regression
> > was on, including closed ones. Again, I'd be fine with that, but it's
> > a change from the info visible at a glance in the Summary today.
>
> I just tried this with my local search and it line-wraps the list so that it
> does not matter too much about the number of branches reported.
>
> However "known to fail” is not currently “regressed for” they have distinct
> meanings (both of which are useful IMO).

Yes I didn't mean to imply that, sorry. I was just using the
known-to-fail field to test it, because we don't have the regressions
one to test with, and I sent a muddled reply.

What I should have said is that if you reorder the hypothetical new
field you would get a clear list of regressions.


[RISCV][CANCELED] RISC-V GNU Toolchain Biweekly Sync-up call (Dec 16, 2021)

2021-12-15 Thread 吴伟
Hi all,

FYI The next RISC-V GNU Toolchain Biweekly Sync is canceled because
there were not enough new topics in the past weeks. The next meeting
is scheduled for Jan 13, 2022. We will also skip the meeting on Dec 30.

-- 
Best wishes,
Wei Wu (吴伟)



Re: getting branch conditions using ranger

2021-12-15 Thread Andrew MacLeod via Gcc

On 12/14/21 18:55, Martin Sebor wrote:

Andrew, to improve the context of the late warnings I'm trying
to see how to get the execution path(s) leading from function
entry up to a statement.  For example, for the code below I'd
like to "collect" and show the three conditionals in the context
of the warning:

extern char a[9];

void f (int m, int n, void *s)
{
  if (m < 3) m = 3;
  if (n < 4) n = 4;
  if (s != 0)
    {
      char *d = a + 3;
      __builtin_memcpy (d, s, m + n);
    }
}

At a minimum, I'd like to print a note after the warning:

  warning: ‘__builtin_memcpy’ writing between 7 and 2147483647 bytes 
into a region of size 6 overflows the destination [-Wstringop-overflow=]


  note: when 'm >= 3 && n >= 4 && s != 0'

(The final version would point to each conditional in the source
like the static analyzer does.)



Finding the exact location of the conditional may not be possible.. the 
condition could be the aggregation of multiple statements.  The ranges 
of m and n are set via 2 statements, the condition and the assignment, 
and then collected by a 3rd, the PHI node.


The "note:" itself could simply be created from the ranges..  the range 
at the memcpy of m,  n and s are:


m_3  : int [3, +INF]
n_4  : int [4, +INF]
s_10(D) : void * [1B, +INF]

which could be easily translated to 'm >= 3 && n >= 4 && s != 0', 
assuming you can figure out the original source name.. In this case it 
happens to be the VAR on the ssa-name, but you can't always count on 
that.  I'd also point out there is no statement which has m >= 3 or n >= 4, 
but rather a series of statements that produce that result.
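
Purely as an illustration of that translation (outside GCC), the printed
ranges above map onto the proposed note text like this; in the real
warning code the ranges would come from ranger and the names would have to
be recovered from the SSA names:

def range_to_condition(name, lower, upper=None, is_pointer=False):
    # A non-zero lower bound on a pointer (e.g. [1B, +INF]) means non-null.
    if is_pointer:
        return f"{name} != 0" if lower else None
    parts = []
    if lower is not None:
        parts.append(f"{name} >= {lower}")
    if upper is not None:
        parts.append(f"{name} <= {upper}")
    return " && ".join(parts) or None

ranges = [("m", 3, None, False),   # m_3     : int [3, +INF]
          ("n", 4, None, False),   # n_4     : int [4, +INF]
          ("s", 1, None, True)]    # s_10(D) : void * [1B, +INF]
conds = [c for c in (range_to_condition(*r) for r in ranges) if c]
print("note: when '" + " && ".join(conds) + "'")
# prints: note: when 'm >= 3 && n >= 4 && s != 0'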





For conditions that involve ranges used in the statements (i.e.,
the first two conditions in the source above), I wonder if rather
than traversing the CFG in a separate step, I might be able to
use Ranger to collect the conditions at the time it populates its
cache (i.e., when I call it to get range info for each statement).
I imagine I would need to derive a new class from gimple_ranger
and override some virtual member functions.  (And maybe also do
the same for ranger_cache?)

Does this sound like something I should be able to do within
the framework?  If yes, do you have any tips or suggestions for
where/how to start?

That is not really going to work. The cache is agnostic as to why there 
are ranges or what caused them.  It is primarily a propagation engine 
which utilizes the edge ranges provided by GORI and range-ops.  Ranger 
itself is the controller for this property propagation.


GORI does not treat any statement as special.  It is an equation solver 
which uses range-ops to algebraically solve for an unknown value when 
the other values are known.  The "if" condition at the exit of a basic 
block is considered no different than an assignment statement where the 
LHS is either [0,0] or [1,1] depending on the edge you are looking at. 
   So to use your example above for just m:


    <bb 2> :
    if (m_6(D) <= 2)
      goto <bb 3>; [INV]   //  [1,1] = m_6 <= 2
    else
      goto <bb 4>; [INV]   //  [0,0] = m_6 <= 2

2->3  (T) m_6(D) :  int [-INF, 2]
2->4  (F) m_6(D) :  int [3, +INF]

=== BB 3 
    <bb 3> :

=== BB 4 
Imports: n_8(D)
Exports: n_8(D)
n_8(D)  int VARYING
    <bb 4> :
    # m_3 = PHI <m_6(D)(2), 3(3)>   //  m_3 = union (m_6 (2->4), [3,3]) = union ([3, +INF], [3,3]) = [3, +INF]


m_3 : int [3, +INF]


We generate 3 ranges for m: m_6 on 1) edge 2->3 and 2) edge 2->4, and 
3) combine them with the constant 3 to produce m_3 at the PHI.


This all happens at a fairly low level, and you won't be able to overload 
anything to be able to log it in a sensible fashion. And I'm not sure 
how you would go about logging the arbitrarily long series of actions 
that result in the range you are looking at, and then utilize it.  GORI 
could tell you that m_6 and m_3 are used in the range calculation, but 
then the IL tells you that too.




Thanks
Martin

PS I'm assuming -O0 for the above test case where the m + n
expression is a sum of two PHIs.  With -O1 and higher some of
the integer conditionals end up transformed into MAX_EXPRs so
it will likely need to be handled differently or I may not be
able to capture all the conditions reliably.  I don't know how
much that might compromise the result.


Within ranger it is irrelevant whether it's from a MAX or a condition.

   <bb 2> [local count: 574129753]:
  _5 = MAX_EXPR <n_8(D), 4>;
  _7 = MAX_EXPR <m_6(D), 3>;
  _1 = _5 + _7;

_7 generates a range of [3, +INF] based on range-ops knowledge of how 
MAX works.


In both cases, for m_3 above and _7, the range of [3, +INF] is generated 
at the definition point... One is a PHI which includes some CFG 
propagation to retrieve the argument values and perform the calculation, 
and _7 is much simpler: it's just the global characteristic of 
the statement calculated by range-ops where the LHS is the unknown in 
the equation.  Ultimately, both ranges are created at the definition 
point of the ssa_name _7 or m_3.


The ques

Re: getting branch conditions using ranger

2021-12-15 Thread Martin Sebor via Gcc

On 12/15/21 12:23 PM, Andrew MacLeod wrote:

On 12/14/21 18:55, Martin Sebor wrote:


Thanks a lot for the feedback!  I'll need some time to fully
digest it.  Just a few clarifying comments to explain what
I'm after.


Andrew, to improve the context of the late warnings I'm trying
to see how to get the execution path(s) leading from function
entry up to a statement.  For example, for the code below I'd
like to "collect" and show the three conditionals in the context
of the warning:

extern char a[9];

void f (int m, int n, void *s)
{
  if (m < 3) m = 3;
  if (n < 4) n = 4;
  if (s != 0)
    {
      char *d = a + 3;
      __builtin_memcpy (d, s, m + n);
    }
}

At a minimum, I'd like to print a note after the warning:

  warning: ‘__builtin_memcpy’ writing between 7 and 2147483647 bytes 
into a region of size 6 overflows the destination [-Wstringop-overflow=]


  note: when 'm >= 3 && n >= 4 && s != 0'

(The final version would point to each conditional in the source
like the static analyzer does.)



Finding the exact location of the conditional may not be possible.. the 
condition could be the aggregation of multiple statements.  The ranges 
of m and n are set via 2 statements, the condition and the assignment, 
and then collected by a 3rd, the PHI node.


The "note:" itself could simply be created from the ranges..  the range 
at the memcpy of m,  n and s are:


m_3  : int [3, +INF]
n_4  : int [4, +INF]
s_10(D) : void * [1B, +INF]

which could be easily translated to 'm >= 3 && n >= 4 && s != 0', 
assuming you can figure out the original source name.. In this case 
happens to be the VAR on the ssa-name, but you can't always count on 
that.  I'd also point out there is no statement which has m >=3 or n>= 
4, but rather a series of statements that produce that result.


Understood.






For conditions that involve ranges used in the statements (i.e.,
the first two conditions in the source above), I wonder if rather
than traversing the CFG in a separate step, I might be able to
use Ranger to collect the conditions at the time it populates its
cache (i.e., when I call it to get range info for each statement).
I imagine I would need to derive a new class from gimple_ranger
and override some virtual member functions.  (And maybe also do
the same for ranger_cache?)

Does this sound like something I should be able to do within
the framework?  If yes, do you have any tips or suggestions for
where/how to start?

That is not really going to work. The cache is agnostic as to why there 
are ranges or what caused them.  It is primarily a propagation engine 
which utilizes the edge ranges provided by GORI and range-ops.  Ranger 
itself is the controller for this property propagation.


GORI does not treat any statement as special.  It is an equation solver 
which uses range-ops to algebraically solve for an unknown value when 
the other values are known.  The "if" condition at the exit of a basic 
block is considered no different than an assignment statement where the 
LHS is either [0,0] or [1,1] depending on the edge you are looking at. 
    So to use your example above for just m:


    <bb 2> :
    if (m_6(D) <= 2)
      goto <bb 3>; [INV]   //  [1,1] = m_6 <= 2
    else
      goto <bb 4>; [INV]   //  [0,0] = m_6 <= 2

2->3  (T) m_6(D) :  int [-INF, 2]
2->4  (F) m_6(D) :  int [3, +INF]

=== BB 3 
    <bb 3> :

=== BB 4 
Imports: n_8(D)
Exports: n_8(D)
n_8(D)  int VARYING
    <bb 4> :
    # m_3 = PHI <m_6(D)(2), 3(3)>   //  m_3 = union (m_6 (2->4), [3,3]) = union ([3, +INF], [3,3]) = [3, +INF]


m_3 : int [3, +INF]


We generate 3 ranges for m..  m_6 on the 1)edge 2->3, 2)edge 2->4 and 
3)combine them with the constant 3 to produce m_3 at the PHI.


This all happens at a fairly low level, and you wont be able to overload 
anything to be able to log it in a sensible fashion. And I'm not sure 
how you would go about logging the arbitrarily long series of actions 
that result in the range you are looking at, and then utilize it.  GORI 
could tell you that m_6 and m_3 are used in the range calculation, but 
then the IL tells you that too.


Yes, it is in the IL, and I need to get it from there.  Ranger
traverses the IL to build up the cache and I thought if I could
hook into it I could, in addition, collect the GIMPLE_CONDs that
Ranger relies on to determine the bounds.  (For each SSA_NAME
I'm thinking I'd collect an array of either the GIMPLE_CONDs,
or perhaps more usefully the edges between the CONDs and
the branches that contribute to the lower (and maybe also upper)
bounds.)

The only other way to do it that I can think of is to recreate
the same process Ranger uses, except outside of it (including
all the computation and range propagation).


Thanks
Martin

PS I'm assuming -O0 for the above test case where the m + n
expression is a sum of two PHIs.  With -O1 and higher some of
the integer conditionals end up transformed into MAX_EXPRs so
it will likely need to be handled differently or I ma

Updating phi nodes on deleting gimple statement

2021-12-15 Thread Shubham Narlawar via Gcc
Hello,


I have a PHI node that defines a variable that is used in 1 statement. I
then delete the statement. I think I need to update the PHI node to no
longer reference that variable. I looked through some code and I don't see
a way to just remove an element from a PHI node and I see in the file
omp-expand.c what looks like creating a new PHI node and copying over all
of the elements.

My question is: how do I update a PHI node per the above? Is it necessary?
If I don't do anything and leave the PHI node as is, it appears to work,
though I don't know the implications of this. If I do need to update it, do
I recreate it and delete the old one?

Thanks and Regards,
Shubham