http://gcc.gnu.org/bugzilla/show_bug.cgi?id=49330

--- Comment #4 from rguenther at suse dot de <rguenther at suse dot de> 
2011-06-09 10:08:55 UTC ---
On Thu, 9 Jun 2011, jakub at gcc dot gnu.org wrote:

> http://gcc.gnu.org/bugzilla/show_bug.cgi?id=49330
> 
> Jakub Jelinek <jakub at gcc dot gnu.org> changed:
> 
>            What    |Removed                     |Added
> ----------------------------------------------------------------------------
>                  CC|                            |jakub at gcc dot gnu.org
> 
> --- Comment #3 from Jakub Jelinek <jakub at gcc dot gnu.org> 2011-06-09 
> 10:04:28 UTC ---
> Ugh.
> base_alias_check for
> (symbol_ref:DI ("x") <var_decl 0x7ffff1a32000 x>)
> and
> (plus:DI (reg:DI 62 [ d.1 ])
>     (symbol_ref:DI ("y") <var_decl 0x7ffff1a320a0 y>))
> returns 0.  I'm afraid dropping the base_alias_check call would significantly
> penalize generated code; after all, we still sometimes have MEM accesses
> without a MEM_EXPR.  Perhaps we could rely on REG_POINTER bits, extend them
> to SYMBOL_REF too, and only return NULL from find_base_term if a SYMBOL_REF
> doesn't have the SYMBOL_REF_POINTER bit set.  During CSE etc., when replacing
> a pseudo with its definition, REG_POINTER/SYMBOL_REF_POINTER would be kept
> only if it is also set on the pseudo being optimized away.  Thus, any casts
> from pointers to correspondingly sized integers and back would be visible in
> the RTL.
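
For context, a minimal C sketch of the kind of pattern behind that RTL (a
hedged reconstruction, not necessarily the actual PR49330 testcase; foo and
the exact arithmetic are invented here, while x, y and d mirror the names in
the dump): the address of x is laundered through integer arithmetic relative
to y, so an access whose RTL base looks like "y plus d" really reaches x, yet
base_alias_check treats the two bases as distinct:

#include <stdint.h>
#include <stdio.h>

int x, y;

int
foo (void)
{
  /* Launder &x through integer arithmetic relative to &y; the RTL for
     *p then looks like (plus (reg d) (symbol_ref y)), hiding the real
     base.  */
  uintptr_t d = (uintptr_t) &x - (uintptr_t) &y;
  int *p = (int *) ((uintptr_t) &y + d);  /* numerically equal to &x */

  x = 1;
  *p = 2;      /* base_alias_check claims bases "x" and "y" can't alias */
  return x;    /* with the bogus disambiguation this may yield 1, not 2 */
}

int
main (void)
{
  printf ("%d\n", foo ());
  return 0;
}

With the wrong disambiguation, the load of x in the return can be forwarded
from the earlier "x = 1" store across the store through p.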

Maybe restrict the base_alias_check to var-decls that do not have
their address taken?  Points-to analysis should cover them.

Richard.
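
To illustrate the intent of the REG_POINTER/SYMBOL_REF_POINTER scheme quoted
above, here is a toy model in plain C (not GCC internals; struct base and
may_alias are invented for this sketch): distinct symbolic bases may be
disambiguated only while the pointer bit has survived every copy, and a round
trip through integers clears the bit, forcing the conservative "may alias"
answer:

/* A toy model of the proposed rule, not GCC code: bases are usable
   for disambiguation only while their "pointer" bit is intact.  */
#include <stdbool.h>
#include <stdio.h>
#include <string.h>

struct base
{
  const char *sym;    /* models the SYMBOL_REF's decl */
  bool pointer_bit;   /* models REG_POINTER/SYMBOL_REF_POINTER */
};

/* Models base_alias_check under the proposal: if either base went
   through an integer (pointer bit cleared), find_base_term would
   return NULL, so we must conservatively answer "may alias".  */
static bool
may_alias (struct base a, struct base b)
{
  if (!a.pointer_bit || !b.pointer_bit)
    return true;
  return strcmp (a.sym, b.sym) == 0;  /* distinct decls never alias */
}

int
main (void)
{
  struct base x = { "x", true };
  struct base y_laundered = { "y", false };  /* cast to int and back */

  /* Today's check would wrongly answer 0; the proposal answers 1.  */
  printf ("%d\n", may_alias (x, y_laundered));
  return 0;
}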
