http://gcc.gnu.org/bugzilla/show_bug.cgi?id=55139



--- Comment #1 from Andi Kleen <andi-gcc at firstfloor dot org> 2012-11-07 04:03:53 UTC ---

This is an interesting one. This is the gcc code:



enum memmodel
{
  MEMMODEL_RELAXED = 0,
  MEMMODEL_CONSUME = 1,
  MEMMODEL_ACQUIRE = 2,
  MEMMODEL_RELEASE = 3,
  MEMMODEL_ACQ_REL = 4,
  MEMMODEL_SEQ_CST = 5,
  MEMMODEL_LAST = 6
};

#define MEMMODEL_MASK ((1<<16)-1)

  enum memmodel model;

  model = get_memmodel (CALL_EXPR_ARG (exp, 2));
  if ((model & MEMMODEL_MASK) != MEMMODEL_RELAXED
      && (model & MEMMODEL_MASK) != MEMMODEL_SEQ_CST
      && (model & MEMMODEL_MASK) != MEMMODEL_RELEASE)
    {
      error ("invalid memory model for %<__atomic_store%>");
      return NULL_RTX;
    }



HLE_STORE is 1 << 16, so it lies outside the range of the enumerators; MEMMODEL_MASK is 0xffff, so the & is meant to strip that bit before the comparisons.



But looking at the assembler, we see that the & MEMMODEL_MASK gets optimized away; the compiler just generates a direct sequence of 32-bit cmps against the unmasked value.



This makes all of the != comparisons come out true, so the error triggers even though the masked checks should have accepted the model.



I presume the optimizer assumes the value cannot lie outside the enum's range.
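
For reference, here is a minimal standalone sketch of the pattern outside of gcc, in the same style as the snippet above. The enum and MEMMODEL_MASK are copied from that code; HLE_STORE and model_ok are names I made up for illustration, not the real gcc internals, and whether the mask actually gets folded away will depend on compiler version and flags.

/* Sketch only: HLE_STORE and model_ok are illustrative names.  */
enum memmodel
{
  MEMMODEL_RELAXED = 0,
  MEMMODEL_CONSUME = 1,
  MEMMODEL_ACQUIRE = 2,
  MEMMODEL_RELEASE = 3,
  MEMMODEL_ACQ_REL = 4,
  MEMMODEL_SEQ_CST = 5,
  MEMMODEL_LAST = 6
};

#define MEMMODEL_MASK ((1 << 16) - 1)
#define HLE_STORE (1 << 16)   /* a bit outside the enumerator values */

/* Returns 1 when the masked model is one of the allowed ones.  If the
   optimizer assumes a value of type enum memmodel can only hold the
   enumerator values, it may drop the & MEMMODEL_MASK and compare the
   raw value instead.  */
int
model_ok (enum memmodel model)
{
  return (model & MEMMODEL_MASK) == MEMMODEL_RELAXED
         || (model & MEMMODEL_MASK) == MEMMODEL_SEQ_CST
         || (model & MEMMODEL_MASK) == MEMMODEL_RELEASE;
}

/* Expected: model_ok ((enum memmodel) (MEMMODEL_RELEASE | HLE_STORE))
   returns 1, because the masked value is MEMMODEL_RELEASE.  */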



I tried to expand the enum by adding



  MEMMODEL_ARCH1 = 1 << 16,
  MEMMODEL_ARCH2 = 1 << 17,
  MEMMODEL_ARCH3 = 1 << 18,
  MEMMODEL_ARCH4 = 1 << 19



But it still doesn't work.



Questions:

- Is it legal for the optimizer to assume this?

- Why does extending the enum not help?



We could fix it by not using an enum here, of course, but I wonder if this is an underlying optimizer bug.
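
For illustration, the "don't use an enum here" variant could look roughly like this; the cast-and-mask through an unsigned int is my sketch of the idea, not a proposed patch:

  /* Keep the raw argument in an unsigned int, mask it there, and only
     then treat it as an enum memmodel, so the comparison never relies
     on the value range of the enum type.  */
  unsigned int raw = (unsigned int) get_memmodel (CALL_EXPR_ARG (exp, 2));
  enum memmodel model = (enum memmodel) (raw & MEMMODEL_MASK);

  if (model != MEMMODEL_RELAXED
      && model != MEMMODEL_SEQ_CST
      && model != MEMMODEL_RELEASE)
    {
      error ("invalid memory model for %<__atomic_store%>");
      return NULL_RTX;
    }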
