Hi,

I tend to believe that this is the SRA part of a fix for PR 50444 (and possibly of some strict-alignment SRA-related bugs too).  What the patch does is decrease the alignment of the type of the MEM_REF it builds whenever that seems necessary: when the alignment of the type as it arrived in the parameter is larger than the alignment of the base we are building the MEM_REF on top of (or of the MEM_REF we are replacing), or when it is larger than the biggest power of two that divides the offset we are adding, if that offset is non-zero.
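To make that rule concrete, here is a minimal standalone sketch (not GCC code; clamp_align is a made-up helper, and __builtin_ctzl/labs stand in for GCC's ctz_hwi and absu_hwi):

#include <stdio.h>
#include <stdlib.h>

#define BITS_PER_UNIT 8

/* Return BASE_ALIGN (in bits) capped by the largest power of two,
   in bits, that divides the byte offset FINAL_OFFSET.  */
static unsigned int
clamp_align (unsigned int base_align, long final_offset)
{
  if (final_offset != 0)
    {
      unsigned int off_align
        = (1u << __builtin_ctzl (labs (final_offset))) * BITS_PER_UNIT;
      if (off_align < base_align)
        base_align = off_align;
    }
  return base_align;
}

int
main (void)
{
  /* A 128-bit aligned base accessed at byte offset 12 can only be
     assumed 32-bit aligned, because 4 is the largest power of two
     dividing 12.  */
  printf ("%u\n", clamp_align (128, 12));   /* prints 32 */
  printf ("%u\n", clamp_align (128, 16));   /* prints 128 */
  return 0;
}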
This patch alone fixes the testcase on i686; on x86_64 the testcase stops segfaulting, but the final comparison fails and the testcase returns a non-zero exit code.  However, I believe that is most probably an expansion problem rather than SRA's.  Perhaps it should also be noted that if I revert SRA's build_ref_for_model to what it was in revision 164280, so that SRA creates only a MEM_REF rather than a COMPONENT_REF on top of a MEM_REF as it does now, the assignment expansion takes the "optab_handler (movmisalign_optab, mode)" route and the testcase is compiled correctly, with both the segfault gone and the comparison yielding the correct result.  I plan to have a look at what goes on in expansion with the current build_ref_for_model shortly, but thought I'd post this for review and comments before that.

A similar patch has passed bootstrap and testsuite on x86_64-linux; this exact one is undergoing the same as I write this.  Is this the right direction, and should I commit the patch (to trunk and the 4.6 branch) if it passes?

Thanks,

Martin


2012-01-12  Martin Jambor  <mjam...@suse.cz>

	PR tree-optimization/50444
	* tree-sra.c (build_ref_for_offset): Decrease the alignment of the
	type of the created MEM_REF if necessary.

Index: src/gcc/tree-sra.c
===================================================================
--- src.orig/gcc/tree-sra.c
+++ src/gcc/tree-sra.c
@@ -1458,9 +1458,10 @@ build_ref_for_offset (location_t loc, tr
 		      tree exp_type, gimple_stmt_iterator *gsi,
 		      bool insert_after)
 {
+  HOST_WIDE_INT base_offset, final_offset;
+  unsigned int align;
   tree prev_base = base;
   tree off;
-  HOST_WIDE_INT base_offset;
 
   gcc_checking_assert (offset % BITS_PER_UNIT == 0);
 
@@ -1488,24 +1489,34 @@ build_ref_for_offset (location_t loc, tr
 	  gsi_insert_before (gsi, stmt, GSI_SAME_STMT);
 	  update_stmt (stmt);
 	}
 
+      final_offset = offset / BITS_PER_UNIT;
       off = build_int_cst (reference_alias_ptr_type (prev_base),
-			   offset / BITS_PER_UNIT);
+			   final_offset);
       base = tmp;
+      align = get_object_alignment (prev_base);
     }
   else if (TREE_CODE (base) == MEM_REF)
     {
-      off = build_int_cst (TREE_TYPE (TREE_OPERAND (base, 1)),
-			   base_offset + offset / BITS_PER_UNIT);
+      final_offset = base_offset + offset / BITS_PER_UNIT;
+      off = build_int_cst (TREE_TYPE (TREE_OPERAND (base, 1)), final_offset);
       off = int_const_binop (PLUS_EXPR, TREE_OPERAND (base, 1), off);
+      align = get_object_or_type_alignment (base);
       base = unshare_expr (TREE_OPERAND (base, 0));
     }
   else
     {
-      off = build_int_cst (reference_alias_ptr_type (base),
-			   base_offset + offset / BITS_PER_UNIT);
+      final_offset = base_offset + offset / BITS_PER_UNIT;
+      off = build_int_cst (reference_alias_ptr_type (base), final_offset);
+      align = get_object_alignment (base);
       base = build_fold_addr_expr (unshare_expr (base));
     }
 
+  if (final_offset != 0)
+    align = MIN (align,
+		 (1u << ctz_hwi (absu_hwi (final_offset))) * BITS_PER_UNIT);
+  if (align < TYPE_ALIGN (exp_type))
+    exp_type = build_aligned_type (exp_type, align);
+
   return fold_build2_loc (loc, MEM_REF, exp_type, base, off);
 }
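For the record, here is a hypothetical illustration (not the actual PR 50444 testcase) of the class of problem the alignment clamp guards against: an aggregate containing a 16-byte vector that sits at an under-aligned address.  If SRA scalarizes the copy below, the MEM_REF it builds for the source must not carry the vector type's full 128-bit alignment, or expansion may emit an aligned vector load that faults at run time:

typedef int v4si __attribute__ ((vector_size (16)));

struct s
{
  v4si v;
};

/* Because of the packed attribute, INNER can start at any byte.  */
struct u
{
  char c;
  struct s inner;
} __attribute__ ((packed));

v4si
get_vec (struct u *p)
{
  struct s tmp = p->inner;	/* A copy SRA may scalarize.  */
  return tmp.v;
}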