Re: [gimplefe] [gsoc16] Gimple Front End Project

2016-03-19 Thread Richard Biener
On Fri, Mar 18, 2016 at 6:55 AM, Prathamesh Kulkarni
 wrote:
> On 15 March 2016 at 20:46, Richard Biener  wrote:
>> On Mon, Mar 14, 2016 at 7:27 PM, Michael Matz  wrote:
>>> Hi,
>>>
>>> On Thu, 10 Mar 2016, Richard Biener wrote:
>>>
>>>> Then I'd like to be able to re-construct SSA without jumping through
>>>> hoops (usually you can get close but if you require copies propagated in
>>>> a special way you are basically lost for example).
>>>>
>>>> Thus my proposal to make the GSoC student attack the unit-testing
>>>> problem by doing modifications to the pass manager and "extending" an
>>>> existing frontend (C for simplicity).
>>>
>>> I think it's wrong to try to shoehorn the gimple FE into the C FE.  C is
>>> fundamentally different from gimple and you'd have to sprinkle
>>> gimple_dialect_p() all over the place, and maintaining that while
>>> developing future C improvements will turn out to be much work.  Some
>>> differences of C and gimple:
>>>
>>> * C has recursive expressions, gimple is n-op stmts, no expressions at all
>>> * C has type promotions, gimple is explicit
>>> * C has all other kinds of automatic conversion (e.g. pointer decay)
>>> * C has scopes, gimple doesn't (well, global and local only), i.e. symbol
>>>   lookup is much more complicated
>>> * C doesn't have exceptions
>>> * C doesn't have class types, gimple has
>>> * C doesn't have SSA (yes, I'm aware of your suggestions for that)
>>> * C doesn't have self-referential types
>>> * C FE generates GENERIC, not GIMPLE (so you'd need to go through the
>>>   gimplifier and again would feed gimple directly into the passes)
>>>
>>> I really don't think changing the C FE to accept gimple is a useful way
>>> forward.
>>
>> So I am most worried about replicating all the complexity of types and decl
>> parsing for the presumably nice and small function body parser.
> Um would it be a good idea if we separate "gimple" functions from
> regular C functions,
> say by annotating the function definition with "gimple" attribute ?

Yes, that was my idea.

> A "gimple" function should contain only gimple stmts and not C.
> eg:
> __attribute__((gimple))
> void foo(void)
> {
>   // local decls/initializers in C
>   // GIMPLE body
> }
> Or perhaps we could add a new keyword "gimple" telling C FE that this
> is a GIMPLE function.

Though instead of an attribute I would indeed use a new keyword (as you
can't really ignore the attribute and it should be an error with compilers
not knowing it).  Thus sth like

void foo (void)
__GIMPLE {
}

as it's also kind-of a "definition" specifier rather than a
declaration specifier.
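
Purely as an illustration (this syntax is invented here; nothing concrete
had been decided at this point in the thread), a __GIMPLE body might spell
out three-address statements and explicit control flow, e.g.:

```
void foo (void)
__GIMPLE {
  int a;
  int b;
  a_1 = 1;
  b_2 = a_1 + 2;
  if (b_2 > 3) goto bb_3; else goto bb_4;
bb_3:
  b_3 = 0;
bb_4:
  return;
}
```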

>
> My intention is that we could reuse C FE for parsing types and decls
> (which I suppose is the primary
> motivation behind reusing C FE) and avoid mixing C statements with
> GIMPLE by having a separate
> GIMPLE parser for parsing GIMPLE functions.
> (I suppose the GIMPLE function parser would need to do minimal parsing
> of decls/types to recognize
> the input is a declaration and call C parsing routines for parsing the
> whole decl)

Yes, eventually the C frontend provides routines that can be used
to tentatively parse declarations / types used in the function.

> When C front-end is invoked with -fgimple it should probably only
> accept functions marked as "gimple".
> Does this sound reasonable ?

I think -fgimple would only enable recognizing the __GIMPLE keyword,
I wouldn't change all defs to GIMPLE with it.

Richard.

> Thanks,
> Prathamesh
>>
>> In private discussion we somewhat agreed (Micha - correct me ;)) that
>> iff the GIMPLE FE would replace the C FE function body parsing
>> completely (re-using name lookup infrastructure of course) and iff the
>> GIMPLE FE would emit GIMPLE directly (just NULL DECL_SAVED_TREE
>> and a GIMPLE seq in DECL_STRUCT_FUNCTION->gimple_body)
>> then "re-using" the C FE would be a way to greatly speed up success.
>>
>> The other half of the project would then be to change the pass manager
>> to do something sensible with the produced GIMPLE as well as making
>> our dumps parseable by the GIMPLE FE.
>>
>> Richard.


Mysterious decision in combine

2016-03-19 Thread Dominik Vogt
Well, at least what combine does here is a mystery to me (s390x
with -O3 in case it matters).

Rtl before combine:

-- snip --
(insn 6 3 7 2 (parallel [
(set (reg:SI 64)
(and:SI (mem:SI (reg/v/f:DI 63 [ a ]) [1 *a_2(D)+0 S4 A32])
(const_int -65521 [0xffffffffffff000f])))
(clobber (reg:CC 33 %cc))
]) andc-immediate.c:21 1481 {*andsi3_zarch}
 (expr_list:REG_DEAD (reg/v/f:DI 63 [ a ])
(expr_list:REG_UNUSED (reg:CC 33 %cc)
(nil))))
(insn 7 6 12 2 (set (reg:DI 65)
(zero_extend:DI (reg:SI 64))) andc-immediate.c:21 1207 
{*zero_extendsidi2}
 (expr_list:REG_DEAD (reg:SI 64)
(nil)))
(insn 12 7 13 2 (set (reg/i:DI 2 %r2)
(reg:DI 65)) andc-immediate.c:22 1073 {*movdi_64}
 (expr_list:REG_DEAD (reg:DI 65)
(nil)))
-- snip --

How does combine get this idea (it's the only match in the
function)?

  Trying 7 -> 12:
  Successfully matched this instruction:
  (set (reg/i:DI 2 %r2)
  (and:DI (subreg:DI (reg:SI 64) 0)
  (const_int 4294901775 [0xffff000f])))
  allowing combination of insns 7 and 12

=>

-- snip --
(insn 6 3 7 2 (parallel [
(set (reg:SI 64)
(and:SI (mem:SI (reg:DI 2 %r2 [ a ]) [1 *a_2(D)+0 S4 A32])
(const_int -65521 [0xffffffffffff000f])))
(clobber (reg:CC 33 %cc))
]) andc-immediate.c:21 1481 {*andsi3_zarch}
 (expr_list:REG_DEAD (reg:DI 2 %r2 [ a ])
(expr_list:REG_UNUSED (reg:CC 33 %cc)
(nil))))
(insn 12 7 13 2 (parallel [
(set (reg/i:DI 2 %r2)
(and:DI (subreg:DI (reg:SI 64) 0)
 ^^^
(const_int 4294901775 [0xffff000f])))
                       ^^^^^^^^^^
(clobber (reg:CC 33 %cc))
]) andc-immediate.c:22 1474 {*anddi3}
 (expr_list:REG_UNUSED (reg:CC 33 %cc)
(expr_list:REG_DEAD (reg:SI 64)
(nil))))
-- snip --

It combines "zero extend" + "copy to hardreg" into an "and with
0x000f".  That is the correct result for combining insn 6 + 7
+ 12, however.  (Eventually the two "and"s with constant values
are not merged into a single "and" with a single constant.)

(dumps attached)

Ciao

Dominik ^_^  ^_^

-- 

Dominik Vogt
IBM Germany

;; Function andc_32_pv (andc_32_pv, funcdef_no=0, decl_uid=1973, cgraph_uid=0, 
symbol_order=0)

starting the processing of deferred insns
ending the processing of deferred insns
df_analyze called
df_worklist_dataflow_doublequeue:n_basic_blocks 3 n_edges 2 count 3 (1)


andc_32_pv

Dataflow summary:
;;  invalidated by call  0 [%r0] 1 [%r1] 2 [%r2] 3 [%r3] 4 [%r4] 5 
[%r5] 16 [%f0] 17 [%f2] 18 [%f4] 19 [%f6] 20 [%f1] 21 [%f3] 22 [%f5] 23 [%f7] 
33 [%cc] 35 [%rp] 38 [%v16] 39 [%v18] 40 [%v20] 41 [%v22] 42 [%v17] 43 [%v19] 
44 [%v21] 45 [%v23] 46 [%v24] 47 [%v26] 48 [%v28] 49 [%v30] 50 [%v25] 51 [%v27] 
52 [%v29] 53 [%v31]
;;  hardware regs used   15 [%r15] 32 [%ap] 34 [%fp]
;;  regular block artificial uses11 [%r11] 15 [%r15] 32 [%ap] 34 [%fp]
;;  eh block artificial uses 11 [%r11] 15 [%r15] 32 [%ap] 34 [%fp]
;;  entry block defs 0 [%r0] 2 [%r2] 3 [%r3] 4 [%r4] 5 [%r5] 6 [%r6] 11 
[%r11] 14 [%r14] 15 [%r15] 16 [%f0] 17 [%f2] 18 [%f4] 19 [%f6] 32 [%ap] 34 [%fp]
;;  exit block uses  2 [%r2] 11 [%r11] 14 [%r14] 15 [%r15] 34 [%fp]
;;  regs ever live   2 [%r2] 33 [%cc]
;;  ref usage   r0={1d} r2={2d,3u} r3={1d} r4={1d} r5={1d} r6={1d} r11={1d,2u} 
r14={1d,1u} r15={1d,2u} r16={1d} r17={1d} r18={1d} r19={1d} r32={1d,1u} 
r33={1d} r34={1d,2u} r63={1d,1u} r64={1d,1u} r65={1d,1u} 
;;total ref usage 34{20d,14u,0e} in 5{5 regular + 0 call} insns.
;; Reaching defs:
;;  sparse invalidated  
;;  dense invalidated   0, 1, 2, 3, 4, 5, 10, 11, 12, 13, 15
;;  reg->defs[] map:0[0,0] 2[1,2] 3[3,3] 4[4,4] 5[5,5] 6[6,6] 11[7,7] 
14[8,8] 15[9,9] 16[10,10] 17[11,11] 18[12,12] 19[13,13] 32[14,14] 33[15,15] 
34[16,16] 63[17,17] 64[18,18] 65[19,19] 

( )->[0]->( 2 )
;; bb 0 artificial_defs: { d0(0){ }d2(2){ }d3(3){ }d4(4){ }d5(5){ }d6(6){ 
}d7(11){ }d8(14){ }d9(15){ }d10(16){ }d11(17){ }d12(18){ }d13(19){ }d14(32){ 
}d16(34){ }}
;; bb 0 artificial_uses: { }
;; lr  in   
;; lr  use  
;; lr  def   0 [%r0] 2 [%r2] 3 [%r3] 4 [%r4] 5 [%r5] 6 [%r6] 11 [%r11] 14 
[%r14] 15 [%r15] 16 [%f0] 17 [%f2] 18 [%f4] 19 [%f6] 32 [%ap] 34 [%fp]
;; live  in 
;; live  gen 0 [%r0] 2 [%r2] 3 [%r3] 4 [%r4] 5 [%r5] 6 [%r6] 11 [%r11] 14 
[%r14] 15 [%r15] 16 [%f0] 17 [%f2] 18 [%f4] 19 [%f6] 32 [%ap] 34 [%fp]
;; live  kill   
;; rd  in   (0) 
;; rd  gen  (15) 
0[0],2[2],3[3],4[4],5[5],6[6],11[7],14[8],15[9],16[10],17[11],18[12],19[13],32[14],34[16]
;; rd  kill (16) 
0[0],2[1,2],3[3],4[4],5[5],6[6],11[7],14[8],15[9],16[10],17[11],18[12],19[13],32[14],34[16]
;;  UD chains for artificial uses at top
;; lr  out   2 [%r2] 11 [%r11] 14 [%r14] 15 [%r15] 32 [%ap] 34 [%fp]
;; live  out 2 [%r2] 11 [%r11] 14 [%r14] 

Leaking bitmap data in ree.c?

2016-03-19 Thread Jeff Law


Is it just me, or does find_removable_extensions leak bitmap data for 
INIT, KILL, GEN and TMP?


It calls bitmap_initialize on all of them, but never clears the bitmaps...

Am I missing something?

jeff


Re: Is test case with 700k lines of code a valid test case?

2016-03-19 Thread Jakub Jelinek
On Fri, Mar 18, 2016 at 05:16:50PM +, paul_kon...@dell.com wrote:
> 
> > On Mar 18, 2016, at 12:53 PM, Paulo Matos  wrote:
> > 
> > 
> > 
> > On 18/03/16 15:02, Jonathan Wakely wrote:
> >> 
> >> It's probably crashing because it's too large, so if you reduce it
> >> then it won't crash.
> >> 
> > 
> > Would be curious to see what's the limit though, or if it depends on the
> > machine he's running GCC on.
> 
> It presumably depends on the machine, or rather the resource limits currently 
> in effect (ulimit etc.)  But the expected outcome when a resource limit is 
> exceeded is a clean error message saying so, not a crash.

It depends.  If the problem is e.g. running into the ulimit -s limit, then
the compiler just crashes, and all the driver can do is report that it
crashed.  Slowing down the compiler by checking the stack depth in every
function, just in case an over-limit testcase is being compiled, is
undesirable.

Jakub


Aggressive load in gcc when accessing escaped pointer?

2016-03-19 Thread Cy Cheng
Hi,

Please look at this c code:

typedef struct _PB {
  void* data;  /* required.*/
  int   f1_;
  float f2_;
} PB;

PB** bar(PB** t);

void qux(PB* c) {
  bar(&c);  /* c is escaped because of bar */
  c->f1_ = 0;
  c->f2_ = 0.f;
}

// gcc-5.2.1 with -fno-strict-aliasing -O2 on x86
call    bar
movq    8(%rsp), %rax   <= load pointer c
movl    $0, 8(%rax)
movl    $0x00000000, 12(%rax)

// intel-icc-13.0.1 with -fno-strict-aliasing -O2 on x86
call  bar(_PB**)
movq  (%rsp), %rdx  <= load pointer c
movl  %ecx, 8(%rdx)
movq  (%rsp), %rsi   <= load pointer c
movl  %ecx, 12(%rsi)

GCC only loads pointer c once.  But what if I implement bar in this way:
PB** bar(PB** t) {
  char* ptr = (char*)t;
  *t = (PB*)(ptr-8);
  return t;
}

Does this violate this C99 standard rule
(http://www.open-std.org/jtc1/sc22/wg14/www/docs/n1256.pdf)?

   6.5.16 Assignment operators

   "3. ... The side effect of updating the stored value of the left operand
shall occur between the previous and the next sequence point."

Thanks,
CY


Re: Mysterious decision in combine

2016-03-19 Thread Richard Henderson
On 03/16/2016 11:35 PM, Dominik Vogt wrote:
> How does combine get this idea (it's the only match in the
> function)?
> 
>   Trying 7 -> 12:
>   Successfully matched this instruction:
>   (set (reg/i:DI 2 %r2)
>   (and:DI (subreg:DI (reg:SI 64) 0)
>   (const_int 4294901775 [0xffff000f])))
>   allowing combination of insns 7 and 12

From the recorded nonzero_bits.

> (Eventually the two "and"s with constant values
> are not merged into a single "and" with a single constant.)

That's a concern...


r~



Re: Is test case with 700k lines of code a valid test case?

2016-03-19 Thread Paulo Matos


On 18/03/16 15:02, Jonathan Wakely wrote:
> 
> It's probably crashing because it's too large, so if you reduce it
> then it won't crash.
> 

Would be curious to see what's the limit though, or if it depends on the
machine he's running GCC on.

-- 
Paulo Matos





Re: [gimplefe] [gsoc16] Gimple Front End Project

2016-03-19 Thread Prathamesh Kulkarni
On 15 March 2016 at 20:46, Richard Biener  wrote:
> On Mon, Mar 14, 2016 at 7:27 PM, Michael Matz  wrote:
>> Hi,
>>
>> On Thu, 10 Mar 2016, Richard Biener wrote:
>>
>>> Then I'd like to be able to re-construct SSA without jumping through
>>> hoops (usually you can get close but if you require copies propagated in
>>> a special way you are basically lost for example).
>>>
>>> Thus my proposal to make the GSoC student attack the unit-testing
>>> problem by doing modifications to the pass manager and "extending" an
>>> existing frontend (C for simplicity).
>>
>> I think it's wrong to try to shoehorn the gimple FE into the C FE.  C is
>> fundamentally different from gimple and you'd have to sprinkle
>> gimple_dialect_p() all over the place, and maintaining that while
>> developing future C improvements will turn out to be much work.  Some
>> differences of C and gimple:
>>
>> * C has recursive expressions, gimple is n-op stmts, no expressions at all
>> * C has type promotions, gimple is explicit
>> * C has all other kinds of automatic conversion (e.g. pointer decay)
>> * C has scopes, gimple doesn't (well, global and local only), i.e. symbol
>>   lookup is much more complicated
>> * C doesn't have exceptions
>> * C doesn't have class types, gimple has
>> * C doesn't have SSA (yes, I'm aware of your suggestions for that)
>> * C doesn't have self-referential types
>> * C FE generates GENERIC, not GIMPLE (so you'd need to go through the
>>   gimplifier and again would feed gimple directly into the passes)
>>
>> I really don't think changing the C FE to accept gimple is a useful way
>> forward.
>
> So I am most worried about replicating all the complexity of types and decl
> parsing for the presumably nice and small function body parser.
Um, would it be a good idea to separate "gimple" functions from regular C
functions, say by annotating the function definition with a "gimple"
attribute?
A "gimple" function should contain only gimple stmts and not C.
eg:
__attribute__((gimple))
void foo(void)
{
  // local decls/initializers in C
  // GIMPLE body
}
Or perhaps we could add a new keyword "gimple" telling C FE that this
is a GIMPLE function.

My intention is that we could reuse the C FE for parsing types and decls
(which I suppose is the primary motivation behind reusing the C FE) and
avoid mixing C statements with GIMPLE by having a separate GIMPLE parser
for parsing GIMPLE functions.  (I suppose the GIMPLE function parser
would need to do minimal parsing of decls/types to recognize that the
input is a declaration, and call C parsing routines for parsing the
whole decl.)
When the C front end is invoked with -fgimple, it should probably only
accept functions marked as "gimple".
Does this sound reasonable?

Thanks,
Prathamesh
>
> In private discussion we somewhat agreed (Micha - correct me ;)) that
> iff the GIMPLE FE would replace the C FE function body parsing
> completely (re-using name lookup infrastructure of course) and iff the
> GIMPLE FE would emit GIMPLE directly (just NULL DECL_SAVED_TREE
> and a GIMPLE seq in DECL_STRUCT_FUNCTION->gimple_body)
> then "re-using" the C FE would be a way to greatly speed up success.
>
> The other half of the project would then be to change the pass manager
> to do something sensible with the produced GIMPLE as well as making
> our dumps parseable by the GIMPLE FE.
>
> Richard.


Re: Is test case with 700k lines of code a valid test case?

2016-03-19 Thread Paulo Matos


On 14/03/16 16:31, Andrey Tarasevich wrote:
> Hi,
> 
> I have a source file with 700k lines of code 99% of which are printf() 
> statements. Compiling this test case crashes GCC 5.3.0 with segmentation 
> fault. 
> Can such test case be considered valid or source files of size 35 MB are too 
> much for a C compiler and it should crash? It crashes on Ubuntu 14.04 64bit 
> with 16GB of RAM. 
> 
> Cheers,
> Andrey
> 

I would think it's useful but a reduced version would be great.
Can you reduce the test? If you need a hand, I can help. Contact me
directly and I will give it a try.

Cheers,
-- 
Paulo Matos


February/March 2016 GNU Toolchain Update

2016-03-19 Thread Nick Clifton
Hi Guys,

  There are lots of new features to report about this time, so here
  goes:

  * GDB 7.11 has been released.

This release brings many new features and enhancements, including:

+ Per-inferior thread numbers.
  (thread numbers are now per inferior instead of being global).

+ GDB now allows users to specify breakpoint locations using a
  more explicit syntax (named "explicit location").

+ When hitting a breakpoint or receiving a signal while debugging
  a multi-threaded program, the debugger now shows which thread
  triggered the event.

+ Record btrace now supports non-stop mode.

+ Various improvements on AArch64 GNU/Linux:
   - Multi-architecture debugging support.
   - Displaced stepping.
   - Tracepoint support added in GDBserver.

+ Kernel-based threads support on FreeBSD.


  * Not to be outdone the GLIBC team have also announced a major new
release - version 2.23.  Full details can be found here:

  https://www.sourceware.org/ml/libc-alpha/2016-02/msg00502.html

But here are some highlights:
  
  + Unicode 8.0.0 Support

  + getaddrinfo now detects certain invalid responses on an internal
netlink socket.

  + A defect in the malloc implementation could result in the
unnecessary serialization of memory allocation requests across 
threads.  The defect is now corrected.  Users should see a
substantial increase in the concurrent throughput of allocation
requests for applications which used to trigger this bug.
Affected applications typically create and destroy threads
frequently.

  + There is now a --disable-timezone-tools configure option for
disabling the building and installing of the timezone related
utilities (zic, zdump, and tzselect).

  + The obsolete header <regexp.h> has been removed.  Programs that
require this header must be updated to use <regex.h> instead.

  + Optimized string, wcsmbs and memory functions for IBM z13.


  Meanwhile in GCC land, work on getting the code ready for the GCC 6
  branch continues at a furious pace.  Some new features have made it
  in over the last couple of months however, and here are the details:
  
  * Initial support for the C++ Extensions for Concepts Technical
Specification, ISO 19217 (2015), has been added to G++.  This allows
code like this:

  template <typename T> concept bool Addable = requires (T t) { t + t; };
  template <Addable T> T add (T a, T b) { return a + b; }

  * The new GCC warning option "-Wnonnull-compare" can be used to
generate a warning when comparing a variable with the "nonnull"
attribute against null.

  * The -Wunused-const-variable option has been extended.  A setting of
-Wunused-const-variable=1 only generates warnings about unused
static const variables in the main compilation unit, and not in
headers.  A setting of -Wunused-const-variable=2 also warns about
unused static const variables in non-system header files.  This
second setting corresponds to the old -Wunused-const-variable
behaviour but it must now be explicitly requested since in C++ it
is not an error and in C it might be hard to clean up all headers
involved.
  
  * The -fshort-double command line option has now been deprecated.

  * The ARC backend of GCC now supports a -mfpu= command line option to
select the type of floating point operations that can be used.

  * A GCC enhancement was made a while ago, but I totally failed to
report on it.  Fortunately reader David Wolfherd pointed this out
to me, so here is the news:
 
The inline assembler feature in GCC now has the ability to specify
the flags set in the condition code register as part of the output
of the asm.  This helps the compiler as it can now use that
information to improve the code that it generates after the inline
asm.

For more details see:
 
  https://gcc.gnu.org/onlinedocs/gcc/Extended-Asm.html#FlagOutputOperands

and:

  
http://developerblog.redhat.com/2016/02/25/new-asm-flags-feature-for-x86-in-gcc-6/


  There are also some changes to report in the binutils:

   * The binutils for ELF based targets can now handle commons as either
 the STT_COMMON or STT_OBJECT type.  A configure time option can
 select the default, and command line options to ld, gas and objcopy
 can be used to specify exactly which type should be used.
   
   * The LD linker now supports a couple of new features:

 + The command line option "-z noreloc-overflow" in the x86-64 ELF
   linker to disable relocation overflow check.
 
 + The command line options "-z nodynamic-undefined-weak" in the x86
   ELF linker can be used to avoid generating dynamic relocations
   against undefined weak symbols in executable.

   * The GAS assembler can now set ELF section flags and types via
 numeric values.  This allows extra or unusual bits in thes

Re: Is test case with 700k lines of code a valid test case?

2016-03-19 Thread Jonathan Wakely
On 18 March 2016 at 12:45, Paulo Matos wrote:
>
>
> On 14/03/16 16:31, Andrey Tarasevich wrote:
>> Hi,
>>
>> I have a source file with 700k lines of code 99% of which are printf() 
>> statements. Compiling this test case crashes GCC 5.3.0 with segmentation 
>> fault.
>> Can such test case be considered valid or source files of size 35 MB are too 
>> much for a C compiler and it should crash? It crashes on Ubuntu 14.04 64bit 
>> with 16GB of RAM.
>>
>> Cheers,
>> Andrey
>>
>
> I would think it's useful but a reduced version would be great.
> Can you reduce the test? If you need a hand, I can help. Contact me
> directly and I will give it a try.

It's probably crashing because it's too large, so if you reduce it
then it won't crash.


Re: ipa vrp implementation in gcc

2016-03-19 Thread kugan

On 18/01/16 20:42, Richard Biener wrote:

I have (very incomplete) prototype patches to do a dominator-based
approach instead (what is refered to downthread as non-iterating approach).
That's cheaper and is what I'd like to provide as a "utility style" interface
to things like niter analysis which need range-info based on a specific
dominator (the loop header for example).


I am not sure if this is still of interest for GSoC.  In the meantime, I
was looking at intra-procedural early VRP as suggested.


If I understand this correctly, we have to traverse the dominator tree,
forming subregions (or scopes) where a variable will have a certain
range.  We would have to record the ranges discovered in each subregion
(scope) context and use them to derive more ranges (for any operation
whose operands have known ranges).  We will have to keep the contexts on
a stack.  We also have to handle loop index variables.


For example,
void bar1 (int, int);
void bar2 (int, int);
void bar3 (int, int);
void bar4 (int, int);

void foo (int a, int b)
{
  int t = 0;

  //region 1
  if (a < 10)
{
  //region 2
  if (b > 10)
{
  //region 3
  bar1 (a, b);
}
  else
{
  //region 4
  bar2 (a, b);
}
}
  else
{
  //region 5
  bar3 (a, b);
}

  bar4 (a, b);
}


I am also wondering whether we should split live ranges to get better
value ranges (for the example shown above).


Thanks,
Kugan


Re: Is test case with 700k lines of code a valid test case?

2016-03-19 Thread Paul_Koning

> On Mar 18, 2016, at 12:53 PM, Paulo Matos  wrote:
> 
> 
> 
> On 18/03/16 15:02, Jonathan Wakely wrote:
>> 
>> It's probably crashing because it's too large, so if you reduce it
>> then it won't crash.
>> 
> 
> Would be curious to see what's the limit though, or if it depends on the
> machine he's running GCC on.

It presumably depends on the machine, or rather the resource limits currently 
in effect (ulimit etc.)  But the expected outcome when a resource limit is 
exceeded is a clean error message saying so, not a crash.

paul



gcc-4.9-20160316 is now available

2016-03-19 Thread gccadmin
Snapshot gcc-4.9-20160316 is now available on
  ftp://gcc.gnu.org/pub/gcc/snapshots/4.9-20160316/
and on various mirrors, see http://gcc.gnu.org/mirrors.html for details.

This snapshot has been generated from the GCC 4.9 SVN branch
with the following options: svn://gcc.gnu.org/svn/gcc/branches/gcc-4_9-branch 
revision 234269

You'll find:

 gcc-4.9-20160316.tar.bz2 Complete GCC

  MD5=846d3e0ceb17b8121181413ce7de6e33
  SHA1=536f4c60eeb913b8d22011ba2ce3843f12a409d0

Diffs from 4.9-20160309 are available in the diffs/ subdirectory.

When a particular snapshot is ready for public consumption the LATEST-4.9
link is updated and a message is sent to the gcc list.  Please do not use
a snapshot before it has been announced that way.


Re: Is test case with 700k lines of code a valid test case?

2016-03-19 Thread Jakub Jelinek
On Fri, Mar 18, 2016 at 02:02:48PM +, Jonathan Wakely wrote:
> On 18 March 2016 at 12:45, Paulo Matos wrote:
> >> I have a source file with 700k lines of code 99% of which are printf() 
> >> statements. Compiling this test case crashes GCC 5.3.0 with segmentation 
> >> fault.
> >> Can such test case be considered valid or source files of size 35 MB are 
> >> too much for a C compiler and it should crash? It crashes on Ubuntu 14.04 
> >> 64bit with 16GB of RAM.
> >>
> >> Cheers,
> >> Andrey
> >>
> >
> > I would think it's useful but a reduced version would be great.
> > Can you reduce the test? If you need a hand, I can help. Contact me
> > directly and I will give it a try.
> 
> It's probably crashing because it's too large, so if you reduce it
> then it won't crash.

But if most of the lines are pretty much the same or similar, it might be
worth trying to recreate it with preprocessor macros.  Just pick a couple
of the most common lines and duplicate them as many times as needed to
get the testcase to a similar size.

Jakub