gcc behavior on memory exhaustion

2017-08-09 Thread Andrew Roberts

I routinely build the weekly snapshots and RCs on x64, arm and aarch64.

The last gcc 8 snapshot and the two recent 7.2 RCs have failed to build 
on aarch64 (Raspberry Pi 3, running Arch Linux ARM). I have finally 
traced this to the system running out of memory. I guess a recent kernel 
update had changed the memory page size, and the swap file was no longer 
being used because the page sizes didn't match.


Obviously this is my issue, but the errors I was getting from gcc did 
not help. I was getting ICEs, thus:


/usr/local/gcc/bin/g++ -Wall -Wextra -Wno-ignored-qualifiers 
-Wno-sign-compare -Wno-write-strings -std=c++14 -pipe -march=armv8-a 
-mcpu=cortex-a53 -mtune=cortex-a53 -ftree-vectorize -O3 
-DUNAME_S=\"linux\" -DUNAME_M=\"aarch64\" -DOSMESA=1 -I../libs/include 
-DRASPBERRY_PI -I/usr/include/freetype2 -I/usr/include/harfbuzz 
-I/usr/include/unicode   -c -o glerr.o glerr.cpp

{standard input}: Assembler messages:
{standard input}: Warning: end of file not at end of a line; newline 
inserted

{standard input}:204: Error: operand 1 must be an integer register -- `mov'
{standard input}: Error: open CFI at the end of file; missing 
.cfi_endproc directive

g++: internal compiler error: Killed (program cc1plus)
Please submit a full bug report,
with preprocessed source if appropriate.
See  for instructions.
make: *** [: glerr.o] Error 4
make: *** Waiting for unfinished jobs

I was seeing the problem when building using make -j2. Both building gcc 
and building large user projects.


There are two issues here:

1) There was discussion about increasing the amount of memory gcc would 
reserve to help speed up compilation of large source files, I wondered 
if this could be a factor.


2) It would be nice to see some sort of out of memory error, rather than 
just an ICE.


The system has 858MB of RAM without the swap file.

Building a single source file seems to use up to 97% of the available 
memory (for a 2522 line C++ source file).


make -j2 is enough to cause the failure.

Regards

Andrew Roberts








Re: regression for microblaze architecture

2017-08-09 Thread Martin Liška
On 05/27/2017 06:09 PM, Michael Eager wrote:
> On 05/27/2017 01:51 AM, Waldemar Brodkorb wrote:
>> Hi,
>>
>> Buildroot and OpenADK have samples to create a Linux system to be
>> bootup in Qemu system emulation for microblaze architecture.
>>
>> With gcc 6.3 and 7.1 the samples are not working anymore,
>> because the Linux system userland does not boot.
>> Qemu 2.9.0:
>> Kernel panic - not syncing: Attempted to kill init!
>> exitcode=0x000b
>> (with glibc, musl and uClibc-ng toolchains)
>>
>> I bisected gcc source code and found the bad commit:
>> 6dcad60c0ef48af584395a40feeb256fb82986a8
>>
>> When reverting the change, gcc 6.3 and 7.1 produces working
>> Linux rootfs again.
>>
>> What can we do about it?
> 
> I will revert this commit in GCC.
> 
> 

Hi.

Looks like the revert caused:
https://gcc.gnu.org/bugzilla/show_bug.cgi?id=68485

Thanks,
Martin


Re: gcc behavior on memory exhaustion

2017-08-09 Thread Markus Trippelsdorf
On 2017.08.09 at 14:05 +0100, Andrew Roberts wrote:
> I routinely build the weekly snapshots and RC's, on x64, arm and aarch64.
> 
> The last gcc 8 snapshot and the two recent 7.2 RC's have failed to build on
> aarch64 (Raspberry Pi 3, running Arch Linux ARM). I have finally traced this
> to the system running out of memory. I guess a recent kernel update had
> changed the memory page size and the swap file was no longer being used
> because the page sizes didn't match.
> 
> Obviously this is my issue, but the errors I was getting from gcc did not
> help. I was getting ICEs, thus:
> 
> /usr/local/gcc/bin/g++ -Wall -Wextra -Wno-ignored-qualifiers
> -Wno-sign-compare -Wno-write-strings -std=c++14 -pipe -march=armv8-a
> -mcpu=cortex-a53 -mtune=cortex-a53 -ftree-vectorize -O3 -DUNAME_S=\"linux\"
> -DUNAME_M=\"aarch64\" -DOSMESA=1 -I../libs/include -DRASPBERRY_PI
> -I/usr/include/freetype2 -I/usr/include/harfbuzz -I/usr/include/unicode   -c
> -o glerr.o glerr.cpp
> {standard input}: Assembler messages:
> {standard input}: Warning: end of file not at end of a line; newline
> inserted
> {standard input}:204: Error: operand 1 must be an integer register -- `mov'
> {standard input}: Error: open CFI at the end of file; missing .cfi_endproc
> directive
> g++: internal compiler error: Killed (program cc1plus)
> Please submit a full bug report,
> with preprocessed source if appropriate.
> See  for instructions.
> make: *** [: glerr.o] Error 4
> make: *** Waiting for unfinished jobs
> 
> I was seeing the problem when building using make -j2. Both building gcc and
> building large user projects.
> 
> There are two issues here:
> 
> 1) There was discussion about increasing the amount of memory gcc would
> reserve to help speed up compilation of large source files, I wondered if
> this could be a factor.
> 
> 2) It would be nice to see some sort of out of memory error, rather than
> just an ICE.

"internal compiler error: Killed" is almost always an out of memory
error. dmesg will show that the OOM killer kicked in and killed the
cc1plus process.
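
To illustrate what to look for (the exact kernel log wording varies by kernel version, so the sample line below is an assumption, not captured output), an OOM kill of cc1plus can be spotted like this:

```shell
# On a real system, search the kernel log for OOM-killer activity:
#   dmesg | grep -iE 'out of memory|oom-killer'
# A typical matching line looks like this sample:
line="Out of memory: Kill process 1234 (cc1plus) score 842 or sacrifice child"
# Extract the killed process and its name from the log line:
echo "$line" | grep -oE 'Kill(ed)? process [0-9]+ \([^)]+\)'
```

If cc1plus shows up there, the "internal compiler error: Killed" message was the OOM killer, not a compiler bug.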

> The system has 858MB of RAM without the swap file.
> 
> Building a single source file seems to use up to 97% of the available memory
> (for a 2522 line C++ source file).
> 
> make -j2 is enough to cause the failure.

Well, you should really use a cross compiler for this setup.

-- 
Markus


Re: gcc behavior on memory exhaustion

2017-08-09 Thread Andrew Haley
On 09/08/17 14:05, Andrew Roberts wrote:
> 2) It would be nice to see some sort of out of memory error, rather than 
> just an ICE.

There's nothing we can do: the kernel killed us.  We can't emit any
message before we die.  (killed) tells you that we were killed, but
we don't know who done it.

-- 
Andrew Haley
Java Platform Lead Engineer
Red Hat UK Ltd. 
EAC8 43EB D3EF DB98 CC77 2FAD A5CD 6035 332F A671


Re: gcc behavior on memory exhaustion

2017-08-09 Thread Andrew Roberts

On 09/08/17 14:05, Andrew Roberts wrote:

[ ... ]

For what it's worth, if I disable the swap on a 32-bit Raspberry Pi 3 
(armv7l) with 936MB of free memory, building my project with make -j4 
only uses 33% of the memory, and does not ICE.


So there seems to be a huge memory usage regression for aarch64 vs arm. 
Nearly 3x the memory usage is more than you would expect from doubling 
pointer sizes. Does aarch64 use a different default preallocation of memory?


Regards

Andrew



Re: gcc behavior on memory exhaustion

2017-08-09 Thread Yuri Gribov
On Wed, Aug 9, 2017 at 2:49 PM, Andrew Haley  wrote:
> On 09/08/17 14:05, Andrew Roberts wrote:
>> 2) It would be nice to see some sort of out of memory error, rather than
>> just an ICE.
>
> There's nothing we can do: the kernel killed us.  We can't emit any
> message before we die.  (killed) tells you that we were killed, but
> we don't know who done it.

Well, the driver could check syslog...

> --
> Andrew Haley
> Java Platform Lead Engineer
> Red Hat UK Ltd. 
> EAC8 43EB D3EF DB98 CC77 2FAD A5CD 6035 332F A671


Re: gcc behavior on memory exhaustion

2017-08-09 Thread Andreas Schwab
On Aug 09 2017, Yuri Gribov  wrote:

> On Wed, Aug 9, 2017 at 2:49 PM, Andrew Haley  wrote:
>> On 09/08/17 14:05, Andrew Roberts wrote:
>>> 2) It would be nice to see some sort of out of memory error, rather than
>>> just an ICE.
>>
>> There's nothing we can do: the kernel killed us.  We can't emit any
>> message before we die.  (killed) tells you that we were killed, but
>> we don't know who done it.
>
> Well, driver could check syslog...

The syslog is very system dependent and may not even be readable by
unprivileged users.

Andreas.

-- 
Andreas Schwab, SUSE Labs, sch...@suse.de
GPG Key fingerprint = 0196 BAD8 1CE9 1970 F4BE  1748 E4D4 88E3 0EEA B9D7
"And now for something completely different."


Re: What to do about all the gcc.dg/guality test failures?

2017-08-09 Thread Jeff Law
On 08/08/2017 01:38 PM, Steve Ellcey wrote:
> I was wondering if something needs to be done about the gcc.dg/guality tests.
> 
> There are two main issues I see with these tests, one is that they are often
> not run during testing and so failures do not show up.  I looked into this
> and found that, at least on my ubuntu 16.04 system, the kernel parameter
> kernel.yama.ptrace_scope is set to 1 by default.  This limits the use of
> ptrace to direct child processes and causes the guality tests to not run
> on my system.  They also don't show up as failures, all you get is a message
> in the test log that says 'gdb: took too long to attach'.  After changing this
> to 0, the guality tests do get run.
> 
> The second problem is that many of the tests fail when they are run.
> For example, looking at some August test runs:
> 
> x86_64 failures:  https://gcc.gnu.org/ml/gcc-testresults/2017-08/msg00651.html
> aarch64 failures: https://gcc.gnu.org/ml/gcc-testresults/2017-08/msg00603.html
> mips64 failures:  https://gcc.gnu.org/ml/gcc-testresults/2017-08/msg00527.html
> s390x failures:   https://gcc.gnu.org/ml/gcc-testresults/2017-08/msg00509.html
> 
> These all show many failures in gcc.dg/guality.  Most of the failures
> are related to using the '-O2 -flto' or '-O3' options.  If I remove those
> option runs I get 15 failures involving 5 tests on my aarch64 system:
> 
>   gcc.dg/guality/pr36728-1.c
>   gcc.dg/guality/pr41447-1.c
>   gcc.dg/guality/pr54200.c
>   gcc.dg/guality/pr54693-2.c
>   gcc.dg/guality/vla-1.c
> 
> So I guess there are a number of questions:  Are these tests worth running?
> Do they make sense with -O3 and/or -O2 -flto?   If they make sense and 
> should be run do we need to fix GCC to clean up the failures?  Or should
> we continue to just ignore them?
I look at them strictly from a regression standpoint.  Whatever set
passes from my baseline run must continue to pass after whatever changes
I'm contemplating.

They're unfortunately very fragile (in terms of how they behave on
different versions of gdb, kernels, etc), so absolute pass/fail status
isn't particularly helpful.
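
The ptrace restriction Steve mentions can be inspected without root; as a sketch (the sysctl line is the documented way to relax it, shown here only as a comment since it needs root):

```shell
# Yama ptrace scope: 0 = unrestricted, 1 = attach limited to direct
# children (guality's gdb cannot attach), 2 = admin only, 3 = disabled.
f=/proc/sys/kernel/yama/ptrace_scope
if [ -r "$f" ]; then
  cat "$f"
else
  echo "yama not enabled on this kernel"
fi
# To relax it for a test run (root required):
#   sysctl -w kernel.yama.ptrace_scope=0
```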

Jeff


Re: [PATCH] Write dependency information (-M*) even if there are errors

2017-08-09 Thread Jeff Law
On 08/06/2017 01:59 AM, Boris Kolpackov wrote:
> Hi,
> 
> Currently GCC does not write extracted header dependency information
> if there are errors. However, this can be useful when dealing with
> outdated generated headers that trigger errors which would have been
> resolved if we could update it. A concrete example in our case is a
> version check with #error.
> 
> The included (trivial) patch changes this behavior. Note also that
> this is how Clang already behaves. I've tested the patch in build2
> and everything works well (i.e., no invalid dependency output in the
> face of various preprocessor errors such as #error, stray #else, etc).
> 
> While I don't foresee any backwards-compatibility issues with such
> an unconditional change (after all, the compiler still exits with
> an error status), if there are concerns, I could re-do it via an
> option (e.g., -ME, analogous to -MG).
> 
> P.S. I have the paperwork necessary to contribute on file with FSF.
This directly reverts part of Joseph's changes from 2009.   I'd like to
hear from him on this change.



commit 7f5f395354b35ab7f472d03dbcce1301ac4f8257
Author: jsm28 
Date:   Sun Mar 29 22:56:07 2009 +

PR preprocessor/34695

[ ... ]
git-svn-id: svn+ssh://gcc.gnu.org/svn/gcc/trunk@145263
138bc75d-0d04-0410-961f-82ee72b054a4



Re: [PATCH] Write dependency information (-M*) even if there are errors

2017-08-09 Thread Joseph Myers
On Wed, 9 Aug 2017, Jeff Law wrote:

> This directly reverts part of Joseph's changes from 2009.   I'd like to
> hear from him on this change.

The point of those changes was to make cpplib diagnostics use the 
compiler's diagnostic machinery rather than a separate set of diagnostic 
machinery in cpplib.  The description, regarding dependency information 
generation, in the original patch description 
, is "the code 
in cpplib that checked for errors before deciding whether to write 
dependency output no longer does so (instead, the compiler has the same 
check, but this time based on whether there were any errors at all, 
whether compiler or preprocessor)".

That is, that patch wasn't meant to make any change to how errors affect 
dependency generation beyond causing compiler errors to be handled the 
same as preprocessor errors.

I suppose a question for the present proposal would be making sure any 
dependencies generated in this case do not include dependencies on files 
that don't exist (so #include "some-misspelling.h" doesn't create any sort 
of dependency on such a header).

-- 
Joseph S. Myers
jos...@codesourcery.com


gcc-6-20170809 is now available

2017-08-09 Thread gccadmin
Snapshot gcc-6-20170809 is now available on
  ftp://gcc.gnu.org/pub/gcc/snapshots/6-20170809/
and on various mirrors, see http://gcc.gnu.org/mirrors.html for details.

This snapshot has been generated from the GCC 6 SVN branch
with the following options: svn://gcc.gnu.org/svn/gcc/branches/gcc-6-branch 
revision 251011

You'll find:

 gcc-6-20170809.tar.xz   Complete GCC

  SHA256=cb44a7c3a9c42ae527a0b14cb7af43c1a7adc1d730cc505a52edb365c73206dd
  SHA1=4a6987dd33168b212c6bf59feffbf08616673a5b
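
Checksums like the above are verified with sha256sum; a self-contained sketch using a stand-in file (since the tarball itself isn't fetched here):

```shell
# For the real tarball you would run:
#   echo "<SHA256 from the announcement>  gcc-6-20170809.tar.xz" | sha256sum -c -
# Demonstration with a stand-in file:
printf 'hello\n' > sample.txt
sum=$(sha256sum sample.txt | cut -d' ' -f1)
# Note the two spaces between hash and filename in the check-file format:
echo "$sum  sample.txt" | sha256sum -c -
```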

Diffs from 6-20170802 are available in the diffs/ subdirectory.

When a particular snapshot is ready for public consumption the LATEST-6
link is updated and a message is sent to the gcc list.  Please do not use
a snapshot before it has been announced that way.