http://gcc.gnu.org/bugzilla/show_bug.cgi?id=56392
Bug #: 56392
Summary: Crash while filling an odd-pitch 16bpp image with auto-vectorization enabled on x86_64 Linux platform
Classification: Unclassified
Product: gcc
Version: 4.6.1
Status: UNCONFIRMED
Severity: normal
Priority: P3
Component: c++
AssignedTo: unassig...@gcc.gnu.org
ReportedBy: s.jodo...@gmail.com

Created attachment 29491
--> http://gcc.gnu.org/bugzilla/attachment.cgi?id=29491
Source code to reproduce the problem

I have written a very simple program to fill a 32x32 16bpp image with a constant value (1024). The pitch/stride of my image (i.e. the number of bytes between two successive lines) is large enough to hold an entire line, but is purposely set to an odd number. The code is attached to this report.

I am using Linux x86_64 with gcc 4.6.1 (Ubuntu 11.10). The code runs fine at the -O0, -O1 and -O2 optimization levels, and Valgrind does not report any access violation. However, as soon as I switch to -O3 or use the -ftree-vectorize option to enable auto-vectorization, the program crashes:

# g++ -g -O2 -ftree-vectorize ./test.cpp -Wall -pedantic && ./a.out
Segmentation fault

The crash does not happen when I build 32-bit binaries with the -m32 gcc flag. It does not occur either if I use an even pitch (e.g. pitch = width * 2 + 2). The problem also appears to be C++-specific: the code does not crash when I use malloc() instead of the new[] operator.

The problem is also present in g++ 4.4.6 and g++ 4.5.4.

As I understand it, this is related to the memory misalignment caused by the odd pitch, but should the code produced by gcc not be protected against this?