On 11/09/2016 06:31 PM, Andrew Stubbs wrote:
> Hi Martin,
>
> It looks like your change r242000 broke builds on 32-bit hosts:
>
> fold-const-call.c:1541:36: error: cannot convert 'size_t* {aka unsigned
> int*}' to 'long long unsigned int*' for argument '2' to 'const char*
> c_getstr(tree, long

Snapshot gcc-6-20161110 is now available on
ftp://gcc.gnu.org/pub/gcc/snapshots/6-20161110/
and on various mirrors, see http://gcc.gnu.org/mirrors.html for details.
This snapshot has been generated from the GCC 6 SVN branch
with the following options: svn://gcc.gnu.org/svn/gcc/branches/gcc-6

Hi all,
in the course of doing some benchmarks on arm with -Os, I noticed that
some list traversal code became significantly slower since gcc 5.3 when
instruction caches are cold.
Reproducer with relevant defines copied from the Linux kernel:
struct list_head {
	struct list_head *next, *prev;
};

Hi Nicolai,
On Fri, Nov 11, 2016 at 12:03:44AM +0100, Nicolai Stange wrote:
> in the course of doing some benchmarks on arm with -Os, I noticed that
> some list traversal code became significantly slower since gcc 5.3 when
> instruction caches are cold.
But is it smaller? This tiny example func

Hi Segher,
thanks for your prompt reply!
Segher Boessenkool writes:
> On Fri, Nov 11, 2016 at 12:03:44AM +0100, Nicolai Stange wrote:
>> in the course of doing some benchmarks on arm with -Os, I noticed that
>> some list traversal code became significantly slower since gcc 5.3 when
>> instruc

> On Nov 10, 2016, at 8:16 PM, Nicolai Stange wrote:
> ...
>
>> There is no way to ask for somewhat fast and somewhat small at the
>> same time, which seems to be what you want?
>
> No, I want small, possibly at the cost of performance to the extent of
> what's sensible. What sensible actually

Hi Nicolai,
On Fri, Nov 11, 2016 at 02:16:18AM +0100, Nicolai Stange wrote:
> >> From the discussion on gcc-patches [1] of what is now the aforementioned
> >> r228318 ("bb-reorder: Add -freorder-blocks-algorithm= and wire it up"),
> >> it is not clear to me whether this change can actually reduce