On 05/12/2015 07:40 PM, Andrew Pinski wrote:
On Tue, May 12, 2015 at 6:36 PM, Fei Ding wrote:
> I think Thiago and Eric just want to know which code-gen is better and why...
You need to understand that for a complex processor (CISC ISAs) like x86,
there is sometimes no single right answer. You need to look at each
micro-arch and understand the pipeline...
I think Thiago and Eric just want to know which code-gen is better and why...

2015-05-12 23:29 GMT+08:00 Eric Botcazou :
>> Note that at -O3 there is a difference still:
>> clang (3.6.0):
>>         addl    %esi, %edi
>>         movl    %edi, %eax
>>         retq
>>
>> gcc (4.9.2)
>>         leal    (%rdi,%rsi), %eax
>>         ret
>>
>> Can't tell which is best, if any.
>
> But what's your point exactly here? You cannot expect...
Note that at -O3 there is a difference still:
clang (3.6.0):
        addl    %esi, %edi
        movl    %edi, %eax
        retq

gcc (4.9.2)
        leal    (%rdi,%rsi), %eax
        ret

Can't tell which is best, if any.

OG.
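The thread never quotes add.c itself; judging from the code generated above
it is presumably just a two-int add, along the lines of the minimal sketch
below (the function and parameter names are assumptions):

    /* add.c -- minimal sketch of the function presumably being compiled;
       the actual source is not shown in the thread. */
    int add(int a, int b)
    {
        return a + b;
    }

Both sequences above leave a + b in %eax; GCC folds the addition and the
move into a single lea, while Clang uses an add followed by a register move.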
On May 11, 2015, at 6:16 PM, Thiago Farina wrote:
Hi,

Clang 3.7 generated the following code:

$ clang -S -O0 -fno-unwind-tables -fno-asynchronous-unwind-tables
add.c -o add_att_x64.s

add:
        pushq   %rbp
        movq    %rsp, %rbp
        movl    %edi, -4(%rbp)
        movl    %esi, -8(%rbp)
        movl    -4(%rbp), %esi
        add