Hi Devs,
consider the reduced test case below, extracted from Boost.Lambda.
$cat result_of_tests.cpp
#include <boost/lambda/lambda.hpp>
#include <stdio.h>

template<class A, class F>
typename boost::result_of<F(A)>::type apply1(F f, A b) {
    return f(b);
}

using namespace boost::lambda;

int main(int, char *[]) {
    int one = 1;
    int d = (apply1<int>(_1, one) == 1);
    printf("\n%d\n", d);
    return 0;
}
When compiled with -O2, it prints 0:
$g++ result_of_tests.cpp  -I ./boost_1_70_0 -O2
$./a.out
0

And when we compile the same test case with -O0, it prints 1:
$g++ result_of_tests.cpp  -I ./boost_1_70_0 -O0
$./a.out
1

The test case above is demonstrated with g++, but the behavior is the
same with clang as well.
When we replace
  int d = (apply1<int>(_1, one) == 1);
with
  int d = (apply1<int&>(_1, one) == 1);

the test case gives the correct result with or without optimization.
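
For reference, here is a Boost-free sketch of the pattern I suspect the
test case reduces to. Note that first_arg is my own hypothetical
stand-in for _1, and I am only assuming that
boost::result_of<F(int)>::type comes out as a reference type here. If
that reading is right, apply1<int> ends up returning a reference to its
by-value parameter b, which is already dead when main compares it:

#include <stdio.h>

// Hypothetical stand-in for boost::lambda::_1: a functor that
// returns its argument by reference, as the placeholder does.
struct first_arg {
    int &operator()(int &x) const { return x; }
    const int &operator()(const int &x) const { return x; }
};

// With A = int, b is a local copy, so the reference returned by
// f(b) points into apply1's own stack frame and dangles on return.
// With A = int&, b aliases the caller's variable and stays valid.
template<class A>
const int &apply1(first_arg f, A b) {
    return f(b);
}

int main(int, char *[]) {
    int one = 1;
    // With A = int this reads through a dangling reference, which is
    // undefined behaviour, so -O0 and -O2 are free to disagree.
    int d = (apply1<int>(first_arg(), one) == 1);
    printf("\n%d\n", d);
    return 0;
}

If that is what the original test case does too, the 0-vs-1 difference
would be undefined behaviour rather than a miscompile, but I would like
that confirmed.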

I wanted to confirm: is this a valid test case, or is the compiler
optimization broken?

Thanks,
Navya
