AmrDeveloper wrote:

The current implementation in the incubator emits a CreateComplexOp from the 
scalar operand and then emits a ComplexBinOp between the complex operand and 
the scalar promoted to complex.

This differs from OGCG, which keeps track of whether a complex value was 
created from a scalar when matching the computation type, and in that case 
takes the scalar and uses it directly.

This can lead to different generated IR or different results, for example:

```
int _Complex a = {1, 2};
int b = 2;
int _Complex c = a * b;
```

In OGCG, the lowering will be similar to the following:

```
int c_real = __real__ a * b;
int c_imag = __imag__ a * b;
int _Complex c = { c_real, c_imag };   // Result -> { 2, 4 }
```
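
(Since b has no imaginary part, this is just (1 + 2i) * 2 = 2 + 4i, i.e. { 2, 4 }.)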

In the incubator, the lowering will be similar to the following:

```
int _Complex tmp_b = { b, 0 };
int c_real = __real__ a * __real__ tmp_b;
int c_imag = __imag__ a * __imag__ tmp_b;
int _Complex c = { c_real, c_imag };   // Result -> { 2, 0 }
```

In the case of float, the incubator will call the runtime function with the 
Full RangeKind, for example, while in OGCG it will be just scalar binary 
operations.
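
To illustrate the float case (the helper name __mulsc3 is my assumption of the 
full-range compiler-rt routine; the point is only that a library call appears 
instead of two scalar multiplies):

```
float _Complex a = {1.0f, 2.0f};
float b = 2.0f;
float _Complex c = a * b;

// OGCG: two scalar multiplications on the components
//   __real__ c = __real__ a * b;
//   __imag__ c = __imag__ a * b;

// incubator (before this PR): b is promoted to (b + 0.0fi) and the full-range
// runtime helper is called, e.g. __mulsc3(__real__ a, __imag__ a, b, 0.0f)
```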

In this PR, I removed the creation of the complex value from the real scalar 
and, depending on the operand kinds, emit either a complex or a scalar binary 
operation.
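
As a rough source-level sketch of the behavior after this change (the actual 
change is in the CIR emitter, not in user code):

```
int _Complex a = {1, 2};
int b = 2;

// complex * scalar: scalar binops on the components, no CreateComplexOp for b
int c_real = __real__ a * b;
int c_imag = __imag__ a * b;
int _Complex c = { c_real, c_imag };   // Result -> { 2, 4 }, matching OGCG

// complex * complex: still emitted as a ComplexBinOp
int _Complex d = a * a;
```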

https://github.com/llvm/llvm-project/pull/152915