Hi Blake,
GNU APL normally chooses the generic way. The reason is simple: there
are 3×3 = 9 combinations of INT, REAL, and COMPLEX arguments for a
dyadic function (or 3×3×3 = 27 if you also count the axis). With 24
or so scalar functions this would give more than 200 cases
- too much for a lazy guy like me.
Another problem is that real arithmetic can lead to complex results,
e.g. ¯1⋆.5.
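For example (a minimal C++ sketch, my own illustration and not GNU APL
source - the same square root that fails on a plain double succeeds on
complex<double>):

    #include <complex>
    #include <cmath>
    #include <cstdio>

    int main()
    {
        // (-1)^0.5 has no real result: std::pow on double returns NaN
        double r = std::pow(-1.0, 0.5);

        // the same operation on complex<double> succeeds: roughly 0J1
        std::complex<double> c =
            std::pow(std::complex<double>(-1.0, 0.0), 0.5);

        std::printf("real:    %g\n", r);                      // nan
        std::printf("complex: %gJ%g\n", c.real(), c.imag());  // ~0J1
        return 0;
    }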
What GNU APL does is check the result type (INT, REAL, or COMPLEX)
rather than the argument types, and demote complex near-real results
to real and real near-int values to int.
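A sketch of that demotion step (illustrative C++ only, with made-up
names and a simplified tolerance - not the actual GNU APL code):

    #include <complex>
    #include <cmath>
    #include <cstdint>

    enum class Tag { INT, REAL, COMPLEX };

    struct Value { Tag tag; int64_t i; double r; std::complex<double> c; };

    // demote a complex result whose imaginary part is negligible
    // relative to its real part, then demote an exactly integral
    // real to int if it fits
    Value demote(std::complex<double> z, double eps = 1e-13)
    {
        if (std::abs(z.imag()) <= eps * std::abs(z.real()))
        {
            const double re = z.real();
            if (re == std::nearbyint(re) && std::abs(re) < 9.0e15)
                return Value{ Tag::INT, (int64_t)re, 0.0, {} };
            return Value{ Tag::REAL, 0, re, {} };
        }
        return Value{ Tag::COMPLEX, 0, 0.0, z };
    }

With a rule like this, 0J0 demotes all the way down to integer 0,
because |0| ≤ eps·|0| holds.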
Contrary to your opinion below, the most generic number type in GNU APL
is not *double* but *complex<double>*.
And that caused the failure that was fixed in SVN 219: the internal 0J0
result was not properly recognized as near-real, so it was left as is.
Type-specific functions are only used if:
1. they have considerably better performance than a generic variant, and
2. they are frequently used with large arguments.
That was not the case for Encode, so the clear integer 0 was represented
as 0J0 (to be generic) and, by mistake, was not converted back to 0.
There is a serious problem with ⎕CT as such. ⎕CT is 1E¯13 by default,
but our numbers can be as small as 1E¯308. So we cannot simply set
everything small (say < 1E¯13) to 0, or make complex numbers with small
imaginary parts real, because we would lose precision when doing so.
The strategy of GNU APL is to keep internal precision as long as
possible and to demote only if absolutely necessary. This decision is
made on a per-primitive basis. Encode was actually demoted, except that
demotion of 0J0 did not result in integer 0.
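The precision loss is easy to demonstrate (again an illustrative C++
sketch of my own, not GNU APL code):

    #include <cmath>
    #include <cstdio>

    // absolute thresholding: everything below 1e-13 collapses to 0
    double absolute_demote(double x)
    { return std::fabs(x) < 1e-13 ? 0.0 : x; }

    // ⎕CT-style relative comparison: the difference must be small
    // relative to the magnitudes of the operands
    bool ct_equal(double a, double b, double ct = 1e-13)
    {
        return std::fabs(a - b) <= ct * std::fmax(std::fabs(a), std::fabs(b));
    }

    int main()
    {
        double a = 1.0e-300, b = 2.0e-300;   // differ by a factor of 2

        // absolute thresholding destroys both values...
        std::printf("%g %g\n", absolute_demote(a), absolute_demote(b)); // 0 0

        // ...while the relative comparison still tells them apart
        std::printf("%d\n", (int)ct_equal(a, b));                       // 0
        return 0;
    }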
I would also say that your rules below are implemented in GNU APL to the
extent that they are correct. They are not entirely correct, though,
since Integer + Integer can be double if the maximum integer is
exceeded (and so on...).
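A minimal illustration of the overflow case (C++; the names are mine,
and __builtin_add_overflow is a GCC/Clang builtin):

    #include <cstdint>
    #include <cstdio>

    // add two int64 values; fall back to double if the sum overflows
    double checked_add(int64_t a, int64_t b, bool & is_int)
    {
        int64_t sum;
        if (!__builtin_add_overflow(a, b, &sum))
        {
            is_int = true;
            return (double)sum;
        }
        is_int = false;            // Integer + Integer became double
        return (double)a + (double)b;
    }

    int main()
    {
        bool is_int;
        const double r = checked_add(INT64_MAX, 1, is_int);
        std::printf("is_int=%d result=%g\n", (int)is_int, r);  // is_int=0
        return 0;
    }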
/// Jürgen
On 06/11/2014 05:52 PM, Blake McBride wrote:
Thanks a lot, Juergen! Disconnected from the standard, I fail to see
how a clear integer can become a complex number - especially in relation
to code/decode. I think there is something fundamentally wrong.
I make the following comments based purely on my own experience with
numbers, and without experience with the GNU APL code. I also think
it is highly likely you know a lot more about this than I do. I just
wanted to share a perhaps ignorant opinion. I apologize in advance.
Numbers have various representations including integer, floating
point, and complex. There are two ways (for the purposes of this
commentary) of performing calculations:
1. Remember the exact type and perform the calculation based on the
type or circumstances, i.e.

    switch (number_type) {
    case INTEGER: int_res     = int_x     + int_y;     break;
    case FLOAT:   float_res   = float_x   + float_y;   break;
    case COMPLEX: complex_res = complex_x + complex_y; break;
    }
2. Do it generically:

    res = x + y;
In other words, one can define all numbers in a C program to be double
(generic math) and do all calculations on doubles. The problem, of
course, is the unfixable rounding errors in cases where only integer
calculations are needed.
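The rounding problem shows up as soon as integers exceed 2^53, for
example (C++):

    #include <cstdint>
    #include <cstdio>

    int main()
    {
        // doubles represent integers exactly only up to 2^53
        const int64_t big = (int64_t)1 << 53;

        const double d = (double)big;
        std::printf("%d\n", (int)(d + 1.0 == d));           // 1: the +1 is lost

        // the same addition on the integer type is exact
        std::printf("%lld\n", (long long)(big + 1 - big));  // 1
        return 0;
    }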
It is a lot more work to remember the data types and keep the math at
the simplest sufficient level than to generalize everything into one
overly broad generic representation.
Irrespective of any "standards", I fail to see how operating on integers
in an integer-only way can bypass reals and produce complex numbers.
It seems like the math is being done far too generically. If that is
true, there is ultimately no ⎕CT tweaking that will ever reliably fix
the problem.
I think the system should incorporate rules (see the sketch after this
list). Things like:
a. integer plus/minus/times integer always equals integer
b. integer divided by integer produces float but never complex
c. float plus/minus/times/divide float/integer produces float and
never complex
d. etc.
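A quick C++ sketch of such a rule table (hypothetical names; it
deliberately ignores the overflow caveat mentioned above):

    #include <algorithm>

    enum NumType { INTEGER = 0, FLOAT = 1, COMPLEX = 2 };

    // rules (a)-(c): the result type of + - × is the wider of the two
    // argument types, and ÷ promotes integer arguments to float; a
    // result is never COMPLEX unless an argument already was
    NumType result_type(char op, NumType a, NumType b)
    {
        NumType t = std::max(a, b);
        if (op == '/' && t == INTEGER)   // rule (b): int ÷ int -> float
            t = FLOAT;
        return t;
    }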
It is not possible for ⎕CT to substitute for rules like these. ⎕CT
can only be used to minimize problems, never to eliminate them.
Although there are many situations with utterly no fix, utilizing
rules like these makes the system easy for the programmer to deal
with - i.e. he knows when he has created a problem (like using
division).
Just for grins, I tried the problem on IBM APL 2. It had no problem
with 200. Is there a ⎕CT test I can do to determine whether it
involves ⎕CT?
One serious fear is that APL uses 0 to represent false. Conditional
statements control the flow of a program. What happens if we cannot
rely on zero being zero?
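For instance (C++ sketch): a "zero" that picked up a tiny rounding
error is still true:

    #include <cstdio>

    int main()
    {
        const double flag = 1.0e-16;   // should be 0, but isn't exactly

        if (flag)                      // any nonzero value is true
            std::printf("branch taken although flag is 'zero'\n");
        return 0;
    }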
With deep respect and appreciation for what you have done,
Blake
On Wed, Jun 11, 2014 at 7:54 AM, Juergen Sauermann
<juergen.sauerm...@t-online.de> wrote:
Hi,
I have changed the code so that near-zero complex numbers in ⊤ are
demoted to integer 0, see SVN 319. This isn't quite in line with
the standard, which says that ⎕CT is not used in ⊤, but it makes
more sense to me.
/// Jürgen