Michael Lazzaro wrote:
> OK, here are the answers so far -- or more accurately, strawman
> interpretations of those answers that should be objected to if they're
> wrong.
>
> 1) Edge cases in array indexing:
>
>     my int @a = (1,2,3);
>
>     @a[0]        # 1
>     @a[1]        # 2
>     @a[2]        # 3
>     @a[3]        # undef (warning: index out-of-bounds)
>     @a[2**128]   # EXCEPTION: index is above max allowed index
I would have thought that for SparseArrays (which require index remapping
anyway), BigInt indices should be allowed.
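After all, the remapping a SparseArray has to do anyway is essentially a
hash lookup, and a hash couldn't care less how many bits the key needs. A
rough sketch of what I mean (the class name and the particular hook methods
here are just for illustration, not a spec proposal):

    # Purely illustrative: a sparse array is little more than a hash
    # keyed by (arbitrarily large) integers, so a BigInt index costs
    # nothing extra.
    class SparseArray {
        has %!cells{Int};   # only the cells actually assigned take up space

        method AT-POS(Int $i)            { %!cells{$i} }
        method ASSIGN-POS(Int $i, $val)  { %!cells{$i} = $val }
        method EXISTS-POS(Int $i)        { %!cells{$i}:exists }
    }

    my $s = SparseArray.new;
    $s[ 2**128 ] = 'way out there';
    say $s[ 2**128 ];    # way out there
    say $s[ 3 ];         # (Any): unset cell, no storage used for it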
>     @a[ Inf ]    # undef (warning: can't use Inf as array index)
I would have thought the behaviour of out-of-range indices like @a[2**128]
and @a[Inf] ought to be consistent. Hence I'd have expected an exception
here.
>     @a[ undef ]  # 1 (warning: undefined index)
>     @a['foo']    # 1 (warning: non-numeric index)
>     @a[ NaN ]    # EXCEPTION: can't use NaN as array index
>
>     @a[-1]       # 3
>     @a[-2]       # 2
>     @a[-3]       # 1
>     @a[-4]       # undef (warning: index out-of-bounds)
>     @a[-Inf]     # undef (warning: can't use Inf as array index)
Another exception, I'd have thought.
> 2) There is a platform-dependent maximum array size ((2**32)-1 for
> 32-bit platforms). Attempting to access an index outside that range
> throws an exception.
Which is why I'd expect that C<Inf> as an index should be fatal.
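That is, under the rule as stated, the index check presumably ends up being
something uniformly fatal like the following (illustrative only; the sub
name, the messages, and the default limit are all made up here), which
leaves Inf and -Inf with nothing sensible to do but throw:

    # Illustrative only: a single, uniformly fatal check for indices
    # that cannot possibly address an element.
    sub check-index($i, $max = 2**32 - 1) {
        die "Cannot use NaN as an array index"
            if $i ~~ Num && $i.isNaN;
        die "Cannot use $i as an array index"
            if $i == Inf || $i == -Inf;
        die "Index $i exceeds the maximum allowed index ($max)"
            if $i.abs > $max;
        return $i.Int;
    }

    check-index(3);          # OK: returns 3
    #check-index(Inf);       # would die: cannot use Inf as an array index
    #check-index(NaN);       # would die: cannot use NaN as an array index
    #check-index(2**128);    # would die: exceeds the maximum allowed index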
> Note that this applies to both 'real' and 'sparse' arrays.
Given the implicit promotion of ints to BigInts everywhere else, this
seems inconsistent. At least for SparseArrays, where indices are remapped
anyway.
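Just to make that concrete, on the arithmetic side the promotion is already
meant to be invisible (illustrative):

    # Illustrative: integer arithmetic is meant to upgrade transparently
    # past the native range, so an index like 2**128 is perfectly
    # representable; it's only a flat, C-style array that has nowhere
    # to put an element at that position.
    my $huge = 2**128;
    say $huge;        # 340282366920938463463374607431768211456
    say $huge + 1;    # 340282366920938463463374607431768211457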
Damian