Hi David,
thanks, fixed in SVN 163.
/// Jürgen
On 03/12/2014 06:51 AM, David Lamkins wrote:
Any error thrown from a ⎕SYL setter causes a heap error
and a backtrace.
For example:
⎕syl[5;2]←1
INDEX ERROR
⎕SYL[5;2]←1
^^
*** Error in `/usr/local/bin/apl': double f
To clarify, the appealing feature of TBB that made me interested (apart
from it being very fast) is that its algorithms implement task stealing.
This should make the dispatch quite effective, even if some subtasks are
slower than others. I.e. it may actually address some of the concerns raised
regar
I've done some experiments with Intel's Threading Building Blocks, and
based on my initial tests, it seems incredibly light-weight, and also easy
to use.
I haven't tested with actual GNU APL code yet though (I've written separate
test programs to experiment). My next tests will be on the real thing.
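To give an idea of the kind of experiment meant here (this sketch is mine, not one of the test programs mentioned above): TBB's parallel_for splits an index range into chunks, and idle worker threads steal chunks from busy ones. The loop body just stands in for a scalar APL function; the names and sizes are made up.

    // minimal TBB sketch: apply a "scalar function" (here std::log) to a ravel
    #include <tbb/parallel_for.h>
    #include <tbb/blocked_range.h>
    #include <vector>
    #include <cmath>

    int main()
    {
        std::vector<double> ravel(1000000, 2.0);

        // parallel_for splits [0, size) into chunks; idle worker threads
        // steal chunks from busy ones, so unevenly expensive elements
        // get balanced automatically.
        tbb::parallel_for(tbb::blocked_range<size_t>(0, ravel.size()),
                          [&](const tbb::blocked_range<size_t>& r)
                          {
                              for (size_t i = r.begin(); i != r.end(); ++i)
                                  ravel[i] = std::log(ravel[i]);   // e.g. monadic ⍟
                          });
        return 0;
    }

The same pattern should carry over to any of the scalar primitives, since only the body of the lambda would change.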
Hi Elias,
thanks, fixed in SVN 162. In that range it can still happen that
"small" differences occur, because an operation may be performed
internally as double and then converted back to integer. The double
has only 53 bits of mantissa precision, while the signed 64-bit integer
has 63. It depends a little on how the actua
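(For what it's worth, the rounding is easy to reproduce outside of APL; the following little program is my own illustration, not code from the interpreter:)

    // a 64-bit integer just above 2^53 does not survive a round trip through double
    #include <cstdint>
    #include <cstdio>

    int main()
    {
        int64_t big  = (INT64_C(1) << 53) + 1;      // 9007199254740993
        double  d    = static_cast<double>(big);    // nearest double is ...992
        int64_t back = static_cast<int64_t>(d);

        std::printf("original:   %lld\n", (long long)big);
        std::printf("via double: %lld\n", (long long)back);   // differs by 1
        return 0;
    }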
Hi,
I'm afraid cut-and-paste is the only tool around.
For performance testing it is probably simpler to )DUMP the
Performance.pt testcase at the end, and then modify/start the dumped
workspace with apl -f. Then you don't need to mess around with
matching the interpreter output.
Or just
Hi David,
I guess the circle functions and ⋆/⍟ might do a better job of raising
your motivation!
If I remember correctly, in 1990 we got a speedup of 5-6 on our
32-processor machine, which means that the break-even point is at
about 6 cores. Unfortunately my own machine has only 2 cores
Hi Peter,
I believe you should do something along the lines of:
CXX=llvm-gcc4 ./configure
/// Jürgen
On 03/11/2014 10:29 PM, Peter Teeson wrote:
On 2014-03-11, at 4:30 PM, Peter Teeson wrote:
Mac Pro Desktop, dual quad-core CPUs, Mountain Lion OS X 10.8.5
I DL'd the 2014-01-13 APL 1.2 tar
Hi Elias,
I believe we should first find out how big the thread dispatch effort
actually is, because coalescing can also backfire by creating unequally
distributed intermediate results.
For scalar functions you have a parallel execution time of:
a + b×⌈N÷P where a = startup time (thread dispatch
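To put some (entirely made-up) numbers into that model: the parallel version only wins once b×N − b×⌈N÷P exceeds the startup cost a. A throwaway program along these lines could find the break-even length for measured values of a and b:

    // break-even length N for T_par = a + b*ceil(N/P) versus T_seq = b*N
    #include <cstdio>

    int main()
    {
        const double a = 20000.0;  // assumed startup cost (thread dispatch), in cycles
        const double b = 50.0;     // assumed per-element cost of the scalar function
        const long   P = 4;        // number of cores

        for (long N = 1; ; N *= 2)
        {
            const double t_seq = b * N;
            const double t_par = a + b * ((N + P - 1) / P);   // b × ⌈N÷P⌉
            if (t_par < t_seq)
            {
                std::printf("break-even somewhere below N = %ld elements\n", N);
                return 0;
            }
        }
    }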