This is a subject that has interested me a lot, and after coming to a
similar conclusion as Jürgen, I decided to see if it was possible to get
around many of the problems by eliminating the main cause of bad
parallelism: side effects.
I have been working on an experimental APL interpreter (mostly doing …
Hi Xtian,
the problem with that example is that SolveSudoku and even
the lambda {SolveSudoku ⍵} are defined functions and are
therefore allowed to have side effects.
These side effects need to be taken care of, and that causes
either a considerable synchronization …
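The problem described above can be sketched in Python (the thread's examples are APL; the names here, including the placeholder solver and the shared log, are purely illustrative): as soon as the mapped function has a side effect on shared state, every worker must serialize on a lock, which is exactly the synchronization cost a parallel ¨ or ⍤ would have to pay for arbitrary defined functions.

```python
# Sketch: a parallel "each" over a function with a side effect.
# solve_sudoku is a stand-in for the real solver; the side effect is
# the append to a shared list, which forces a lock.
from concurrent.futures import ThreadPoolExecutor
import threading

solved_log = []               # shared state touched as a side effect
log_lock = threading.Lock()   # the interpreter would need something like this

def solve_sudoku(grid):
    result = sorted(grid)     # placeholder for the actual solving work
    with log_lock:            # every worker serializes here
        solved_log.append(len(result))
    return result

grids = [[3, 1, 2], [6, 5, 4]]
with ThreadPoolExecutor(max_workers=2) as pool:
    results = list(pool.map(solve_sudoku, grids))
```

A pure function would need no lock at all, which is why eliminating side effects removes the main obstacle.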
Well, I saw a couple of times where parallelism could have been very useful.
Something like:
{SolveSudoku ⍵}⍤2 ⊣ Cube_of_9x9x1000_sudoku
{SolveSudoku ⍵}¨ big_array_of_enclosed_9x9_sudoku
but I don't want the ⍤ (rank) or ¨ (each)
operators doing parallelism by default, so I thought …
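The opt-in idea above can be sketched in Python (hypothetical names, not GNU APL syntax): an each-style map that stays serial by default and only spawns workers when the caller explicitly asks for it.

```python
# Sketch: "each" is serial unless parallelism is explicitly requested,
# mirroring the suggestion that ¨ and ⍤ should not parallelize by default.
from concurrent.futures import ThreadPoolExecutor

def each(fn, items, parallel=False, workers=4):
    if not parallel:                      # default: plain serial map
        return [fn(x) for x in items]
    with ThreadPoolExecutor(max_workers=workers) as pool:
        return list(pool.map(fn, items))  # explicit opt-in path
```

The default-serial path also avoids paying thread-pool overhead on the small arguments that dominate typical APL sessions.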
Of course. Simple scalar functions would be the worst to parallelize.
We always need a large number of operations per thread for parallelism to be worthwhile.
Something like the following might be a good place to start:
⌹⍤2 X
where (LargeNumber = ×/¯2↓⍴X) ∧ 2<⍴⍴X
To support general functions instead of built-ins …
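The heuristic above can be sketched in Python (the threshold and function names are illustrative assumptions, not GNU APL internals): apply a per-matrix function in parallel only when the number of rank-2 cells, ×/¯2↓⍴X, is large enough to amortize the threading cost.

```python
# Sketch: parallelize a rank-2 application (⌹⍤2-style) only when the
# number of sub-matrices -- the product of all but the last two axes --
# exceeds a break-even threshold.
from concurrent.futures import ThreadPoolExecutor
from functools import reduce
import operator

LARGE_NUMBER = 4  # hypothetical break-even point

def apply_rank2(fn, shape, cells):
    # number of rank-2 cells = ×/¯2↓⍴X
    n_cells = reduce(operator.mul, shape[:-2], 1)
    if len(shape) > 2 and n_cells >= LARGE_NUMBER:
        with ThreadPoolExecutor() as pool:
            return list(pool.map(fn, cells))
    return [fn(c) for c in cells]  # serial fallback for small workloads
```

For a shape like 1000 9 9 the condition holds and the 1000 matrices are distributed over the pool; for a single 9 9 matrix the serial path is taken.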
Hi,
maybe, maybe not.
Our earlier measurements on an 80-core machine indicated that
the way the cores are connected to the memories seems to determine
the parallel performance that can be achieved in GNU APL.
One can easily prove that …
> On Aug 26, 2016, at 1:12 PM, enz...@gmx.com wrote:
>
> finally a computer just perfect for gnuapl
>
> http://thehackernews.com/2016/08/powerful-multicore-processor.html
Now is the perfect time to invest your time and effort in improving parallel
efficiency in gnu-apl.