Hi Elias,
could you please send the output of ./configure (and maybe config.log)?
Seems like something is wrong with the thread synchronization.
/// Jürgen
On 10/24/2014 07:16 AM, Elias Mårtenson wrote:
OK, I started some tests on my 80-core machine. At first I decided to run
the exact same thing as what you ran above.
As you can see, before I set the dyadic threshold, I got the expected
results. After setting it, the same command hangs with 200% CPU usage. At
the time I'm writing this mail, it's
Hi Elias,
if you used a recent SVN, then you need to set the thresholds (vector size) above which parallel execution is performed (see also the ⎕SYL sketch after the timings below):
      (⍳4) ∘.time 10⋆⍳7
0 0 1 3 29 254 2593
0 0 1 2 25 252 2618
0 0 1 2 26 258 2682
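A minimal sketch of where these settings live, assuming only what this thread itself shows: the core count is assigned through ⎕SYL row 26 (as in Elias' timing function below). The row numbers for the monadic/dyadic thresholds, and whether they are exposed through ⎕SYL at all in this SVN revision, should be checked by listing ⎕SYL first:

      ⎕SYL             ⍝ two-column matrix: setting name, current value
      ⎕SYL[26;2] ← 4   ⍝ e.g. allow 4 cores for parallel scalar functions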
I've tested this code, and I don't see much of an improvement as I increase the core count:
Given the following function:
∇Z ← NCPU time LEN;T;X;tmp
⎕SYL[26;2] ← NCPU                ⍝ use NCPU cores for parallel execution
X ← LEN⍴2J2                      ⍝ complex vector of length LEN
T ← ⎕TS                          ⍝ start timestamp
tmp ← X⋆X                        ⍝ the scalar function being timed
Z ← 1 1 1 24 60 60 1000⊥⎕TS - T  ⍝ elapsed time in milliseconds
∇
I'm running this command on m
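(For reference, the timing matrix earlier in the thread was presumably produced by driving this function with an outer product; a sketch of that call shape, echoing the expression shown above rather than Elias' exact session:)

      (⍳4) ∘.time 10⋆⍳7   ⍝ rows: 1..4 cores, columns: lengths 10 .. 10⋆7, entries in ms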
Thanks, I have merged the necessary changes.
Regards,
Elias
On 22 September 2014 23:50, Juergen Sauermann wrote:
> Hi,
>
> I have finished a first shot at parallel (i.e. multicore) GNU APL: SVN 480.
>
> This version computes all scalar functions in parallel if the ravel length
> of the result
Hi,
I have finished a first shot at parallel (i.e. multicore) GNU APL: SVN 480.
This version computes all scalar functions in parallel if the ravel length of the result exceeds 100.
This can make the computation of small (but still > 100)
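To illustrate the 100-element threshold mentioned above (a sketch added here, not part of the original announcement):

      2 + 3              ⍝ ravel length 1: evaluated sequentially
      (⍳50) × ⍳50        ⍝ ravel length 50: below the threshold, sequential
      (⍳1000) × ⍳1000    ⍝ ravel length 1000: eligible for parallel execution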