Hi,
yesterday I installed openmpi-1.7.4rc2r30323 on our machines
("Solaris 10 x86_64", "Solaris 10 Sparc", and "openSUSE Linux
12.1 x86_64" with Sun C 5.12). My rankfile "rf_linpc_sunpc_tyr"
contains the following lines.
rank 0=linpc0 slot=0:0-1;1:0-1
rank 1=linpc1 slot=0:0-1
rank 2=sunpc1 slot=1
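For reference, a sketch of how I launch with that rankfile (the executable
name ./hello_c is only a placeholder):

  mpiexec --report-bindings -rf rf_linpc_sunpc_tyr -np 3 ./hello_c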
Well, this is a little strange. The hanging behavior is gone, but I'm getting a
segfault now. The outputs of "hello_c.c" and "ring_c.c" are attached.
I'm getting a segfault with the Fortran test as well. I'm afraid I may have
polluted the experiment by removing the target openmpi-1.6.5 installation.
Hard to know how to address all that, Siegmar, but I'll give it a shot. See
below.
On Jan 22, 2014, at 5:34 AM, Siegmar Gross wrote:
> Hi,
>
> yesterday I installed openmpi-1.7.4rc2r30323 on our machines
> ("Solaris 10 x86_64", "Solaris 10 Sparc", and "openSUSE Linux
> 12.1 x86_64" with Sun C
Hi Ralph, I want to ask you one more thing, about the default setting of
num_procs when we don't specify the -np option and set cpus-per-proc > 1.
In this case, the round_robin_mapper sets num_procs = num_slots as below:
rmaps_rr.c (around line 130):

  if (0 == app->num_procs) {
      /* set the num_procs to equal the number of slots */
      app->num_procs = num_slots;
  }
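To make the scenario concrete, a sketch (the machinefile name "hosts" and
its slot count are hypothetical): if "hosts" provides 8 slots and we run

  mpiexec -machinefile hosts -cpus-per-proc 2 ./hello_c

without -np, the code above sets num_procs to 8, i.e. to num_slots.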
Seems like a reasonable, minimal risk request - will do
On Jan 22, 2014, at 4:28 PM, tmish...@jcity.maeda.co.jp wrote:
>
> Hi Ralph, I want to ask you one more thing, about the default setting of
> num_procs when we don't specify the -np option and set cpus-per-proc > 1.
>
> In this case, the round_robin_mapper sets num_procs = num_slots as below.
Thanks, Ralph.
I have one more question. I'm sorry to ask you so many things ...
Could you tell me the difference between "map-by slot" and "map-by core"?
From my understanding, a slot is a synonym for a core, but their behaviors
in openmpi-1.7.4rc2 with the cpus-per-proc option are quite different.
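Concretely, I am comparing invocations like these (the process and cpu
counts are only an example):

  mpiexec -np 4 -cpus-per-proc 2 --map-by slot ./hello_c
  mpiexec -np 4 -cpus-per-proc 2 --map-by core ./hello_c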