Hi
Today I tried a different rankfile and ran into a problem once more. :-((
> > thank you very much for your patch. I have applied the patch to
> > openmpi-1.6.4rc4.
> >
> > Open MPI: 1.6.4rc4r28022
> > : [B .][. .] (slot list 0:0)
> > : [. B][. .] (slot list 0:1)
> > : [B B][. .] (slot list 0:0-1)
On 02/07/13 01:05, Siegmar Gross wrote:
thank you very much for your patch. I have applied the patch to
openmpi-1.6.4rc4.
Open MPI: 1.6.4rc4r28022
: [B .][. .] (slot list 0:0)
: [. B][. .] (slot list 0:1)
: [B B][. .] (slot list 0:0-1)
: [. .][B .] (slot list 1:0)
: [. .][. B] (slot list 1:1)
:
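For reference, a minimal rankfile that would request bindings like the ones reported above could look roughly like the sketch below; this is not Siegmar's actual rf_ex_sunpc, the hostname (sunpc1) and file name (rf_test) are only placeholders, and the slot=socket:core syntax follows the complete example later in this thread:

  rank 0=sunpc1 slot=0:0
  rank 1=sunpc1 slot=0:1
  rank 2=sunpc1 slot=0:0-1
  rank 3=sunpc1 slot=1:0
  rank 4=sunpc1 slot=1:1

It would be launched with something like

  mpiexec -report-bindings -rf rf_test -np 5 hostname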
Hi
thank you very much for your patch. I have applied the patch to
openmpi-1.6.4rc4.
> > thank you very much for your answer. I have compiled your program
> > and get different behaviours for openmpi-1.6.4rc3 and openmpi-1.9.
>
> Yes, something else seems to be going on for 1.9.
>
> For 1.6, tr
On 02/06/13 04:29, Siegmar Gross wrote:
Hi
thank you very much for your answer. I have compiled your program
and get different behaviours for openmpi-1.6.4rc3 and openmpi-1.9.
Yes, something else seems to be going on for 1.9.
For 1.6, try the attached patch. It works for me, but my machines
Hi
thank you very much for your answer. I have compiled your program
and get different behaviours for openmpi-1.6.4rc3 and openmpi-1.9.
> On 02/05/13 00:30, Siegmar Gross wrote:
> >
> > now I can use all our machines once more. I have a problem on
> > Solaris 10 x86_64, because the mapping of pr
On Feb 5, 2013, at 2:18 PM, Eugene Loh wrote:
> Sorry for the dumb question, but who maintains this code? OMPI, or upstream
> in the hwloc project? Where should the fix be made?
The version of hwloc in the v1.6 series is frozen at a somewhat-older version
of hwloc (1.3.2, which was the end o
On 02/05/13 13:20, Eugene Loh wrote:
On 02/05/13 00:30, Siegmar Gross wrote:
now I can use all our machines once more. I have a problem on
Solaris 10 x86_64, because the mapping of processes doesn't
correspond to the rankfile.
A few comments.
First of all, the heterogeneous environment had n
On 02/05/13 00:30, Siegmar Gross wrote:
now I can use all our machines once more. I have a problem on
Solaris 10 x86_64, because the mapping of processes doesn't
correspond to the rankfile. I removed the output from "hostfile"
and wrapped long lines.
tyr rankfiles 114 cat rf_ex_sunpc
# m
Siegmar --
We've been talking about this offline. Can you send us an lstopo output from
your Solaris machine? Send us the text output and the xml output, e.g.:
lstopo > solaris.txt
lstopo solaris.xml
Thanks!
On Feb 5, 2013, at 12:30 AM, Siegmar Gross wrote:
> Hi
>
> now I can use all ou
Hi
now I can use all our machines once more. I have a problem on
Solaris 10 x86_64, because the mapping of processes doesn't
correspond to the rankfile. I removed the output from "hostfile"
and wrapped long lines.
tyr rankfiles 114 cat rf_ex_sunpc
# mpiexec -report-bindings -rf rf_ex_sunpc
On Jan 31, 2013, at 12:39 PM, Siegmar Gross wrote:
> Hi
>
>> Hmmm... well, it certainly works for me:
>>
>> [rhc@odin ~/v1.6]$ cat rf
>> rank 0=odin093 slot=0:0-1,1:0-1
>> rank 1=odin094 slot=0:0-1
>> rank 2=odin094 slot=1:0
>> rank 3=odin094 slot=1:1
>>
>>
>> [rhc@odin ~/v1.6]$ mpirun -n 4
Hmmm... well, it certainly works for me:
[rhc@odin ~/v1.6]$ cat rf
rank 0=odin093 slot=0:0-1,1:0-1
rank 1=odin094 slot=0:0-1
rank 2=odin094 slot=1:0
rank 3=odin094 slot=1:1
[rhc@odin ~/v1.6]$ mpirun -n 4 -rf ./rf --report-bindings -mca opal_paffinity_alone 0 hostname
[odin093.cs.indiana.edu:046
Hi
I applied your patch "rmaps.diff" to openmpi-1.6.4rc3r27923 and
it works for my previous rankfile.
> #3493: Handle the case where rankfile provides the allocation
> ---+
> Reporter: rhc | Owner: jsquyres
>
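For completeness, applying a patch such as rmaps.diff to the Open MPI source tree generally looks something like the sketch below; the paths, install prefix, and -p strip level are assumptions (they depend on how the diff was generated), and on Solaris the GNU patch utility may be installed as gpatch:

  cd openmpi-1.6.4rc3r27923
  patch -p0 < /path/to/rmaps.diff   # or -p1 if the diff paths include a leading directory
  ./configure --prefix=$HOME/openmpi-1.6.4 && make && make install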