... have the CPU bindings shown as well:

* If using "--report-bindings --bind-to-core" with OpenMPI 1.4.1, then the
  bindings on just the head node are shown. In 1.6.1, full bindings across
  all hosts are shown. (I'd have to read the release notes on this...)

--john
-----Original Message-----
From: users [mailto:users-boun...@open-mpi.org] On Behalf Of Ralph Castain
Sent: Thursday, March 27, 2014 7:01 PM
To: Open MPI Users
Subject: Re: [OMPI users] Mapping ranks to hosts (from MPI error messages)

Oooh...it's Jeff's fault!

FWIW, you can get even more detailed mapping info with --display-devel-map.

Sent from my iPhone

Yes, that is correct.
Ralph
On Thu, Mar 27, 2014 at 4:15 PM, Gus Correa wrote:
> On 03/27/2014 05:58 PM, Jeff Squyres (jsquyres) wrote:
>> On Mar 27, 2014, at 4:06 PM, "Sasso, John (GE Power & Water, Non-GE)" wrote:
>>> Yes, I noticed that I could not find --display-map in any of the man pages.
On 27.03.2014 at 23:59, Dave Love wrote:
> Reuti writes:
>
>> Do all of them have an internal bookkeeping of granted cores to slots
>> - i.e. not only the number of scheduled slots per job per node, but
>> also which core was granted to which job? Whether Open MPI reads this
>> information would be the next question, then.
Nah ...
John: As far as I ...
Reuti writes:

> Do all of them have an internal bookkeeping of granted cores to slots
> - i.e. not only the number of scheduled slots per job per node, but
> also which core was granted to which job? Whether Open MPI reads this
> information would be the next question, then.

OMPI works with the bindings ...
On Mar 27, 2014, at 4:06 PM, "Sasso, John (GE Power & Water, Non-GE)" wrote:
> Yes, I noticed that I could not find --display-map in any of the man pages.
> Intentional?
Oops; nope. I'll ask Ralph to add it...
--
Jeff Squyres
jsquy...@cisco.com
For corporate legal information go to:
http:
Would the GE manager buy that? :)

I hope this helps,
Gus Correa

-----Original Message-----
From: users [mailto:users-boun...@open-mpi.org] On Behalf Of Gus Correa
Sent: Thursday, March 27, 2014 2:06 PM
To: Open MPI Users
Subject: Re: [OMPI users] Mapping ranks to hosts (from MPI error messages)
Yes, I noticed that I could not find --display-map in any of the man pages.
Intentional?
-----Original Message-----
From: users [mailto:users-boun...@open-mpi.org] On Behalf Of Gus Correa
Sent: Thursday, March 27, 2014 3:26 PM
To: Open MPI Users
Subject: Re: [OMPI users] Mapping ranks to hosts
On 03/27/2014 03:02 PM, Ralph Castain wrote:
> Or use --display-map to see the process to node assignments

Aha! That one was not on my radar,
maybe because somehow I can't find it in the
OMPI 1.6.5 mpiexec man page.
However, it seems to work with that version also, which is great.
(--display-map ...
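For instance (the process count and program name here are placeholders, not
from the thread): running "mpiexec --display-map -np 4 ./a.out" prints the
process-to-node map as the job launches, even where the man page omits the
flag.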
Thank you! That also works and is very helpful.
-----Original Message-----
From: users [mailto:users-boun...@open-mpi.org] On Behalf Of Ralph Castain
Sent: Thursday, March 27, 2014 3:03 PM
To: Open MPI Users
Subject: Re: [OMPI users] Mapping ranks to hosts (from MPI error messages)
Or use --display-map to see the process to node assignments
Or use --display-map to see the process to node assignments
Sent from my iPhone
> On Mar 27, 2014, at 11:47 AM, Gus Correa wrote:
>
> PS - The (OMPI 1.6.5) mpiexec default is -bind-to-none,
> in which case -report-bindings won't report anything.
PS - The (OMPI 1.6.5) mpiexec default is -bind-to-none,
in which case -report-bindings won't report anything.
So, if you are using the default,
you can apply Joe Landman's suggestion
(or alternatively use the MPI_Get_processor_name function,
in lieu of uname(&uts); cpu_name = uts.nodename; ).
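A minimal sketch of that suggestion (untested; the output format is
illustrative, not from this thread): each rank looks up its own host with
MPI_Get_processor_name, so the rank number in an abort message can be
matched to a node.

    /* Sketch (untested): print each rank's host at startup, using
       MPI_Get_processor_name instead of uname()/uts.nodename. */
    #include <mpi.h>
    #include <stdio.h>

    int main(int argc, char **argv)
    {
        int rank, len;
        char cpu_name[MPI_MAX_PROCESSOR_NAME];

        MPI_Init(&argc, &argv);
        MPI_Comm_rank(MPI_COMM_WORLD, &rank);
        MPI_Get_processor_name(cpu_name, &len);

        /* A later "MPI_ABORT was invoked on rank N" message can then
           be matched against this printed rank-to-host map. */
        printf("rank %d runs on host %s\n", rank, cpu_name);

        MPI_Finalize();
        return 0;
    }

Unlike uname(), MPI_Get_processor_name is part of the MPI standard, so the
same code should work under any MPI implementation.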
Hi John

Take a look at the mpiexec/mpirun options:
-report-bindings (this one should report what you want)
and maybe also:
-bycore, -bysocket, -bind-to-core, -bind-to-socket, ...
and similar, if you want more control over where your MPI processes run.
"man mpiexec" is your friend!

I hope this helps,
Gus Correa
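For example (again, the process count and program name are placeholders):
"mpiexec -np 4 -bind-to-core -report-bindings ./a.out" binds each rank to a
core and reports the resulting bindings as the job starts.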
On 03/27/2014 01:53 PM, Sasso, John (GE Power & Water, Non-GE) wrote:

When a piece of software built against OpenMPI fails, I will see an error
referring to the rank of the MPI task which incurred the failure. For example:

MPI_ABORT was invoked on rank 1236 in communicator MPI_COMM_WORLD
with errorcode 1.

Unfortunately, I do not have access to the software code, ...