Jody,
On Linux, you can check which processes are running on which cores in
top, but I don't think the Mac OS version allows this. The OS *will*
move processes between cores because of the time-sharing nature
of the scheduling algorithm. There are a lot more details online
about what th
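In case it is useful, here is a minimal sketch (my own illustration, not from the thread) of how each MPI rank can report the core it is currently running on. It uses the Linux-specific sched_getcpu(), so it will not build on Mac OS:

#define _GNU_SOURCE
#include <sched.h>   /* sched_getcpu() -- Linux/glibc only */
#include <stdio.h>
#include <mpi.h>

int main(int argc, char **argv)
{
    int rank;

    MPI_Init(&argc, &argv);
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);

    /* Report which core this rank is on right now; the scheduler is
       free to migrate it later unless processor affinity is enabled. */
    printf("rank %d is currently on core %d\n", rank, sched_getcpu());

    MPI_Finalize();
    return 0;
}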
Hmmm...well actually, there isn't a bug in the code. This is an
interesting question!
Here is the problem. It has to do with how -host is processed.
Remember, in the new scheme (as of 1.3.0), in the absence of any other
info (e.g., an RM allocation or hostfile), we cycle across -all- the -
Hmmm...there should be messages on both the user and devel lists
regarding binary compatibility at the MPI level being promised for
1.3.2 and beyond.
Anyway, we did make that pledge. However, as I said, I am not sure
people verified that - though I hope someone did! :-)
On Jul 15, 2009, a
Did not see any other email on the list wrt this topic.
Thanks for your response.
Jim
> -----Original Message-----
> From: users-boun...@open-mpi.org
> [mailto:users-boun...@open-mpi.org] On Behalf Of Ralph Castain
> Sent: Wednesday, July 15, 2009 4:26 PM
> To: Open MPI Users
> Subject: Re: [O
Hi OpenMPI Team,
I am trying to run a simple application that does an alltoall over an
inter-communicator, and I experience hangs when I run more than 3 processes
per node. A similar program that uses an intra-communicator completes fine with
up to 8 processes per node.
This is the error message I see
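For context, a minimal reproducer of the pattern described above could look like the sketch below. This is my own illustration, not the poster's actual code: it splits MPI_COMM_WORLD into two halves, builds an inter-communicator between them, and calls MPI_Alltoall across it. Run it with at least 2 ranks.

#include <mpi.h>
#include <stdio.h>
#include <stdlib.h>

int main(int argc, char **argv)
{
    int rank, size, color, remote_leader, remote_size, i;
    int *sendbuf, *recvbuf;
    MPI_Comm half, inter;

    MPI_Init(&argc, &argv);
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);
    MPI_Comm_size(MPI_COMM_WORLD, &size);

    /* Split COMM_WORLD into a lower and an upper half. */
    color = (rank < size / 2) ? 0 : 1;
    MPI_Comm_split(MPI_COMM_WORLD, color, rank, &half);

    /* The remote leader is the lowest COMM_WORLD rank of the other half. */
    remote_leader = (color == 0) ? size / 2 : 0;
    MPI_Intercomm_create(half, 0, MPI_COMM_WORLD, remote_leader, 99, &inter);

    /* In an inter-communicator alltoall each process exchanges
       'count' elements with every process of the remote group. */
    MPI_Comm_remote_size(inter, &remote_size);
    sendbuf = malloc(remote_size * sizeof(int));
    recvbuf = malloc(remote_size * sizeof(int));
    for (i = 0; i < remote_size; i++)
        sendbuf[i] = rank;

    MPI_Alltoall(sendbuf, 1, MPI_INT, recvbuf, 1, MPI_INT, inter);
    printf("rank %d: inter-communicator alltoall completed\n", rank);

    free(sendbuf);
    free(recvbuf);
    MPI_Comm_free(&inter);
    MPI_Comm_free(&half);
    MPI_Finalize();
    return 0;
}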
I believe that was the intent, per other emails on that subject.
However, I am not personally aware of people who have tested it - though
they may well exist.
On Wed, Jul 15, 2009 at 2:18 PM, Jim Kress wrote:
> > Does use of 1.3.3 require recompilation of applications that
> > were compiled us
> Does use of 1.3.3 require recompilation of applications that
> were compiled using 1.3.2?
> -----Original Message-----
> From: users-boun...@open-mpi.org
> [mailto:users-boun...@open-mpi.org] On Behalf Of jimkress_58
> Sent: Tuesday, July 14, 2009 3:05 PM
> To: us...@open-mpi.org
> Subject: R
On Jul 15, 2009, at 9:35 AM, Matthias Jurenz wrote:
Further information about performance analysis tools can be found at
http://www.open-mpi.org/faq/?category=perftools
Lin -
Note that this FAQ category was literally just added, partially as a
response to your questions.
So if you looked
As Lenny said, you should use the if_include parameter. Specifically,
it would look like one of the following, depending on which HCA you want to select:
-mca btl_openib_if_include mthca0
or
-mca btl_openib_if_include mthca1
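On a full command line that might look something like this (the BTL list and the executable name here are just placeholders):
mpirun -np 4 --mca btl openib,self,sm --mca btl_openib_if_include mthca0 ./a.out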
Rolf
On 07/15/09 09:33, nee...@crlindia.com wrote:
Thanks Ralph,
i foun
Okay, I'll dig into it - must be a bug in my code.
Sorry for the problem! Thanks for patience in tracking it down...
Ralph
On Wed, Jul 15, 2009 at 7:28 AM, Lenny Verkhovsky <
lenny.verkhov...@gmail.com> wrote:
> Thanks, Ralph,
> I guess your guess was correct, here is the display map.
>
>
> $cat
Dear Lin,
For a quick view of what is inside the trace, you could try 'otfprofile'
to generate a TeX/PS file with some information. This tool is a
component of the latest stand-alone version of the Open Trace Format
(OTF) - see http://www.tu-dresden.de/zih/otf/.
However, if you need more detailed
Make sure you have the Open MPI 1.3 series;
I don't think the if_include param is available in the 1.2 series.
max_btls controls fragmentation and load balancing over similar BTLs
(for example, when using LMC > 0, or 2 ports connected to 1 network);
you need the if_include param.
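One quick way to verify which series you are running (a hedged suggestion, not from the thread) is to check the version line that ompi_info prints:
ompi_info | grep "Open MPI:"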
On Wed, Jul 15, 2009 at 4:20 PM,
Thanks Ralph,
I found the MCA parameter. It is btl_openib_max_btls, which
controls the available HCAs.
Thanks for helping.
Regards
Neeraj Chourasia (MTS)
Computational Research Laboratories Ltd.
(A wholly Owned Subsidiary of TATA SONS Ltd)
B-101, ICC Trade Towers, Senapati Bapat
Thanks, Ralph,
I guess your guess was correct, here is the display map.
$cat rankfile
rank 0=+n1 slot=0
rank 1=+n0 slot=0
$cat appfile
-np 1 -host witch1 ./hello_world
-np 1 -host witch2 ./hello_world
$mpirun -np 2 -rf rankfile --display-allocation -app appfile
== ALLOCATE
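As an aside, the ./hello_world used in these examples is not shown in the thread; a typical stand-in that prints each rank and the host it landed on (handy for checking the rank-to-node mapping) could be:

#include <mpi.h>
#include <stdio.h>

int main(int argc, char **argv)
{
    int rank, len;
    char host[MPI_MAX_PROCESSOR_NAME];

    MPI_Init(&argc, &argv);
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);
    MPI_Get_processor_name(host, &len);

    /* Shows which node each rank was placed on by the rankfile/appfile. */
    printf("Hello from rank %d on %s\n", rank, host);

    MPI_Finalize();
    return 0;
}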
Take a look at the output from "ompi_info --param btl openib" and you will
see the available MCA params to direct the openib subsystem. I believe you
will find that you can indeed specify the interface.
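For example, to check for the interface-selection parameter (assuming the 1.3-series ompi_info syntax):
ompi_info --param btl openib | grep if_include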
On Wed, Jul 15, 2009 at 7:15 AM, wrote:
>
> Hi all,
>
> I have a cluster where both
Hi Robert,
Sorry if this is off-topic for the more knowledgeable here...
On 14-Jul-09, at 7:50 PM, Robert Kubrick wrote:
By setting processor affinity you can force execution of each
process on a specific core, thus limiting context switching. I know
affinity wasn't supported on MacOS last ye
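In Open MPI itself, processor affinity can be requested at launch time; a hedged example using the 1.3-era MCA parameter (the executable name is a placeholder):
mpirun -np 4 --mca mpi_paffinity_alone 1 ./a.out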
Hi all,
I have a cluster where both HCAs of a blade are active, but
connected to different subnets.
Is there an option in MPI to select one HCA out of the available
ones? I know it can be done by making changes in the Open MPI code, but I need
a clean interface, like an option during MPI launch
What is supposed to happen is this:
1. each line of the appfile causes us to create a new app_context. We store
the provided -host info in that object.
2. when we create the "allocation", we cycle through -all- the app_contexts
and add -all- of their -host info into the list of allocated nodes
3
Sorry about that - it was just an oversight in the NEWS entries for 1.3.3.
The bug fixes we discussed should all be in the new release.
Let me know if we missed any.
Ralph
On Wed, Jul 15, 2009 at 6:25 AM, Geoffroy Pignot wrote:
> Hi Lenny and Ralph,
>
> I saw nothing about rankfile in the 1.3.3
Hi Lenny and Ralph,
I saw nothing about rankfile in the 1.3.3 press release. Does it mean that
the bug fixes are not included there?
Thanks
Geoffroy
Same result.
I still suspect that the rankfile claims nodes from the small hostlist provided
by each line in the app file, and not from the hostlist provided to mpirun on the
HNP node.
According to my suspicions, your proposal should not work (and it does not),
since in the appfile line I provide np=1 and 1 host, whil
Try your "not working" example without the -H on the mpirun cmd line -
i.e., just use "mpirun -np 2 -rf rankfile -app appfile". Does that
work?
Sorry to have to keep asking you to try things - I don't have a setup
here where I can test this as everything is RM managed.
On Jul 15, 2009,
Thanks Ralph, after playing with prefixes it worked.
I still have a problem running an app file with a rankfile, providing the full
hostlist on the mpirun command line and not in the app file.
Is this planned behaviour, or can it be fixed?
See Working example:
$cat rankfile
rank 0=+n1 slot=0
rank 1=+n0 slot=0
$cat
Hi Ralph,
Thanks a lot for your effort and for giving us the freedom to choose hosts
dynamically. I am really excited to see such a great feature working in my
programs.
Again, thank you very much :)
Regards,
On Tue, Jul 14, 2009 at 8:11 PM, Ralph Castain wrote:
> Hi Vipin
> I have added support for thes