Hi Ralph,
I did try the corresponding oob_* parameters (sorry for not mentioning that).
Here's what I tried:
mpirun --host host1,host2 -np 2 --mca btl_tcp_port_min_v4 1 --mca
btl_tcp_port_range_v4 10 --mca oob_tcp_port_min_v4 1 --mca
oob_tcp_port_range_v4 10 sleep 100
In another wind
David Mathog wrote:
Also, in my limited testing --host and -hostfile seem to be mutually
exclusive.
No. You can use both together. Indeed, the mpirun man page even has
examples of this (though personally, I don't see a use for it). I think
the idea was you might use a hostfile to
mpirun is not an MPI process, and so it doesn't obey the btl port params. To
control mpirun's ports (and those used by the ORTE daemons), use the
oob_tcp_port... params
On Dec 10, 2010, at 3:29 PM, Tang, Hsiu-Khuern wrote:
>
> Hi,
>
> I am trying to understand how to control the range of por
Terry is correct - not guaranteed, but that is the typical behavior.
However, you -can- guarantee that rank=0 will be on a particular host. Just run
your job:
mpirun -n 1 -host <hostname> my_app : -n (N-1) my_app
This guarantees that rank=0 is on host <hostname>. All other ranks will be
distributed according to t
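For illustration, here is a minimal sketch (my own example, not from the thread; it assumes a standard MPI C environment) that prints which host each rank landed on, so you can verify the rank 0 placement:

#include <mpi.h>
#include <stdio.h>

int main(int argc, char *argv[])
{
    int rank, size, len;
    char name[MPI_MAX_PROCESSOR_NAME];

    MPI_Init(&argc, &argv);
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);
    MPI_Comm_size(MPI_COMM_WORLD, &size);
    MPI_Get_processor_name(name, &len);

    /* Each rank reports its host so the placement of rank 0 can be checked. */
    printf("rank %d of %d running on %s\n", rank, size, name);

    MPI_Finalize();
    return 0;
}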
Hi,
I am trying to understand how to control the range of ports used by Open MPI.
I tried setting the parameters btl_tcp_port_min_v4 and btl_tcp_port_range_v4,
but they don't seem to have an effect.
I am using Open MPI 1.4.2 from Debian sid, but get the same result on RHEL5.
When I run a progra
On 12/10/2010 03:24 PM, David Mathog wrote:
Ashley Pittman wrote:
For a much simpler approach you could also use these two environment
variables; this is on my current system, which is 1.5-based, so YMMV of course.
OMPI_COMM_WORLD_LOCAL_RANK
OMPI_COMM_WORLD_LOCAL_SIZE
However that doesn't really
Sorry - guess I had misunderstood. Yes, if all you want is the local rank of
your own process, then this will work.
My suggestion was for the case where you want the list of local procs, or need to
know the local rank of your peers.
On Dec 10, 2010, at 1:24 PM, David Mathog wrote:
> Ashley Pittman wrote:
>
>>
Ashley Pittman wrote:
> For a much simpler approach you could also use these two environment
> variables; this is on my current system, which is 1.5-based, so YMMV of course.
>
> OMPI_COMM_WORLD_LOCAL_RANK
> OMPI_COMM_WORLD_LOCAL_SIZE
That is simpler. It works on OMPI 1.4.3 too:
cat >/usr/common/bin
On 12/10/2010 01:46 PM, David Mathog wrote:
The master is commonly very different from the workers, so I expected
there would be something like
--rank0-on <hostname>
but there doesn't seem to be a single switch on mpirun to do that.
If "mastermachine" is the first entry in the hostfile, or the first
m
The master is commonly very different from the workers, so I expected
there would be something like
--rank0-on <hostname>
but there doesn't seem to be a single switch on mpirun to do that.
If "mastermachine" is the first entry in the hostfile, or the first
machine in a -hosts list, will rank 0 always ru
For a much simpler approach you could also use these two environment variables;
this is on my current system, which is 1.5-based, so YMMV of course.
OMPI_COMM_WORLD_LOCAL_RANK
OMPI_COMM_WORLD_LOCAL_SIZE
Actually orte seems to set both OMPI_COMM_WORLD_LOCAL_RANK and
OMPI_COMM_WORLD_NODE_RANK, I can
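As a rough sketch (my own example, not from the thread), a process can read these launcher-provided variables with getenv(); since they are Open MPI specific and not part of the MPI standard, it is worth guarding against them being unset:

#include <mpi.h>
#include <stdio.h>
#include <stdlib.h>

int main(int argc, char *argv[])
{
    int rank;
    const char *lrank, *lsize;

    MPI_Init(&argc, &argv);
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);

    /* Set by the Open MPI launcher; not defined by the MPI standard,
     * so other implementations may not provide them. */
    lrank = getenv("OMPI_COMM_WORLD_LOCAL_RANK");
    lsize = getenv("OMPI_COMM_WORLD_LOCAL_SIZE");

    printf("rank %d: local rank %s of %s on this node\n",
           rank, lrank ? lrank : "?", lsize ? lsize : "?");

    MPI_Finalize();
    return 0;
}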
Hi David
For what it is worth, the method suggested by
Terry Dontje and Richard Treumann is what is used in several
generations of the coupled climate models that we've been running for the
past 8+ years.
The goals are slightly different from yours:
they cut across logical boundaries
(i.e. who's at
There are no race conditions in this data. It is determined by mpirun prior to
launch, so all procs receive the data during MPI_Init and it remains static
throughout the life of the job. It isn't dynamically updated at this time (will
change in later versions), so it won't tell you if a process
> The answer is yes - sort of...
>
> In OpenMPI, every process has information about not only its own local
> rank, but the local rank of all its peers regardless of what node they
> are on. We use that info internally for a variety of things.
>
> Now the "sort of". That info isn't exposed via an MPI
During 'make check' the following result is obtained:
Testing atomic_cmpset
 - 1 threads: Passed
Assertion failed: ((5 * nthreads * nreps) == val32), function main, file atomic_cmpset.c, line 280.
./run_tests: line 8: 37634 Abort trap $* $threads
 - 2 threads: Failed
 - 4 thre
On 12/10/2010 07:55 AM, Ralph Castain wrote:
Ick - I agree that's portable, but truly ugly.
Would it make sense to implement this as an MPI extension, and then
perhaps propose something to the Forum for this purpose?
I think that makes sense. As core and socket counts go up, I imagine the
n
Terry Dontje wrote:
On 12/10/2010 09:19 AM, Richard Treumann wrote:
It seems to me the MPI_Get_processor_name
description is too ambiguous to make this 100% portable. I assume most
MPI implementations simply use the hostname so all processes on the
same host will return the same string.
On 12/10/2010 09:19 AM, Richard Treumann wrote:
It seems to me the MPI_Get_processor_name description is too ambiguous
to make this 100% portable. I assume most MPI implementations simply
use the hostname so all processes on the same host will return the
same string. The suggestion would wor
Hello Shiqing,
thank you very much for your reply. I want a working MPI implementation on my
notebook. At the moment I still use LAM-MPI on Cygwin on Windows XP Professional
SP3. Unfortunately LAM-MPI is no longer supported, so I am
looking for a replacement. I use MPI for my course
It seems to me the MPI_Get_processor_name description is too ambiguous to
make this 100% portable. I assume most MPI implementations simply use the
hostname so all processes on the same host will return the same string.
The suggestion would work then.
However, it would also be reasonable for a
Ick - I agree that's portable, but truly ugly.
Would it make sense to implement this as an MPI extension, and then perhaps
propose something to the Forum for this purpose?
Just hate to see such a complex, time-consuming method when the info is already
available on every process.
On Dec 10, 201
A more portable way of doing what you want below is to gather each
process's processor name, as given by MPI_Get_processor_name, have the root
that gets this data assign unique numbers to each name, and then scatter
that info to the processes and have them use that as the color to a
MPI_Comm_split ca
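A rough, untested sketch of that approach (my own code; the original message is truncated, so details such as using the world rank as the split key are my own choices):

#include <mpi.h>
#include <stdlib.h>
#include <string.h>

int main(int argc, char *argv[])
{
    int rank, size, len, color;
    char name[MPI_MAX_PROCESSOR_NAME];
    char *all = NULL;
    int *colors = NULL;
    MPI_Comm node_comm;

    MPI_Init(&argc, &argv);
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);
    MPI_Comm_size(MPI_COMM_WORLD, &size);

    memset(name, 0, sizeof(name));
    MPI_Get_processor_name(name, &len);

    if (rank == 0) {
        all = malloc((size_t)size * MPI_MAX_PROCESSOR_NAME);
        colors = malloc((size_t)size * sizeof(int));
    }

    /* Gather every rank's processor name on the root. */
    MPI_Gather(name, MPI_MAX_PROCESSOR_NAME, MPI_CHAR,
               all, MPI_MAX_PROCESSOR_NAME, MPI_CHAR, 0, MPI_COMM_WORLD);

    if (rank == 0) {
        /* Assign the same number to ranks with identical names:
         * the first rank seen with a given name defines its color. */
        for (int i = 0; i < size; i++) {
            colors[i] = i;
            for (int j = 0; j < i; j++) {
                if (strcmp(all + i * MPI_MAX_PROCESSOR_NAME,
                           all + j * MPI_MAX_PROCESSOR_NAME) == 0) {
                    colors[i] = colors[j];
                    break;
                }
            }
        }
    }

    /* Scatter each rank's color and split into per-node communicators;
     * the rank within node_comm is then the "local rank" on that node. */
    MPI_Scatter(colors, 1, MPI_INT, &color, 1, MPI_INT, 0, MPI_COMM_WORLD);
    MPI_Comm_split(MPI_COMM_WORLD, color, rank, &node_comm);

    MPI_Comm_free(&node_comm);
    if (rank == 0) { free(all); free(colors); }
    MPI_Finalize();
    return 0;
}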
Hello Siegmar,
Do you have to use an Open MPI that is built with Cygwin? If not, you can
simply use CMake and Visual Studio to compile it. Please refer to the
README.WINDOWS file in the main directory.
Regards,
Shiqing
On 12/9/2010 4:24 PM, Siegmar Gross wrote:
Hi,
I know that you don't try to