Forgive me, but I am now fully confused - case 1 and case 3 appear identical to
me, except for the debug-daemons flag on case 3.
On Jul 15, 2014, at 7:56 AM, Ricardo Fernández-Perea wrote:
> What I mean by "another MPI process":
> I have 4 nodes, each already running an MPI process that was started with
> mpirun from the control node [...]
Hi Na Zhang,
It seems likely that on your Open MPI 1.8.1 run you have the 2 ranks
running on one host, whereas in the 1.6.5 results they are running on 2
hosts. You should be able to verify that by running top on one of the nodes
during the 1.8.1 runs and seeing whether you have 2 or 0 osu_latency
processes there.
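For what it's worth, a quick way to check placement without top is to ask
mpirun to report it, and to force one rank per node explicitly. This is only
a sketch: --report-bindings and --map-by node are standard Open MPI 1.8
options, but node1/node2 are placeholder host names, not from this thread:

mpirun -np 2 --map-by node --report-bindings -host node1,node2 ./osu_latency

With --map-by node the two ranks land on different hosts even when both
would otherwise fit on one.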
Hi,
I'm new to MPI and Open MPI (all in the same day), and it's testing me. My
plan is to compare Open MPI against shared memory for ad-hoc inter-process
channels on the same machine.
I've created a couple of small examples of a publisher and subscriber using
MPI_Comm_accept and MPI_Comm_connect. (Ubuntu 1 [...]
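For readers following along, the accept/connect pattern looks roughly like
the sketch below. This is a minimal illustration, not the poster's code; the
file name, the single-message payload, and passing the port string on the
command line are all assumptions:

/* chan.c - minimal MPI_Comm_accept / MPI_Comm_connect sketch.
 * Build:  mpicc chan.c -o chan
 * Run the server first, then hand its printed port string to the client:
 *   mpirun -np 1 ./chan server
 *   mpirun -np 1 ./chan client '<port string printed by the server>'
 */
#include <mpi.h>
#include <stdio.h>
#include <string.h>

int main(int argc, char **argv) {
    char port[MPI_MAX_PORT_NAME];
    MPI_Comm inter;                    /* intercommunicator to the peer  */
    int msg = 42;

    MPI_Init(&argc, &argv);
    if (argc > 1 && strcmp(argv[1], "server") == 0) {
        MPI_Open_port(MPI_INFO_NULL, port);       /* obtain a port string */
        printf("port: %s\n", port);
        fflush(stdout);
        MPI_Comm_accept(port, MPI_INFO_NULL, 0, MPI_COMM_SELF, &inter);
        MPI_Send(&msg, 1, MPI_INT, 0, 0, inter);  /* "publish" one value  */
        MPI_Comm_disconnect(&inter);
        MPI_Close_port(port);
    } else if (argc > 2 && strcmp(argv[1], "client") == 0) {
        MPI_Comm_connect(argv[2], MPI_INFO_NULL, 0, MPI_COMM_SELF, &inter);
        MPI_Recv(&msg, 1, MPI_INT, 0, 0, inter, MPI_STATUS_IGNORE);
        printf("received: %d\n", msg);            /* "subscribe" side     */
        MPI_Comm_disconnect(&inter);
    }
    MPI_Finalize();
    return 0;
}

In practice the port string is often exchanged via
MPI_Publish_name/MPI_Lookup_name instead of being copied by hand.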
Forgot to add the config info about the tests in my previous post.
We have FDR IB, two nodes (HP DL380 G8 servers) communicating through a
Mellanox switch, RDMA mode, hyperthreading enabled.
Thanks,
Na Zhang
On Tue, Jul 15, 2014 at 12:00 PM, Na Zhang wrote:
> Dear developers and users,
>
> I am trying to run OSU benchmark tests (like osu_latency, osu_bw, etc)
> with Open MPI. [...]
Dear developers and users,
I am trying to run OSU benchmark tests (like osu_latency, osu_bw, etc)
with Open MPI. While I was able to run the tests with both versions
(1.6.5, 1.8.1, same default build), I got disparate performance results.
Please see the data below. I wonder what might be causing this difference.
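As an aside for anyone reproducing this kind of comparison: it helps to
invoke each installation's own mpirun explicitly so the two builds cannot
get mixed up. The install prefixes and host names below are illustrative,
not taken from the original post:

/opt/openmpi-1.6.5/bin/mpirun -np 2 -host node1,node2 ./osu_latency
/opt/openmpi-1.8.1/bin/mpirun -np 2 -host node1,node2 ./osu_latency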
What I mean by "another MPI process":
I have 4 nodes, each already running an MPI process that was started with
mpirun from the control node. When I run the command against any of those
nodes it executes, but when I run it against any other node it fails if the
no_tree_spawn flag [...]
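Background that may help here: by default, Open MPI's rsh/ssh launcher uses
a tree-based spawn, so the compute nodes themselves must be able to ssh to
one another without a password, not just be reachable from the control
node. Setting --mca plm_rsh_no_tree_spawn 1 makes mpirun launch every
daemon directly instead. A quick hand test of node-to-node ssh (host names
taken from the output in this thread) would be:

ssh nexus16 ssh nexus17 hostname

If that prompts for a password or hangs, tree spawn will fail while the
no_tree_spawn run succeeds.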
I'm afraid I don't understand your comment about "another MPI process".
Looking at your output, it would appear that there is something going on
with host nexus17. In both cases, mpirun is launching a single daemon onto
only one other node - the only difference was in the node being used. [...]
I have tried it: if another MPI process is already running on the node, the
process runs.
$ricardo$ /opt/openmpi/bin/mpirun --mca plm_rsh_no_tree_spawn 1 -mca
plm_base_verbose 10 -host nexus16 ompi_info
[nexus10.nlroc:27397] mca: base: components_register: registering plm
components
[nexus10.nlroc:27397]