We shouldn't just hang - that isn't right. Can you configure OMPI with 
--enable-debug and then add "-mca plm_base_verbose 5 -mca state_base_verbose 5" 
to your command line so we can see where it is hanging?
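
In other words, after rebuilding with --enable-debug, something like this
should show how far the launch gets before it stalls (same command as below,
just with the two verbose MCA params added):

  mpiexec -np 3 -host rs0,sunpc1,linpc1 \
   -mca plm_base_verbose 5 -mca state_base_verbose 5 rank_size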


On Jan 1, 2014, at 1:48 AM, Siegmar Gross 
<siegmar.gr...@informatik.hs-fulda.de> wrote:

> In the past I could run a small program in a real heterogeneous
> system with little endian (sunpc1, linpc1) and big endian
> (rs0, tyr) machines.
> 
> tyr small_prog 101 ompi_info | grep MPI:
>                Open MPI: 1.6.6a1r29175
> tyr small_prog 102 mpiexec -np 3 -host rs0,sunpc1,linpc1 rank_size
> I'm process 1 of 3 available processes running on sunpc1.
> MPI standard 2.1 is supported.
> I'm process 0 of 3 available processes running on rs0.informatik.hs-fulda.de.
> MPI standard 2.1 is supported.
> I'm process 2 of 3 available processes running on linpc1.
> MPI standard 2.1 is supported.
> tyr small_prog 103 
> 
> 
> Now I get no output at all.
> 
> tyr small_prog 130 ompi_info | grep MPI:
>                Open MPI: 1.9a1r30100
> tyr small_prog 131 mpiexec -np 3 -host rs0,sunpc1,linpc1 rank_size
> tyr small_prog 132 mpiexec -np 3 -host rs0,sunpc1,linpc1 \
>  --hetero-nodes --hetero-apps rank_size
> tyr small_prog 133
> 
> 
> Perhaps this behaviour is intended, because Open MPI doesn't
> support little and big endian machines in the same cluster or
> virtual computer (LAM-MPI is the only MPI I know of that works
> in such an environment). On the other hand: does it make sense
> for the command to produce no output at all when it doesn't
> work (even if "mpiexec" returns "1")?
