Hi,
On Monday 01 October 2007 03:08:04 am Hammad Siddiqi wrote:
> One more thing to add -mca mtl mx uses ethernet and IP emulation of
> Myrinet to my knowledge. I want to use Myrinet(not its IP Emulation)
> and shared memory simultaneously.
This is not true (as far as I know...). Open MPI has two different
components that can use MX: the mx BTL and the mx MTL. Both use the
native Myrinet MX library, not its IP emulation.
One more thing to add -mca mtl mx uses ethernet and IP emulation of
Myrinet to my knowledge. I want to use Myrinet(not its IP Emulation)
and shared memory simultaneously.
Thanks
Regards, Hammad
Hammad Siddiqi wrote:
Dear Tim,
Your and Tim Mattox's suggestions yielded the following results:
*1. /opt/SUNWhpc/HPC7.0/bin/mpirun -np 2 -mca btl mx,sm,self -host
"indus1,indus2" -mca btl_base_debug 1000 ./hello*
*2. /opt/SUNWhpc/HPC7.0/bin/mpirun -np 4 -mca btl mx,sm,self -host
"indus1,indus2,indus3,indus4" -mca btl_base_debug 1000 ./hello*
To use Tim Prins's second suggestion, you would also need to add "-mca pml cm"
to the runs with "-mca mtl mx".
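Concretely, such a run might look like the sketch below. This is an assumption on my part, reusing the hostnames and the ./hello binary from the commands earlier in the thread:

```shell
# Hypothetical sketch: select the cm PML so the mx MTL is actually used.
# Hostnames and ./hello are taken from the other commands in this thread.
/opt/SUNWhpc/HPC7.0/bin/mpirun -np 2 -mca pml cm -mca mtl mx \
    -host "indus1,indus2" ./hello
```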
On 9/29/07, Tim Prins wrote:
I would recommend trying a few things:
1. Set some debugging flags and see if that helps. So, I would try something
like:
/opt/SUNWhpc/HPC7.0/bin/mpirun -np 2 -mca btl
mx,self -host "indus1,indus2" -mca btl_base_debug 1000 ./hello
This will output information as each btl is loaded, and whether each could be used.
Hi Terry,
Thanks for replying. The following command is working fine:
/opt/SUNWhpc/HPC7.0/bin/mpirun -np 4 -mca btl tcp,sm,self -machinefile
machines ./hello
The contents of machines are:
indus1
indus2
indus3
indus4
I have tried using np=2 over pairs of machines, but the problem is the same.
Thanks
Regards, Hammad
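The ./hello binary exercised by all of these runs is not shown in the thread. A minimal MPI hello world in C (a sketch; I am assuming the program simply prints each process's rank) would look like:

```c
#include <stdio.h>
#include <mpi.h>

int main(int argc, char **argv) {
    int rank, size;
    MPI_Init(&argc, &argv);                /* start the MPI runtime */
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);  /* this process's rank */
    MPI_Comm_size(MPI_COMM_WORLD, &size);  /* total number of processes */
    printf("Hello from rank %d of %d\n", rank, size);
    MPI_Finalize();
    return 0;
}
```

Compiled, for example, with the toolkit's wrapper: /opt/SUNWhpc/HPC7.0/bin/mpicc -o hello hello.c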
Hi Hammad,
It looks to me like none of the btls could resolve a route between the
node that process rank 0 is on and the other nodes.
I would suggest trying np=2 over a couple of pairs of machines to see if
that works; then you can be sure whether only the
first node is having this problem.
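One way to carry out this suggestion is to loop over node pairs and run the test on each pair in turn. This is only a sketch; the pair list and the btl settings are assumptions borrowed from other commands in the thread:

```shell
# Hypothetical loop: test each pair of nodes separately so a bad node
# (or a bad route from indus1) can be isolated.
for pair in indus1,indus2 indus1,indus3 indus3,indus4; do
    echo "=== testing $pair ==="
    /opt/SUNWhpc/HPC7.0/bin/mpirun -np 2 -mca btl mx,sm,self \
        -host "$pair" ./hello
done
```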
Hello,
I am using Sun HPC Toolkit 7.0 to compile and run my C MPI programs.
I have tested the Myrinet installations using Myricom's own test programs.
The Myricom software stack I am using is MX, version
mx2g-1.1.7; mx_mapper is also used.
We have 4 nodes having 8 dual core processors