Re: [OMPI users] Open MPI performance on Amazon Cloud

2010-03-12 Thread Hammad Siddiqi
Dear All, Is this the correct forum for sending this kind of email? Please let me know if there is some other mailing list. Thank you. Best Regards, Hammad Siddiqi, System Administrator, Centre for High Performance Scientific Computing, School of Electrical Engineering and Computer Science, National

[OMPI users] Open MPI performance on Amazon Cloud

2010-02-27 Thread Hammad Siddiqi
://hpc.seecs.nust.edu.pk/~hammad/OpenMPI,Latency-Bandwidth.jpg Please have a look at them. Is anyone else facing the same problem? Any guidance in this regard will be highly appreciated. Thank you. -- Best Regards, Hammad Siddiqi System Administrator, Centre for High Performance Scientific
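Latency/bandwidth figures like the ones in the linked graph are usually produced with a ping-pong microbenchmark between two ranks. Below is a minimal, generic sketch of such a benchmark, assuming a working Open MPI installation with `mpicc` and `mpirun` on the PATH; this is an illustration only, not the code behind the linked graph:

```shell
# Write a tiny two-rank ping-pong latency benchmark (generic sketch).
cat > pingpong.c <<'EOF'
#include <mpi.h>
#include <stdio.h>

#define ITERS 1000

int main(int argc, char **argv) {
    int rank, i;
    char byte = 0;
    double t0, t1;

    MPI_Init(&argc, &argv);
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);

    t0 = MPI_Wtime();
    for (i = 0; i < ITERS; i++) {
        if (rank == 0) {
            /* Rank 0 sends first, then waits for the echo. */
            MPI_Send(&byte, 1, MPI_CHAR, 1, 0, MPI_COMM_WORLD);
            MPI_Recv(&byte, 1, MPI_CHAR, 1, 0, MPI_COMM_WORLD,
                     MPI_STATUS_IGNORE);
        } else if (rank == 1) {
            /* Rank 1 echoes every message straight back. */
            MPI_Recv(&byte, 1, MPI_CHAR, 0, 0, MPI_COMM_WORLD,
                     MPI_STATUS_IGNORE);
            MPI_Send(&byte, 1, MPI_CHAR, 0, 0, MPI_COMM_WORLD);
        }
    }
    t1 = MPI_Wtime();

    if (rank == 0)
        /* Each iteration is a round trip, i.e. two one-way hops. */
        printf("avg one-way latency: %g us\n",
               (t1 - t0) / (2.0 * ITERS) * 1e6);

    MPI_Finalize();
    return 0;
}
EOF

mpicc -o pingpong pingpong.c
mpirun -np 2 ./pingpong
```

Running the two ranks on different hosts (e.g. with `-host nodeA,nodeB`) measures the interconnect; on one host it measures the shared-memory path.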

Re: [OMPI users] OpenMPI Giving problems when using -mca btl mx, sm, self

2007-10-01 Thread Hammad Siddiqi
One more thing to add: -mca mtl mx uses Ethernet and IP emulation of Myrinet, to my knowledge. I want to use Myrinet (not its IP emulation) and shared memory simultaneously. Thanks. Regards, Hammad. Hammad Siddiqi wrote: Dear Tim, Your and Tim Mattox's suggestions yielded the following results

Re: [OMPI users] OpenMPI Giving problems when using -mca btl mx, sm, self

2007-10-01 Thread Hammad Siddiqi
for an indefinite time. *2.4 /opt/SUNWhpc/HPC7.0/bin/mpirun -np 8 -mca mtl mx,sm,self -host "indus1,indus2,indus3,indus4" -mca pml cm -mca mtl_base_debug 1000 ./hello* This command hangs the machines for an indefinite time. Please note that running more than four MPI processes hangs t
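One likely source of trouble in the command above: in Open MPI, `sm` and `self` are BTL components, not MTLs, so they cannot be selected via `-mca mtl`; and the `mx` MTL (paired with the `cm` PML) speaks the native MX library and handles all traffic itself, including on-node traffic. A hedged sketch of the two usual ways to combine native Myrinet MX with shared memory, reusing the install path and hostnames from this thread:

```shell
# Option 1: native MX through the MX BTL, with the shared-memory and
# self BTLs handling on-node traffic (default ob1 PML).
/opt/SUNWhpc/HPC7.0/bin/mpirun -np 8 \
    -host indus1,indus2,indus3,indus4 \
    -mca btl mx,sm,self ./hello

# Option 2: native MX through the MX MTL with the cm PML; the MTL
# carries on-node traffic too, so sm/self are not (and cannot be) listed.
/opt/SUNWhpc/HPC7.0/bin/mpirun -np 8 \
    -host indus1,indus2,indus3,indus4 \
    -mca pml cm -mca mtl mx ./hello
```

Neither form gives IP emulation; IP-over-Myrinet would only come into play if traffic fell back to the TCP BTL over the Myrinet IP interface.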

Re: [OMPI users] OpenMPI Giving problems when using -mca btl mx, sm, self

2007-09-29 Thread Hammad Siddiqi
only using Myrinet as the interconnect. One more thing: I cannot start more than 4 processes in this case; the mpirun process hangs. Any suggestions? Once again, thanks for your help. Regards, Hammad. Terry Dontje wrote: Hi Hammad, It looks to me like none of the BTLs could resolve a r

[OMPI users] OpenMPI Giving problems when using -mca btl mx, sm, self

2007-09-28 Thread Hammad Siddiqi
Hello, I am using Sun HPC Toolkit 7.0 to compile and run my C MPI programs. I have tested the Myrinet installation using Myricom's own test programs. The Myricom software stack I am using is MX, the version is mx2g-1.1.7, and mx_mapper is also used. We have 4 nodes having 8 dual core processors
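For reference, a self-contained build-and-run cycle like the one described above can be sketched as follows. The hello-world source is a generic stand-in (the thread's actual ./hello source is not shown in the archive), and the `mpicc` wrapper sitting alongside the thread's `/opt/SUNWhpc/HPC7.0/bin/mpirun` path is an assumption:

```shell
# Write a minimal MPI hello world (generic sketch, not the thread's code).
cat > hello.c <<'EOF'
#include <mpi.h>
#include <stdio.h>

int main(int argc, char **argv) {
    int rank, size;

    MPI_Init(&argc, &argv);
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);
    MPI_Comm_size(MPI_COMM_WORLD, &size);
    printf("Hello from rank %d of %d\n", rank, size);
    MPI_Finalize();
    return 0;
}
EOF

# Compile with the Sun HPC ClusterTools 7 wrapper (path assumed to match
# the mpirun path quoted in this thread).
/opt/SUNWhpc/HPC7.0/bin/mpicc -o hello hello.c

# Run 8 ranks across the four nodes named in the thread.
/opt/SUNWhpc/HPC7.0/bin/mpirun -np 8 \
    -host indus1,indus2,indus3,indus4 ./hello
```

Getting this baseline working over a known-good transport (e.g. `-mca btl tcp,sm,self`) before switching to MX is a common way to separate launcher problems from interconnect problems.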