Re: [OMPI users] few Problems

2009-04-24 Thread Jeff Squyres
On Apr 23, 2009, at 3:59 PM, Luis Vitorio Cargnini wrote: I'm using NFS; my home dir is the same on all nodes. The problem is that when the key is generated, it is generated for a specific machine; the end of the key is user@host, and the system consults id_dsa on each machine. That's ok. I have

Re: [OMPI users] Open-MPI Presentation

2009-04-24 Thread Jeff Squyres
On Apr 23, 2009, at 3:27 PM, Alex Margolin wrote: I'm a grad student working on an MPI communication optimization, and I've been working with Open-MPI for quite a while now. I'm also a TA in a course about open-source development, and I would like to present Open-MPI as a case study. In par

Re: [OMPI users] running problem on Dell blade server, confirm 2d21ce3ce8be64d8104b3ad71b8c59e2514a72eb

2009-04-24 Thread Jeff Squyres
Per http://www.open-mpi.org/community/lists/announce/2009/03/0029.php, can you try upgrading to Open MPI v1.3.2? On Apr 24, 2009, at 5:21 AM, jan wrote: Dear Sir, I'm running a cluster with OpenMPI. $mpirun --mca mpi_show_mpi_alloc_mem_leaks 8 --mca mpi_show_handle_leaks 1 $HOME/test/cpi
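
(For context: mpi_show_handle_leaks makes Open MPI report MPI handles that are still allocated when MPI_Finalize runs. Below is a minimal sketch of the kind of leak it flags; this is a hypothetical illustration, not jan's cpi program.)

/* leak_demo.c -- hypothetical illustration, not jan's cpi program.
 * Compile: mpicc leak_demo.c -o leak_demo
 * Run:     mpirun --mca mpi_show_handle_leaks 1 -np 2 ./leak_demo
 */
#include <mpi.h>

int main(int argc, char **argv)
{
    MPI_Datatype t;

    MPI_Init(&argc, &argv);

    /* Create and commit a derived datatype, then "forget" to free it.
     * With mpi_show_handle_leaks enabled, Open MPI reports this handle
     * as still allocated at MPI_Finalize. */
    MPI_Type_contiguous(4, MPI_INT, &t);
    MPI_Type_commit(&t);
    /* Missing: MPI_Type_free(&t); */

    MPI_Finalize();
    return 0;
}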

Re: [OMPI users] MPI_Bcast from OpenMPI

2009-04-24 Thread shan axida
Thank you, Eugene Loh. It is very important for me to explain the spike in the figure! But I don't know how to hunt down the reason or how to check it. Would you please help me more practically? Thank you again. From: Eugene Loh To: Open MPI Users Sent: Friday, Ap

[OMPI users] running problem on Dell blade server, confirm 2d21ce3ce8be64d8104b3ad71b8c59e2514a72eb

2009-04-24 Thread jan
Dear Sir, I'm running a cluster with OpenMPI. $mpirun --mca mpi_show_mpi_alloc_mem_leaks 8 --mca mpi_show_handle_leaks 1 $HOME/test/cpi I got the error message as the job failed: Process 15 on node2 Process 6 on node1 Process 14 on node2 … … … Process 0 on node1 Process 10 on node

Re: [OMPI users] Launching MPI app manually when rsh/ssh can't be used...

2009-04-24 Thread Mariusz Mamoński
On Thu, Apr 23, 2009 at 9:03 PM, Jeff Squyres wrote: > On Apr 23, 2009, at 3:51 AM, Katz, Jacob wrote: > >> Is there a way to start up an MPI app by some manual procedure, when >> rsh/ssh cannot be used to log into a machine where part of the app should >> run? >> E.g. a set of commands that can b
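
(For context: one standard-MPI route that avoids rsh/ssh between the two sides is to start each program by hand and join them through the MPI-2 port calls, MPI_Open_port / MPI_Comm_accept / MPI_Comm_connect. The sketch below assumes that approach; the command-line handling and the manual hand-off of the port string are illustrative, not a procedure given in the thread.)

/* connect_demo.c -- illustrative sketch only.  Start each side by hand
 * (no rsh/ssh between them), then join the two jobs over an MPI port.
 * Server: mpirun -np 1 ./connect_demo server        (prints a port string)
 * Client: mpirun -np 1 ./connect_demo client "<port string>"
 */
#include <mpi.h>
#include <stdio.h>
#include <string.h>

int main(int argc, char **argv)
{
    char port[MPI_MAX_PORT_NAME];
    MPI_Comm inter;

    MPI_Init(&argc, &argv);

    if (argc > 1 && strcmp(argv[1], "server") == 0) {
        MPI_Open_port(MPI_INFO_NULL, port);
        printf("port: %s\n", port);    /* hand this string to the client */
        fflush(stdout);
        MPI_Comm_accept(port, MPI_INFO_NULL, 0, MPI_COMM_SELF, &inter);
        MPI_Close_port(port);
    } else if (argc > 2 && strcmp(argv[1], "client") == 0) {
        strncpy(port, argv[2], MPI_MAX_PORT_NAME - 1);
        port[MPI_MAX_PORT_NAME - 1] = '\0';
        MPI_Comm_connect(port, MPI_INFO_NULL, 0, MPI_COMM_SELF, &inter);
    } else {
        MPI_Finalize();
        return 1;
    }

    MPI_Comm_disconnect(&inter);
    MPI_Finalize();
    return 0;
}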

Re: [OMPI users] MPI_Bcast from OpenMPI

2009-04-24 Thread Eugene Loh
Right.  So, baseline performance seems reasonable, but there is an odd spike that seems difficult to explain.  This is annoying, but again:  how important is it to resolve that mystery?  You can spend a few days trying to hunt this down, only to find that it's some oddity that has no general re
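
(For context: a minimal sketch of the kind of loop used to time MPI_Bcast in threads like this one; the buffer size, iteration count, and output format are illustrative assumptions, not the poster's actual benchmark.)

/* bcast_timing.c -- illustrative sketch; COUNT and ITERS are assumptions.
 * Compile: mpicc bcast_timing.c -o bcast_timing
 * Run:     mpirun -np 16 ./bcast_timing
 */
#include <mpi.h>
#include <stdio.h>
#include <stdlib.h>

#define COUNT 4096     /* ints per broadcast */
#define ITERS 1000     /* broadcasts to average over */

int main(int argc, char **argv)
{
    int rank, i;
    int *buf;
    double t0, t1;

    MPI_Init(&argc, &argv);
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);
    buf = malloc(COUNT * sizeof(int));   /* contents don't matter for timing */

    MPI_Barrier(MPI_COMM_WORLD);         /* line the ranks up first */
    t0 = MPI_Wtime();
    for (i = 0; i < ITERS; i++)
        MPI_Bcast(buf, COUNT, MPI_INT, 0, MPI_COMM_WORLD);
    MPI_Barrier(MPI_COMM_WORLD);
    t1 = MPI_Wtime();

    if (rank == 0)
        printf("avg MPI_Bcast time: %g us\n", 1e6 * (t1 - t0) / ITERS);

    free(buf);
    MPI_Finalize();
    return 0;
}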