Re: [OMPI users] valgrind slaves in singleton mode

2012-11-16 Thread Tom Bryan (tombry)
> If I want to run valgrind on my processes, what steps should be
> taken? I'm currently more interested in running valgrind on the
> slave processes.

I've never done it, but have you looked at the following FAQs?
http://www.open-mpi.org/faq/?category=debugging
---Tom

Re: [OMPI users] mpi test program "ring" failed: blocked at MPI_Send

2012-09-25 Thread Tom Bryan (tombry)
On 9/25/12 9:10 AM, "Jeff Squyres (jsquyres)" wrote:
>> problem, so i fixed it using "--mca btl_tcp_if_include bond0" because I
>> know this is the high speed network interface I should use on each node.
>
> Glad it works for you!
>
> If you're not using those interfaces (they might be related to Xen

Re: [OMPI users] sge tight intregration leads to bad allocation

2012-04-03 Thread Tom Bryan
How are you launching the application? I had an app that did a Spawn_multiple with tight SGE integration, and there was a difference in behavior depending on whether or not the app was launched via mpiexec. I'm not sure whether it's the same issue as you're seeing, but Reuti describes the problem

Re: [OMPI users] mpicc command not found - Fedora

2012-03-29 Thread Tom Bryan
And if "which mpicc" doesn't find the executable, you could try
rpmquery -l openmpi
and
rpmquery -l openmpi-devel
Do you see mpicc? Is its parent directory in your PATH?
---Tom
On 3/29/12 8:33 AM, "Hameed Alzahrani" wrote:
> Hi,
>
> When you type "which mpicc" does the system return the corr

Re: [OMPI users] How to check that open MPI installed and work correctly

2012-03-26 Thread Tom Bryan
On 3/25/12 5:22 PM, "Hameed Alzahrani" wrote:
> I installed open MPI on Linux cluster which consist of three machines. I want
> to ask how can I check that open MPI work correctly and is there a special
> configurations that I need to set to make the machines connect to each other
> because I jus
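A quick way to sanity-check a fresh installation is a tiny program that prints each rank's host name. The sketch below is my own illustration, not code from the thread; the hostfile name and the node count of three are assumptions. Compile it with mpiCC (or mpic++) and launch with something like "mpiexec --hostfile hosts -np 3 ./mpi_hello"; if the machines can reach each other, each rank should report a different host.

    // Minimal sanity-check program (illustrative sketch, MPI-2 C++ bindings).
    #include <mpi.h>
    #include <iostream>

    int main(int argc, char* argv[])
    {
        MPI::Init(argc, argv);

        char name[MPI::MAX_PROCESSOR_NAME];
        int  len = 0;
        MPI::Get_processor_name(name, len);

        // Each rank reports where it is actually running.
        std::cout << "rank " << MPI::COMM_WORLD.Get_rank()
                  << " of "  << MPI::COMM_WORLD.Get_size()
                  << " running on " << name << std::endl;

        MPI::Finalize();
        return 0;
    }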

Re: [OMPI users] Spawn_multiple with tight integration to SGE grid engine

2012-02-08 Thread Tom Bryan
On 2/8/12 4:52 PM, "Tom Bryan" wrote:
> Got it. Unfortunately, we *definitely* need THREAD_MULTIPLE in our case.
> I rebuilt my code against 1.4.4.
>
> When I run my test "e" from before, which is basically just
> mpiexec -np 1 ./mpitest
> I get the f

Re: [OMPI users] Spawn_multiple with tight integration to SGE grid engine

2012-02-08 Thread Tom Bryan
On 2/6/12 5:10 PM, "Reuti" wrote:
> On 06.02.2012 at 22:28, Tom Bryan wrote:
>
>> On 2/6/12 8:14 AM, "Reuti" wrote:
>>
>>>> If I need MPI_THREAD_MULTIPLE, and openmpi is compiled with thread support,
>>>> it's not cl

Re: [OMPI users] Spawn_multiple with tight integration to SGE grid engine

2012-02-06 Thread Tom Bryan
On 2/6/12 8:14 AM, "Reuti" wrote:
>> If I need MPI_THREAD_MULTIPLE, and openmpi is compiled with thread support,
>> it's not clear to me whether MPI::Init_Thread() and
>> MPI::Init_Thread(MPI::THREAD_MULTIPLE) would give me the same behavior from
>> Open MPI.
>
> If you need thread support, you
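For reference, the usual pattern is to request the threading level explicitly and then check what level the library actually granted. The sketch below is my own illustration, not code from the thread, and assumes the MPI-2 C++ bindings that Open MPI 1.4/1.5 still shipped.

    // Request MPI_THREAD_MULTIPLE explicitly and verify the granted level.
    #include <mpi.h>
    #include <iostream>

    int main(int argc, char* argv[])
    {
        int provided = MPI::Init_thread(argc, argv, MPI::THREAD_MULTIPLE);

        if (provided < MPI::THREAD_MULTIPLE) {
            std::cerr << "MPI_THREAD_MULTIPLE not available (got level "
                      << provided << "); was Open MPI built with thread support?"
                      << std::endl;
            MPI::COMM_WORLD.Abort(1);
        }

        // ... multi-threaded MPI work would go here ...

        MPI::Finalize();
        return 0;
    }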

Re: [OMPI users] Spawn_multiple with tight integration to SGE grid engine

2012-02-03 Thread Tom Bryan
OK. Sorry for the delay. I needed to read through this thread a few times and try some experiments. Let me reply to a few of these pieces, and then I'll talk about those experiments.
On 1/31/12 9:26 AM, "Reuti" wrote:
>>> I never used spawn_multiple, but isn't it necessary to start it with mpi

Re: [OMPI users] Spawn_multiple with tight integration to SGE grid engine

2012-01-30 Thread Tom Bryan
On 1/29/12 5:44 PM, "Reuti" wrote:
> you compiled Open MPI --with-sge I assume, as the above is working - fine.

Yes, we compiled --with-sge.

>> #$ -pe orte 1-
>
> This number should match the processes you want to start plus one for the master.
> Otherwise SGE might refuse to start a process on a

[OMPI users] Spawn_multiple with tight integration to SGE grid engine

2012-01-27 Thread Tom Bryan
I am in the process of setting up a grid engine (SGE) cluster for running Open MPI applications. I'll detail the setup below, but my current problem is that this call to Spawn_multiple never seems to return.

// Spawn all of the child processes.
_intercomm = MPI::COMM_WORLD.Spawn_multiple( _nPr
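For context, a complete parent-side call looks roughly like the sketch below. This is my own illustration, not the poster's actual code: the "./worker" binary, the process count, and the surrounding setup are placeholders. Spawn_multiple is collective over the parent communicator (only the root rank's arguments are significant), and each spawned child is expected to call MPI::Init itself.

    // Illustrative parent-side sketch of Spawn_multiple (MPI-2 C++ bindings).
    #include <mpi.h>
    #include <iostream>

    int main(int argc, char* argv[])
    {
        MPI::Init(argc, argv);

        const int       nCommands            = 1;
        const char*     commands[nCommands]  = { "./worker" };   // hypothetical child binary
        const char*     workerArgv[]         = { NULL };         // no extra arguments
        const char**    argvs[nCommands]     = { workerArgv };
        const int       maxProcs[nCommands]  = { 2 };            // children per command
        const MPI::Info infos[nCommands]     = { MPI::INFO_NULL };

        // Returns an intercommunicator whose remote group is the set of
        // freshly spawned children.
        MPI::Intercomm intercomm = MPI::COMM_WORLD.Spawn_multiple(
            nCommands, commands, argvs, maxProcs, infos, /* root = */ 0);

        std::cout << "spawned " << intercomm.Get_remote_size()
                  << " child process(es)" << std::endl;

        intercomm.Disconnect();
        MPI::Finalize();
        return 0;
    }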