Re: [OMPI users] Debugging memory use of Open MPI

2009-04-09 Thread Eugene Loh
Shaun Jackman wrote: Eugene Loh wrote: I'm no expert, but I think it's something like this: 1) If the messages are short, they're sent over to the receiver. If the receiver does not expect them (no MPI_Irecv posted), it buffers them up. 2) If the messages are long, only a little bit is sent over to the receiver
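The two-regime behavior described above is Open MPI's eager vs. rendezvous handling, and the crossover size is tunable per BTL via MCA parameters. A sketch of how one might inspect and adjust it for the shared-memory BTL; the value shown is illustrative, not a recommendation, and `./my_app` is a placeholder:

```shell
# Inspect the shared-memory BTL's eager limit (bytes). Messages at or below
# this size are sent eagerly and buffered by the receiver if no matching
# receive is posted -- one way unexpected messages can consume memory.
ompi_info --param btl sm | grep eager

# Illustrative only: lower the eager limit so fewer messages are buffered
# unexpectedly, at some latency cost for small messages.
mpirun -np 4 --mca btl_sm_eager_limit 1024 ./my_app
```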

Re: [OMPI users] Debugging memory use of Open MPI

2009-04-09 Thread Shaun Jackman
Eugene Loh wrote: I'm no expert, but I think it's something like this: 1) If the messages are short, they're sent over to the receiver. If the receiver does not expect them (no MPI_Irecv posted), it buffers them up. 2) If the messages are long, only a little bit is sent over to the receiver

[OMPI users] Problems configuring OpenMPI 1.3.1 with numa, torque, and openib

2009-04-09 Thread Gus Correa
Dear OpenMPI experts, I had some trouble building OpenMPI 1.3.1 on a CentOS 5.2 Linux x86_64 computer. The configure scripts seem to have changed, and work differently than before, particularly w.r.t. additional libraries like numa, torque, and openib. The new behavior can be a bit unexpected and p
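For reference, a configure invocation of the kind under discussion might look like the following. The prefix and library paths are placeholders for the actual locations on the build machine; the 1.3 series accepts optional directory arguments to `--with-libnuma`, `--with-tm`, and `--with-openib`:

```shell
# Sketch of an OpenMPI 1.3.1 build with numa, torque, and openib support.
# All paths below are hypothetical -- substitute your own install locations.
./configure --prefix=/opt/openmpi-1.3.1 \
            --with-libnuma=/usr \
            --with-tm=/usr/local/torque \
            --with-openib=/usr
make all install
```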

Re: [OMPI users] Debugging memory use of Open MPI

2009-04-09 Thread Eugene Loh
Shaun Jackman wrote: When running my Open MPI application, I'm seeing three processors that are using five times as much memory as the others when they should all use the same amount of memory. To start the debugging process, I would like to know if it's my application or the Open MPI library

[OMPI users] Debugging memory use of Open MPI

2009-04-09 Thread Shaun Jackman
When running my Open MPI application, I'm seeing three processors that are using five times as much memory as the others when they should all use the same amount of memory. To start the debugging process, I would like to know if it's my application or the Open MPI library that's using the additional memory
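One low-effort first step, before instrumenting the application itself, is to compare the resident set size of each rank's process from outside. A sketch, assuming a Unix-like cluster and an application binary named `my_app` (hypothetical name); run it on each node and the ranks holding the extra memory will stand out:

```shell
# Print PID and resident set size (in KB, per ps) for every my_app process.
for pid in $(pgrep my_app); do
    printf '%s: %s KB\n' "$pid" "$(ps -o rss= -p "$pid")"
done
```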

Re: [OMPI users] shared libraries issue compiling 1.3.1/intel 10.1.022

2009-04-09 Thread Gus Correa
Hi Francesco Francesco Pietra wrote: Hi: As the failure to find "limits.h" in my attempted compilations of Amber over the past few days (amd64 lenny, openmpi 1.3.1, intel compilers 10.1.015) is probably (or so I hope) a bug in the version of the intel compilers used (on Debian I made the same observation

Re: [OMPI users] I encountered Bus Error while running openMPI on IB.

2009-04-09 Thread Ralph Castain
--with-tm=/usr/local/torque2.1.8/ If you are using rsh, then why did you specify to build with Torque? The deprecated parameter is just a warning and shouldn't cause a problem. The current param would be to specify -mca plm_rsh_agent rsh. No big deal, though - either will work. You migh
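The current parameter spelling mentioned above would be used like this (the executable name and process count are placeholders):

```shell
# 1.3-series name for the rsh/ssh launch agent parameter. As noted above,
# the deprecated older spelling still works but prints a warning.
mpirun --mca plm_rsh_agent rsh -np 4 ./my_app
```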

[OMPI users] I encountered Bus Error while running openMPI on IB.

2009-04-09 Thread Tsung Han Shie
Hi all, I've built an openmpi-1.3.1 binary on a machine with openSUSE 10.2, gcc 4.1.2. The cluster I tried to run jobs on uses a Silverstorm InfiniBand system and uses ibverbs as its IB driver. The configure parameters are listed below: ./configure --prefix=/d/thshie/vp31/openmpi/ --with-openib=/usr/lo

[OMPI users] shared libraries issue compiling 1.3.1/intel 10.1.022

2009-04-09 Thread Francesco Pietra
Hi: As the failure to find "limits.h" in my attempted compilations of Amber over the past few days (amd64 lenny, openmpi 1.3.1, intel compilers 10.1.015) is probably (or so I hope) a bug in the version of the intel compilers used (on Debian I made the same observations reported for Gentoo, http://software.

Re: [OMPI users] mpirun self,sm

2009-04-09 Thread Ralph Castain
hostname never calls MPI_Init, and therefore never initializes the BTL subsystem. Therefore, hostname will always run correctly. mpirun is not an MPI process, nor are the daemons used by OMPI to launch the MPI job. Thus, they also do not call MPI_Init, and therefore do not see the BTL subsystem
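The distinction can be seen directly on the command line: hostname succeeds regardless of the BTL list because it never initializes MPI, while only a real MPI program exercises the self,sm restriction. Host names are taken from the original post; `./mpi_hello` is a hypothetical MPI executable:

```shell
# Succeeds on both hosts: /bin/hostname never calls MPI_Init, so the
# BTL selection is never consulted.
mpirun -np 2 --mca btl self,sm --host dx,sx hostname

# An actual MPI program would fail across the two hosts: sm only connects
# ranks sharing a node, and self only connects a rank to itself, so there
# is no transport between dx and sx.
mpirun -np 2 --mca btl self,sm --host dx,sx ./mpi_hello
```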

[OMPI users] mpirun self,sm

2009-04-09 Thread Robert Kubrick
How is this possible?

dx:~> mpirun -v -np 2 --mca btl self,sm --host dx,sx hostname
dx
sx
dx:~> netstat -i
Kernel Interface table
Iface MTU Met RX-OK RX-ERR RX-DRP RX-OVR TX-OK TX-ERR TX-DRP TX-OVR Flg
eth0 1500 0 998755 0 0 0 1070323 0 0 0 BMRU
eth1