Re: [OMPI users] Memory allocation error when linking with MPI libraries

2010-08-08 Thread Terry Frankcombe
You're trying to do a 6 GB allocation. Can your underlying system handle that? If you compile without the wrapper, does it work? I see your executable is using the OMPI memory stuff. IIRC there are switches to turn that off. On Fri, 2010-08-06 at 15:05 +0200, Nicolas Deladerriere wrote: > Hello,
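
A quick way to check the underlying system is a small C test built with the plain compiler rather than the mpicc wrapper; this is only a sketch, with the 6 GB size taken from the report above:

    #include <stdio.h>
    #include <stdlib.h>
    #include <string.h>

    int main(void)
    {
        /* Try the same 6 GB allocation outside of any MPI wrapper. */
        size_t n = 6UL * 1024 * 1024 * 1024;
        char *p = malloc(n);
        if (p == NULL) {
            perror("malloc");
            return 1;
        }
        /* Touch the memory so the kernel actually backs the pages. */
        memset(p, 0, n);
        printf("6 GB allocation succeeded\n");
        free(p);
        return 0;
    }

If this succeeds when compiled with plain gcc but fails through mpicc, the wrapper's memory hooks are the likely suspect.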

Re: [OMPI users] Memory allocation error when linking with MPI libraries

2010-08-08 Thread Nicolas Deladerriere
Yes, I'm using a 24 GB machine on a 64-bit Linux OS. If I compile without the wrapper, I do not get any problems. It seems that when I link with Open MPI, my program uses a kind of Open MPI-implemented malloc. Is it possible to switch it off in order to only use malloc from libc? Nicolas 2010/8/8 Te

Re: [OMPI users] Memory allocation error when linking with MPI libraries

2010-08-08 Thread Nysal Jan
What interconnect are you using? InfiniBand? Use the "--without-memory-manager" option while building ompi in order to disable ptmalloc. Regards --Nysal On Sun, Aug 8, 2010 at 7:49 PM, Nicolas Deladerriere < nicolas.deladerri...@gmail.com> wrote: > Yes, I'm using a 24 GB machine on a 64-bit Linux OS. >
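
For reference, the option goes on Open MPI's configure line when building from source, roughly like this (the install prefix is illustrative):

    ./configure --prefix=/opt/openmpi --without-memory-manager
    make all install

A rebuild is required; the memory manager cannot be disabled in an already-installed library by this route.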

[OMPI users] Fortran code generation error on 1.5 rc5

2010-08-08 Thread Damien
Hi all, There's a code generation bug in the CMake/Visual Studio build of rc5 on VS 2008. A release build with static libs, F77 and F90 support gives an error at line 91 in mpif-config.h: parameter (MPI_STATUS_SIZE=) This obviously makes the compiler unhappy. In older trunk builds this
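
The empty right-hand side looks like an unsubstituted configure value; in a working build that parameter carries a concrete integer. The 6 below is only an assumed example of what a typical Open MPI build generates:

    parameter (MPI_STATUS_SIZE=6)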

Re: [OMPI users] MPI_Bcast issue

2010-08-08 Thread Randolph Pullen
Thanks, although “An intercommunicator cannot be used for collective communication”, i.e., bcast calls, I can see how the MPI_Group_xx calls can be used to produce a useful group and then a communicator; thanks again, but this is really a side issue to my main question about MPI_Bcast. I s
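
For what it's worth, a minimal sketch of the MPI_Group_xx route mentioned above: carve a subgroup out of MPI_COMM_WORLD, build an intracommunicator from it, and broadcast inside that. The choice of the even-numbered ranks is an arbitrary example:

    #include <mpi.h>
    #include <stdio.h>

    int main(int argc, char **argv)
    {
        MPI_Init(&argc, &argv);

        int rank, size;
        MPI_Comm_rank(MPI_COMM_WORLD, &rank);
        MPI_Comm_size(MPI_COMM_WORLD, &size);

        /* Build a group holding the even-numbered world ranks. */
        MPI_Group world_group, even_group;
        MPI_Comm_group(MPI_COMM_WORLD, &world_group);

        int n_even = (size + 1) / 2;
        int ranks[n_even];
        for (int i = 0; i < n_even; i++)
            ranks[i] = 2 * i;
        MPI_Group_incl(world_group, n_even, ranks, &even_group);

        /* Turn the group into an intracommunicator
           (must be called by every rank in MPI_COMM_WORLD). */
        MPI_Comm even_comm;
        MPI_Comm_create(MPI_COMM_WORLD, even_group, &even_comm);

        /* Collectives such as MPI_Bcast work on the new communicator. */
        if (even_comm != MPI_COMM_NULL) {
            int value = (rank == 0) ? 42 : 0;
            MPI_Bcast(&value, 1, MPI_INT, 0, even_comm);
            printf("world rank %d got %d\n", rank, value);
            MPI_Comm_free(&even_comm);
        }

        MPI_Group_free(&even_group);
        MPI_Group_free(&world_group);
        MPI_Finalize();
        return 0;
    }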

Re: [OMPI users] MPI_Bcast issue

2010-08-08 Thread Ralph Castain
Hi Randolph, Unless your code is doing a connect/accept between the copies, there is no way they can cross-communicate. As you note, mpirun instances are completely isolated from each other - no process in one instance can possibly receive information from a process in another instance because i
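
A rough sketch of the connect/accept mechanism referred to above, assuming the port string is passed between the two mpirun instances out of band (e.g. copied by hand); the command-line handling here is illustrative:

    #include <mpi.h>
    #include <stdio.h>
    #include <string.h>

    /* Run one instance as "server" and another as "client <port>". */
    int main(int argc, char **argv)
    {
        MPI_Init(&argc, &argv);

        char port[MPI_MAX_PORT_NAME];
        MPI_Comm intercomm;

        if (argc > 1 && strcmp(argv[1], "server") == 0) {
            /* First mpirun instance: open a port, wait for the peer. */
            MPI_Open_port(MPI_INFO_NULL, port);
            printf("port: %s\n", port); /* hand this string to the client */
            MPI_Comm_accept(port, MPI_INFO_NULL, 0, MPI_COMM_WORLD,
                            &intercomm);
            MPI_Close_port(port);
        } else {
            /* Second mpirun instance: connect using the server's port. */
            strncpy(port, argv[2], MPI_MAX_PORT_NAME);
            MPI_Comm_connect(port, MPI_INFO_NULL, 0, MPI_COMM_WORLD,
                             &intercomm);
        }

        /* intercomm now spans the two otherwise isolated jobs. */
        MPI_Comm_disconnect(&intercomm);
        MPI_Finalize();
        return 0;
    }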