[OMPI users] large number of processes

2007-12-03 Thread hbtcx243
I installed Open MPI 1.2.4 on Red Hat Enterprise Linux 3. It works fine in normal usage. I then tested runs with a very large number of processes. $ mpiexec -n 128 --host node0 --mca btl_tcp_if_include eth0 --mca mpool_sm_max_size 2147483647 ./cpi $ mpiexec -n 256 --host node0,node1 --mca btl_tcp_if_in
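The source of the ./cpi used in these runs is not shown in the post; below is a minimal sketch of the classic compute-pi test program that cpi is usually assumed to be, so the mpiexec lines above can be reproduced with something concrete. The interval count and constants are illustrative, not taken from the original message.

/* Minimal sketch of a cpi-style test program (the actual ./cpi source is
 * not shown in the post; this follows the classic compute-pi example). */
#include <mpi.h>
#include <stdio.h>
#include <math.h>

int main(int argc, char *argv[])
{
    const double PI25DT = 3.141592653589793238462643;
    const int n = 100000;              /* number of intervals */
    int rank, size, i;
    double h, sum, x, local_pi, pi;

    MPI_Init(&argc, &argv);
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);
    MPI_Comm_size(MPI_COMM_WORLD, &size);

    /* Each rank integrates its share of 4/(1+x^2) over [0,1]. */
    h = 1.0 / (double)n;
    sum = 0.0;
    for (i = rank + 1; i <= n; i += size) {
        x = h * ((double)i - 0.5);
        sum += 4.0 / (1.0 + x * x);
    }
    local_pi = h * sum;

    /* Combine the partial sums on rank 0 and report the result. */
    MPI_Reduce(&local_pi, &pi, 1, MPI_DOUBLE, MPI_SUM, 0, MPI_COMM_WORLD);
    if (rank == 0)
        printf("pi is approximately %.16f, error is %.16f\n",
               pi, fabs(pi - PI25DT));

    MPI_Finalize();
    return 0;
}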

Re: [OMPI users] Suggestions on multi-compiler/multi-mpi build?

2007-12-03 Thread Katherine Holcomb
On Fri, 2007-11-30 at 19:05 -0800, Jim Kusznir wrote: > Thank you for your response! > > Just to clarify some things for my understanding: > > Do users load a single module that specifies both compiler and mpi > version (as opposed to loading two different modules, one for > compiler, and one for

Re: [OMPI users] large number of processes

2007-12-03 Thread Rolf vandeVaart
Hi: I managed to run a 256 process job on a single node. I ran a simple test in which all processes send a message to all others. This was using Sun's Binary Distribution of Open MPI on Solaris, which is based on r16572 of the 1.2 branch. The machine had 8 cores. burl-ct-v40z-0 49 =>/opt/SUNW
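The exact test source is not included in the reply; the following is a hedged sketch of an "every process sends to every other process" exchange using MPI_Alltoall, which is one straightforward way to exercise that pattern at 256 processes. Buffer contents and sizes here are only illustrative.

/* Hedged sketch of an all-to-all message test similar in spirit to the one
 * described (the actual test source is not shown in the post). */
#include <mpi.h>
#include <stdio.h>
#include <stdlib.h>

int main(int argc, char *argv[])
{
    int rank, size, i;

    MPI_Init(&argc, &argv);
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);
    MPI_Comm_size(MPI_COMM_WORLD, &size);

    /* One int destined for each peer, one int received from each peer. */
    int *sendbuf = malloc(size * sizeof(int));
    int *recvbuf = malloc(size * sizeof(int));
    for (i = 0; i < size; i++)
        sendbuf[i] = rank;             /* tag each message with our rank */

    MPI_Alltoall(sendbuf, 1, MPI_INT, recvbuf, 1, MPI_INT, MPI_COMM_WORLD);

    /* Verify that we received each peer's rank back. */
    for (i = 0; i < size; i++) {
        if (recvbuf[i] != i)
            fprintf(stderr, "rank %d: bad data from %d\n", rank, i);
    }
    if (rank == 0)
        printf("all-to-all completed across %d processes\n", size);

    free(sendbuf);
    free(recvbuf);
    MPI_Finalize();
    return 0;
}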

[OMPI users] OpenIB BTL broken on ompi-trunk?

2007-12-03 Thread Jon Mason
I'm seeing a crash in the openib btl on ompi-trunk when running any tests (whether running my own programs or generic ones). For example, when running IMB pingpong I get the following: $ mpirun --n 2 --host vic12,vic20 -mca btl openib,self # /usr/mpi/gcc/openmpi-trunk/tests/IMB-2.3/IMB-MPI1 pingp
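For context, the IMB PingPong case bounces a message back and forth between two ranks. The actual benchmark is IMB-MPI1 from the Intel MPI Benchmarks; the sketch below only illustrates the communication pattern that triggers the reported crash, with the message size and repetition count chosen arbitrarily.

/* Rough sketch of the PingPong pattern between two ranks; IMB-MPI1 itself
 * sweeps many message sizes and does far more careful timing. */
#include <mpi.h>
#include <stdio.h>
#include <string.h>

#define MSG_SIZE 1024
#define REPS     1000

int main(int argc, char *argv[])
{
    int rank, size, i;
    char buf[MSG_SIZE];
    MPI_Status status;

    MPI_Init(&argc, &argv);
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);
    MPI_Comm_size(MPI_COMM_WORLD, &size);
    memset(buf, 0, sizeof(buf));

    if (size >= 2 && rank < 2) {
        double t0 = MPI_Wtime();
        for (i = 0; i < REPS; i++) {
            if (rank == 0) {
                MPI_Send(buf, MSG_SIZE, MPI_CHAR, 1, 0, MPI_COMM_WORLD);
                MPI_Recv(buf, MSG_SIZE, MPI_CHAR, 1, 0, MPI_COMM_WORLD, &status);
            } else {
                MPI_Recv(buf, MSG_SIZE, MPI_CHAR, 0, 0, MPI_COMM_WORLD, &status);
                MPI_Send(buf, MSG_SIZE, MPI_CHAR, 0, 0, MPI_COMM_WORLD);
            }
        }
        if (rank == 0)
            printf("avg round trip: %.3f us\n",
                   (MPI_Wtime() - t0) * 1e6 / REPS);
    }

    MPI_Finalize();
    return 0;
}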

Re: [OMPI users] OpenIB BTL broken on ompi-trunk?

2007-12-03 Thread Jon Mason
On Mon, Dec 03, 2007 at 02:44:37PM -0600, Jon Mason wrote: > I'm seeing a crash in the openib btl on ompi-trunk when running any > tests (whether running my own programs or generic ones). For example, > when running IMB pingpong I get the following: > > $ mpirun --n 2 --host vic12,vic20 -mca btl

[OMPI users] Using mtrace with openmpi segfaults

2007-12-03 Thread Jeffrey M Ceason
Having trouble using mtrace with Open MPI. Whenever I call mtrace before or after MPI_Init, the application terminates. This only seems to happen with MPI. Is there a way to disable the Open MPI memory wrappers? Are there known issues with user applications using mallopts and the mal
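mtrace() and muntrace() are glibc calls from <mcheck.h> that log allocations to the file named by the MALLOC_TRACE environment variable. A minimal reproduction of the pattern described (tracing enabled around MPI_Init) might look like the sketch below; whether it terminates will depend on how Open MPI's memory wrappers interact with the malloc hooks glibc installs.

/* Minimal reproduction of the pattern described: glibc mtrace() enabled
 * around MPI_Init. Run with MALLOC_TRACE=/tmp/mtrace.out set. */
#include <mcheck.h>
#include <mpi.h>
#include <stdio.h>
#include <stdlib.h>

int main(int argc, char *argv[])
{
    int rank;

    mtrace();                         /* start logging malloc/free calls */

    MPI_Init(&argc, &argv);
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);

    void *p = malloc(64);             /* a traced allocation */
    free(p);

    if (rank == 0)
        printf("mtrace test ran on rank 0\n");

    MPI_Finalize();
    muntrace();                       /* stop tracing */
    return 0;
}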