Re: [OMPI users] problems with hostfile when doing MPMD

2008-04-13 Thread Ralph Castain
I believe this -should- work, but can't verify it myself. The most important thing is to be sure you built with --enable-heterogeneous, or else it will definitely fail. Ralph On 4/10/08 7:17 AM, "Rolf Vandevaart" wrote: > > On a CentOS Linux box, I see the following: > >> grep 113 /usr/inclu
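For reference, a minimal sketch of the heterogeneous build Ralph refers to; the install prefix is only a placeholder:

    ./configure --enable-heterogeneous --prefix=/opt/openmpi
    make all install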

Re: [OMPI users] problems with hostfile when doing MPMD

2008-04-13 Thread Ralph Castain
Hi Jody. Simple answer: the 1.2.x series does not support multiple hostfiles. I believe you will find that documented in the FAQ section. What you have to do here is have -one- hostfile that includes all the hosts, and then use -host in each app-context to indicate which of those hosts are to be used fo
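A minimal sketch of the single-hostfile MPMD launch Ralph describes, assuming hypothetical host names (node1..node4) and program names (app_a, app_b):

    # hosts.txt lists every host once
    mpirun --hostfile hosts.txt \
        -np 2 -host node1,node2 ./app_a : \
        -np 4 -host node3,node4 ./app_b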

Re: [OMPI users] Problems using Intel MKL with OpenMPI and Pathscale

2008-04-13 Thread Åke Sandgren
On Sun, 2008-04-13 at 08:00 -0400, Jeff Squyres wrote: > Do you get the same error if you disable the memory handling in Open > MPI? You can configure OMPI with: > > --disable-memory-manager Ah, I have apparently missed that config flag, will try on Monday. -- Ake Sandgren, HPC2N, Umea

Re: [OMPI users] Troubles with MPI-IO Test and Torque/PVFS

2008-04-13 Thread Jeff Squyres
It looks like you're seg faulting when calling some flavor of printf (perhaps vsnprintf?) in the make_error_messages() function. You might want to double check the read_write_file() function to see exactly what kind of error it is encountering such that it is calling report_errs(). On Apr 1
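If the test can be rerun under a debugger, one hedged way to get a backtrace at the crash; the binary name and process count here are assumptions, not taken from the thread:

    ulimit -c unlimited              # allow core files to be written
    mpirun -np 4 ./mpi_io_test       # reproduce the segfault
    gdb ./mpi_io_test core           # then "bt" to see the frame inside make_error_messages()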

Re: [OMPI users] Problems using Intel MKL with OpenMPI and Pathscale

2008-04-13 Thread Jeff Squyres
Do you get the same error if you disable the memory handling in Open MPI? You can configure OMPI with: --disable-memory-manager On Apr 9, 2008, at 3:01 PM, Åke Sandgren wrote: Hi! I have an annoying problem that I hope someone here has some info on. I'm trying to build a code with Ope
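A minimal sketch of the configure line Jeff suggests; the install prefix is only a placeholder:

    ./configure --disable-memory-manager --prefix=/opt/openmpi
    make all install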

Re: [OMPI users] Oversubscription performance problem

2008-04-13 Thread Jeff Squyres
Sorry for the delays in replying. The central problem is that Open MPI is much more aggressive about its message passing progress than LAM is -- it simply wasn't designed to share processors well; that aggressiveness is a mechanism to get as high performance as possible. mpi_yield_when_idle is most helpful only for certai
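A hedged example of turning on the yield behavior Jeff mentions; the process count and program name are hypothetical:

    mpirun --mca mpi_yield_when_idle 1 -np 8 ./app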