[OMPI users] configure is too smart !

2007-03-06 Thread Christian Simon
Dear developers, I "switched" from LAM/MPI to Open MPI recently. I am using Mac OS X Server on small clusters, previously with XLF/XLC on G5, now gfortran/gcc on Intel machines. Since users are used to Unix file systems, and since most application/library builds are not aware of HFS+ file systems...

Re: [OMPI users] Current working directory issue

2007-03-06 Thread Jeff Squyres
OMPI uses the getcwd() library call to determine the pwd, whereas the shell $PWD variable contains the shell's point of view of what the PWD is (I *suspect* that the pwd(1) shell command also uses getcwd(), but I don't know that for sure). From the OS X getcwd(3) man page: The getcwd()...
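
A minimal sketch (not from the thread) of the difference being described: getcwd() asks the C library/kernel for the resolved current directory, while $PWD is only whatever the shell exported, so the two can disagree, e.g. across symlinks or an automounter.

/* Sketch: compare getcwd() with the shell-inherited $PWD variable. */
#include <stdio.h>
#include <stdlib.h>
#include <unistd.h>
#include <limits.h>

int main(void)
{
    char buf[PATH_MAX];
    const char *env_pwd = getenv("PWD");   /* the shell's point of view */

    if (getcwd(buf, sizeof(buf)) != NULL)
        printf("getcwd(): %s\n", buf);
    else
        perror("getcwd");

    printf("$PWD    : %s\n", env_pwd ? env_pwd : "(unset)");
    return 0;
}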

Re: [OMPI users] Fortran90 interfaces--problem?

2007-03-06 Thread Jeff Squyres
On Mar 5, 2007, at 9:50 AM, Michael wrote: I have discovered a problem with the Fortran90 interfaces for all types of communication when one uses derived datatypes (I'm currently using openmpi-1.3a1r13918 [for testing] and openmpi-1.1.2 [for compatibility with an HPC system]), for example call...

Re: [OMPI users] BLACS tests fails on IPF

2007-03-06 Thread Jeff Squyres
Sorry for the delay in replying -- we've been quite busy trying to get OMPI v1.2 out the door! Are you sure that you built BLACS properly with Open MPI? Check this FAQ item: http://www.open-mpi.org/faq/?category=mpi-apps#blacs In particular, note that there are items in Bmake.inc that...

Re: [OMPI users] Fortran90 interfaces--problem?

2007-03-06 Thread Åke Sandgren
On Tue, 2007-03-06 at 09:51 -0500, Jeff Squyres wrote: > On Mar 5, 2007, at 9:50 AM, Michael wrote: > > > I have discovered a problem with the Fortran90 interfaces for all > > types of communication when one uses derived datatypes (I'm currently > > using openmpi-1.3a1r13918 [for testing] and open...

Re: [OMPI users] performance question

2007-03-06 Thread Jeff Squyres
On Feb 19, 2007, at 1:53 PM, Mark Kosmowski wrote: [snipped good description of cluster] Sorry for the delay in replying -- traveling for a week-long OMPI developer meeting and trying to get v1.2 out the door has sucked up all of our time recently. :-( For just the one system with two...

Re: [OMPI users] Fortran90 interfaces--problem?

2007-03-06 Thread Jeff Squyres
On Mar 6, 2007, at 10:23 AM, Åke Sandgren wrote: What about the "Fortran 2003 ISO_C_BINDING" -- couldn't a C_LOC be used here? (I probably don't know what I'm talking about, but I just saw a reference to it.) FWIW, we wrote a paper about a proposed set of Fortran 2003 bindings that uses the ISO_C_BINDING...

Re: [OMPI users] configure is too smart !

2007-03-06 Thread Brian Barrett
Sure, we can add a FAQ entry on that :). At present, configure decides whether Open MPI will be installed on a case-sensitive file system or not based on what the build file system does, which is far from perfect but covers 99.9% of the cases. You happen to be the 0.1%, but we do have an...

Re: [OMPI users] MPI_Comm_Spawn

2007-03-06 Thread Rozzen.VINCONT
Hi Tim, I am getting back to you. "What kind of system is it?" => The system is Debian Sarge. "How many nodes are you running on?" => There is no cluster configured, so I guess I am working without a node environment. "Have you been able to try a more recent version of Open MPI?" => Today, I tried with versio...
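
For reference, a minimal sketch (hypothetical; the child executable name "./worker" is assumed, not taken from the thread) of the kind of MPI_Comm_spawn usage under discussion:

/* Sketch: a parent that spawns one copy of a child executable and
 * later disconnects from it. On a single machine with no resource
 * manager, repeated spawns like this are where the blocking described
 * later in the thread shows up. */
#include <mpi.h>

int main(int argc, char **argv)
{
    MPI_Comm intercomm;

    MPI_Init(&argc, &argv);

    MPI_Comm_spawn("./worker", MPI_ARGV_NULL, 1, MPI_INFO_NULL,
                   0, MPI_COMM_SELF, &intercomm, MPI_ERRCODES_IGNORE);

    /* ... exchange data with the child over intercomm ... */

    MPI_Comm_disconnect(&intercomm);
    MPI_Finalize();
    return 0;
}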

[OMPI users] MPI_PACK very slow?

2007-03-06 Thread Michael
I have a section of code where I need to send 8 separate integers via BCAST. Initially I was just putting the 8 integers into an array and then sending that array. I just tried using MPI_PACK on those 8 integers and I'm seeing a massive slowdown in the code; I have a lot of other communication...
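
A sketch (with assumed variable names, not Michael's actual code) of the two approaches being compared: broadcasting the 8 integers as a plain array versus packing them first with MPI_Pack:

#include <mpi.h>

void bcast_eight_ints(int vals[8], int root, MPI_Comm comm)
{
    /* Approach 1: broadcast the contiguous array directly. */
    MPI_Bcast(vals, 8, MPI_INT, root, comm);
}

void bcast_eight_ints_packed(int vals[8], int root, MPI_Comm comm)
{
    /* Approach 2: pack into a byte buffer, broadcast, unpack.
     * Functionally equivalent, but adds copies and buffer bookkeeping. */
    char buf[8 * sizeof(int) + 64];   /* generous upper bound */
    int pos = 0, rank;

    MPI_Comm_rank(comm, &rank);
    if (rank == root)
        MPI_Pack(vals, 8, MPI_INT, buf, sizeof(buf), &pos, comm);

    MPI_Bcast(buf, sizeof(buf), MPI_PACKED, root, comm);

    pos = 0;
    MPI_Unpack(buf, sizeof(buf), &pos, vals, 8, MPI_INT, comm);
}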

Re: [OMPI users] MPI_PACK very slow?

2007-03-06 Thread George Bosilca
I doubt this comes from MPI_Pack/MPI_Unpack. The difference is 137 seconds for 5 calls. That's basically 27 seconds per call to MPI_Pack, for packing 8 integers. I know the code and I'm certain there is no way to spend 27 seconds there. Can you run your application using valgrind...

Re: [OMPI users] configure is too smart !

2007-03-06 Thread Christian Simon
Brian Barrett wrote: specify --with-cs-fs or --without-cs-fs Unbelievable! Thanks again. -- Christian SIMON

Re: [OMPI users] MPI_Comm_Spawn

2007-03-06 Thread Ralph Castain
I believe I know what is happening here. My availability in the next week is pretty limited due to a family emergency, but I'll take a look when I get back. In brief, this is a resource starvation issue where the system thinks your node is unable to support any further processes and so it blocks.

Re: [OMPI users] MPI_PACK very slow?

2007-03-06 Thread Michael
I discovered I made a minor change that cost me dearly (I had thought I had tested this single change, but perhaps didn't track the timing data closely). MPI_Type_create_struct performs well only when all the data is contiguous in memory (at least for Open MPI 1.1.2). Is this normal or expected...
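
A sketch (hypothetical struct layout, not Michael's code) of describing a C struct with MPI_Type_create_struct; the point under discussion is that when the described members are not contiguous in memory, every send has to gather/scatter them:

#include <mpi.h>
#include <stddef.h>

typedef struct {
    int    id;
    double weight;
    int    flags[4];
} particle_t;

MPI_Datatype make_particle_type(void)
{
    particle_t   p;
    MPI_Datatype newtype;
    int          blocklens[3] = { 1, 1, 4 };
    MPI_Datatype types[3]     = { MPI_INT, MPI_DOUBLE, MPI_INT };
    MPI_Aint     base, displs[3];

    /* Displacements are computed from real addresses, so any padding
     * between members (i.e. non-contiguity) is described correctly. */
    MPI_Get_address(&p,          &base);
    MPI_Get_address(&p.id,       &displs[0]);
    MPI_Get_address(&p.weight,   &displs[1]);
    MPI_Get_address(&p.flags[0], &displs[2]);
    for (int i = 0; i < 3; i++)
        displs[i] -= base;

    MPI_Type_create_struct(3, blocklens, displs, types, &newtype);
    MPI_Type_commit(&newtype);
    return newtype;
}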

Re: [OMPI users] MPI_PACK very slow?

2007-03-06 Thread George Bosilca
On Mar 6, 2007, at 4:51 PM, Michael wrote: MPI_Type_create_struct performs well only when all the data is contiguous in memory (at least for Open MPI 1.1.2). There are always benefits to sending contiguous data, especially when the message is small. Packing and unpacking are costly operations...