Re: [OMPI users] Problems running openmpi under os x

2007-07-12 Thread Tim Cornwell


Brian,

I think it's just a symbol clash. A test program linked with just
mpicxx works fine, but with our typical link it fails. I've narrowed
the problem down to a single shared library. This is from C++ and the
symbols have a namespace casa. Weeding out all the casa stuff and
some other cruft, we're left with:


0009df14 T QuantaProxy::fits()
0011277c S int __gnu_cxx::__capture_isnan<double>(double)
0014b4ae S std::invalid_argument::~invalid_argument()
0014b48e S std::invalid_argument::~invalid_argument()
00112790 S int std::isnan<double>(double)
001200e8 S void** std::fill_n<void**, unsigned int, void*>(void**, unsigned int, void* const&)
0012da12 S std::complex<...>* std::fill_n<std::complex<...>*, unsigned int, std::complex<...> >(std::complex<...>*, unsigned int, std::complex<...> const&)
0012d9ae S std::complex<...>* std::fill_n<std::complex<...>*, unsigned int, std::complex<...> >(std::complex<...>*, unsigned int, std::complex<...> const&)
00104a4c S bool* std::fill_n<bool*, unsigned int, bool>(bool*, unsigned int, bool const&)
0010b126 S double* std::fill_n<double*, unsigned int, double>(double*, unsigned int, double const&)
0012043a S float* std::fill_n<float*, unsigned int, float>(float*, unsigned int, float const&)
00120386 S int* std::fill_n<int*, unsigned int, int>(int*, unsigned int, int const&)
001203e0 S unsigned int* std::fill_n<unsigned int*, unsigned int, unsigned int>(unsigned int*, unsigned int, unsigned int const&)
00120322 S short* std::fill_n<short*, unsigned int, short>(short*, unsigned int, short const&)
0012d94a S unsigned short* std::fill_n<unsigned short*, unsigned int, unsigned short>(unsigned short*, unsigned int, unsigned short const&)
00112bf6 S void std::__reverse<__gnu_cxx::__normal_iterator<char*, std::basic_string<char, std::char_traits<char>, std::allocator<char> > > >(__gnu_cxx::__normal_iterator<char*, std::basic_string<char, std::char_traits<char>, std::allocator<char> > >, __gnu_cxx::__normal_iterator<char*, std::basic_string<char, std::char_traits<char>, std::allocator<char> > >, std::random_access_iterator_tag)
00112bbc S __gnu_cxx::__normal_iterator<char*, std::basic_string<char, std::char_traits<char>, std::allocator<char> > > std::transform<__gnu_cxx::__normal_iterator<char*, std::basic_string<char, std::char_traits<char>, std::allocator<char> > >, __gnu_cxx::__normal_iterator<char*, std::basic_string<char, std::char_traits<char>, std::allocator<char> > >, int (*)(int)>(__gnu_cxx::__normal_iterator<char*, std::basic_string<char, std::char_traits<char>, std::allocator<char> > >, __gnu_cxx::__normal_iterator<char*, std::basic_string<char, std::char_traits<char>, std::allocator<char> > >, __gnu_cxx::__normal_iterator<char*, std::basic_string<char, std::char_traits<char>, std::allocator<char> > >, int (*)(int))

00198740 S typeinfo for std::invalid_argument
00192cac S typeinfo name for std::invalid_argument
001993e0 S vtable for std::invalid_argument
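One way to hunt for a clash like this (a hedged sketch; the library names below are placeholders, not the actual casa libraries) is to diff the demangled lists of symbols each library defines:

```shell
# Dump the globally *defined* symbols (types T and S) of each library,
# demangle them with c++filt, and sort for comparison.
nm -g libcasa_suspect.dylib | c++filt | sed -n 's/^[0-9a-f]* [TS] //p' | sort -u > casa.syms
nm -g libmpi_cxx.dylib      | c++filt | sed -n 's/^[0-9a-f]* [TS] //p' | sort -u > mpi.syms

# Symbols defined in BOTH libraries are candidate clashes:
comm -12 casa.syms mpi.syms
```

Anything `comm -12` prints is defined in both images and is a candidate for the two-level-namespace/flat-namespace trouble macOS is known for.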


We're all using the standard compilers of OS X:

$ mpicxx -v
Using built-in specs.
Target: i686-apple-darwin8
Configured with: /private/var/tmp/gcc/gcc-5367.obj~1/src/configure
  --disable-checking -enable-werror --prefix=/usr --mandir=/share/man
  --enable-languages=c,objc,c++,obj-c++
  --program-transform-name=/^[cg][^.-]*$/s/$/-4.0/
  --with-gxx-include-dir=/include/c++/4.0.0 --with-slibdir=/usr/lib
  --build=powerpc-apple-darwin8 --with-arch=nocona --with-tune=generic
  --program-prefix= --host=i686-apple-darwin8 --target=i686-apple-darwin8

Thread model: posix
gcc version 4.0.1 (Apple Computer, Inc. build 5367)

Tim



On 12/07/2007, at 7:57 AM, Brian Barrett wrote:


That's unexpected.  If you run the command 'ompi_info --all', it
should list (towards the top) things like the Bindir and Libdir.  Can
you see if those have sane values?  If they do, can you try running a
simple "hello, world"-type MPI application (there's one in the OMPI
tarball)?  It almost looks like memory is getting corrupted, which
would be very unexpected that early in the process.  I'm unable to
duplicate the problem with 1.2.3 on my Mac Pro, making it all the
more strange.

Another random thought -- Which compilers did you use to build Open  
MPI?


Brian


On Jul 11, 2007, at 1:27 PM, Tim Cornwell wrote:



                Open MPI: 1.2.3
   Open MPI SVN revision: r15136
                Open RTE: 1.2.3
   Open RTE SVN revision: r15136
                    OPAL: 1.2.3
       OPAL SVN revision: r15136
                  Prefix: /usr/local
 Configured architecture: i386-apple-darwin8.10.1

Hi Brian,

1.2.3 downloaded and built from source.

Tim

On 12/07/2007, at 12:50 AM, Brian Barrett wrote:


Which version of Open MPI are you using?

Thanks,

Brian

On Jul 11, 2007, at 3:32 AM, Tim Cornwell wrote:



I have a problem running openmpi under OS 10.4.10. My program runs
fine under Debian x86_64 on an Opteron, but under OS X on a number
of MacBooks and MacBook Pros I get the following immediately on
startup. This smells like a common problem, but I couldn't find
anything relevant anywhere. Can anyone provide a hint, or better yet
a solution?

Thanks,

Tim


Program received signal EXC_BAD_ACCESS, Could not access memory.
Reason: KERN_PROTECTION_FAILURE at address: 0x000c
0x04510412 in free ()
(gdb) where
#0  0x04510412 in free ()
#1  0x05d24f80 in opal_install_dirs_expand (input=0x5d2a6b0 "${prefix}") at base/installdirs_base_expand.c:67
#2  0x05d24584 in opal_installdirs_base_open () at base/installdirs_base_components.c:94
#3  0x05d01a40 in opal_init_util () at runtime/opal_init.c:150
#4  0x05d01b24 in opal_init () at runtime/opal_init.c:200
#5  0x051fa5cd in ompi_mpi_init (argc=1, argv=0xbfffde74

[OMPI users] Windows Build

2007-07-12 Thread Tisham Dhar
Hi all,
 
I am looking for Visual Studio or MinGW build files for Open MPI;
anyone who has these working, please let me know.
 
Regards,
Tishampati Dhar

 Software Developer

 APOGEE IMAGING INTERNATIONAL

 Building 12B

 1 Adelaide - Lobethal Road 

 Lobethal SA 5241 

  Telephone: +61 - 8 - 8389 5499

 Fax: +61 - 8 - 8389 5488

 Mobile: +61 - 406114165

 ~~

"The information in this e-mail may be confidential and/or commercially
privileged. It is intended solely for the addressee. Access to this
e-mail by anyone else is unauthorised. If you are not the
intended recipient, any disclosure, copying, distribution or action taken
or omitted to be taken in reliance on it, is prohibited and may be
unlawful."

 


[OMPI users] read send buffer before a send operation completes

2007-07-12 Thread Isaac Huang

The standard prohibits reading a send buffer before the send
operation completes, and I understand the theoretical rationale behind
it.

I'm currently layering a protocol stack on top of MPI, and this
protocol allows a buffer to be read by multiple peers concurrently.
Thus, for strict conformance, I can either serialize read access to such
buffers or make memory copies; neither approach is optimal. I'm
wondering whether there is any practical exploitation of this
restriction: is there an Open MPI BTL driver (or whatever it is
called in the Open MPI architecture) that really unmaps pages that are
totally contained inside the communication buffer from the application
address space while doing DMA?

If the answer is none or very rare, then I'll perhaps just dismiss it
as a red herring and a big headache can be avoided.
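To make the restriction concrete, the pattern in question looks roughly like this (a hedged sketch, not taken from Isaac's stack; the buffer contents and ranks are hypothetical):

```c
#include <mpi.h>
#include <stdio.h>

int main(int argc, char **argv)
{
    int rank, buf[4] = {1, 2, 3, 4};
    MPI_Request req;

    MPI_Init(&argc, &argv);
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);

    if (rank == 0) {
        MPI_Isend(buf, 4, MPI_INT, 1, 0, MPI_COMM_WORLD, &req);
        /* Reading buf here -- after MPI_Isend but before MPI_Wait --
         * is what the standard forbids, even though nothing writes
         * to the buffer. */
        printf("buf[0] = %d\n", buf[0]);
        MPI_Wait(&req, MPI_STATUS_IGNORE);
    } else if (rank == 1) {
        MPI_Recv(buf, 4, MPI_INT, 0, 0, MPI_COMM_WORLD,
                 MPI_STATUS_IGNORE);
    }

    MPI_Finalize();
    return 0;
}
```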

Thanks,
Isaac

--

Regards, Isaac
()  ascii ribbon campaign - against html e-mail
/\- against microsoft attachments


Re: [OMPI users] mpi with icc, icpc and ifort :: segfault (Jeff Squyres)

2007-07-12 Thread Ricardo Reis

On Wed, 11 Jul 2007, Jeff Squyres wrote:


LAM uses C++ for the laminfo command and its wrapper compilers (mpicc
and friends).  Did you use those successfully?


yes, no problem.

attached: output from laminfo -all and strace laminfo

 greets,

 Ricardo Reis

 'Non Serviam'

 PhD student @ Lasef
 Computational Fluid Dynamics, High Performance Computing, Turbulence
 

 &

 Cultural Instigator @ Rádio Zero
 http://radio.ist.utl.pt

laminfostrace.bz2
Description: Binary data


laminfoall.bz2
Description: Binary data


Re: [OMPI users] Windows Build

2007-07-12 Thread Shiqing Fan

Hello Tisham,

Currently the Open MPI implementation for Windows is still not
finalized; HLRS and UTK are working together on this project. We have
made solution and project files for Visual Studio 2005, but they need
more testing and modification.


If you need more information about this, please just contact us.

Regards,
Shiqing Fan


Hi all,
 
I am looking for Visual Studio or MinGW build files for Open MPI;
anyone who has these working, please let me know.
 
Regards,


Tishampati Dhar

 Software Developer

 APOGEE IMAGING INTERNATIONAL

 Building 12B

 1 Adelaide - Lobethal Road

 Lobethal SA 5241

  Telephone: +61 - 8 - 8389 5499

 Fax: +61 - 8 - 8389 5488

 Mobile: +61 - 406114165

 ~~

"The information in this e-mail may be confidential and/or 
commercially privileged. It is intended solely for the addressee. 
Access to this e-mail by anyone else is unauthorised. If you are not
the intended recipient, any disclosure, copying, distribution or
action taken or omitted to be taken in reliance on it, is prohibited
and may be unlawful."


 



___
users mailing list
us...@open-mpi.org
http://www.open-mpi.org/mailman/listinfo.cgi/users

Re: [OMPI users] read send buffer before a send operation completes

2007-07-12 Thread Jeff Squyres
At the moment, I believe that this should be safe in Open MPI (read a  
buffer that is currently being sent).  No claims about the future,  
though.  :-)



On Jul 12, 2007, at 1:07 AM, Isaac Huang wrote:


The standard prohibits reading a send buffer before the send
operation completes, and I understand the theoretical rationale behind
it.

I'm currently layering a protocol stack on top of MPI, and this
protocol allows a buffer to be read by multiple peers concurrently.
Thus, for strict conformance, I can either serialize read access to such
buffers or make memory copies; neither approach is optimal. I'm
wondering whether there is any practical exploitation of this
restriction: is there an Open MPI BTL driver (or whatever it is
called in the Open MPI architecture) that really unmaps pages that are
totally contained inside the communication buffer from the application
address space while doing DMA?

If the answer is none or very rare, then I'll perhaps just dismiss it
as a red herring and a big headache can be avoided.

Thanks,
Isaac

--

Regards, Isaac
()  ascii ribbon campaign - against html e-mail
/\- against microsoft attachments



--
Jeff Squyres
Cisco Systems



Re: [OMPI users] mpi with icc, icpc and ifort :: segfault (Jeff Squyres)

2007-07-12 Thread Jeff Squyres

I admit to being baffled.  :-(

If general C++ applications seem to be working with icc/icpc, I do  
not know why OMPI would fail for you with icc/icpc (especially while  
accessing stack memory).  What version of icc/icpc are you using?   
There were some bugs in the 8.x series that caused problems, IIRC...


Do the intel compilers come with any error checking tools to give  
more diagnostics?




On Jul 12, 2007, at 3:28 AM, Ricardo Reis wrote:


On Wed, 11 Jul 2007, Jeff Squyres wrote:


LAM uses C++ for the laminfo command and its wrapper compilers (mpicc
and friends).  Did you use those successfully?


yes, no problem.

attached: output from laminfo -all and strace laminfo

 greets,

 Ricardo Reis

 'Non Serviam'

 PhD student @ Lasef
 Computational Fluid Dynamics, High Performance Computing, Turbulence
 

 &

 Cultural Instigator @ Rádio Zero
 http://radio.ist.utl.pt





--
Jeff Squyres
Cisco Systems




[OMPI users] Correction to FAQ: How do I build BLACS with Open MPI?

2007-07-12 Thread Michael
In the FAQ, section labeled:


12. How do I build BLACS with Open MPI?

INTFACE = -Df77IsF2C

That INTFACE value is only for G77, G95, and related compilers.

For the Intel Fortran compiler it is: -DAdd_


I have successfully built the combination of OpenMPI 1.2.3, ATLAS,  
BLACS, ScalaPack, and MUMPS using the Intel Fortran compiler on two  
different Debian Linux systems (3.0r3 on AMD Opterons and 4.0r0 on  
Intel Woodcrest/MacPro).


Michael



Re: [OMPI users] Correction to FAQ: How do I build BLACS with Open MPI?

2007-07-12 Thread Jeff Squyres

On Jul 12, 2007, at 2:28 PM, Michael wrote:


In the FAQ, section labeled:

12. How do I build BLACS with Open MPI?

INTFACE = -Df77IsF2C

That INTFACE value is only for G77, G95, and related compilers.



For the Intel Fortran compiler it is: -DAdd_


Really?  I always thought that this flag discussed how to convert F77
MPI handles to C handles (some MPI implementations use integers for
MPI handles in C, so there's no conversion necessary, but LAM and
Open MPI use pointers, so using the MPI_*_f2c() functions is
necessary).  Hence, it's not specific to a given Fortran compiler.


But I could be completely misunderstanding this value...

UTK: can you confirm/deny both of these values?  (I do not claim to  
be a BLACS expert...)
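For what it's worth, the MPI_*_f2c() conversion described above looks like this in C (a sketch; the helper function is hypothetical):

```c
#include <mpi.h>

/* Called from Fortran, which passes communicators as integer
 * (MPI_Fint) handles. */
void my_c_helper(MPI_Fint f_comm)
{
    /* In Open MPI (and LAM), MPI_Comm is a pointer type in C, so
     * the integer Fortran handle must be converted explicitly. */
    MPI_Comm comm = MPI_Comm_f2c(f_comm);

    int size;
    MPI_Comm_size(comm, &size);
}
```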



I have successfully built the combination of OpenMPI 1.2.3, ATLAS,
BLACS, ScalaPack, and MUMPS using the Intel Fortran compiler on two
different Debian Linux systems (3.0r3 on AMD Opterons and 4.0r0 on
Intel Woodcrest/MacPro).

Michael




--
Jeff Squyres
Cisco Systems



Re: [OMPI users] Can't get TotalView to find main program

2007-07-12 Thread Dennis McRitchie
Thanks for the reply Jeff.

Yes, I did compile my test app with -g, but unfortunately, our rpm build
process stripped the symbols from orterun, so that turned out to be the
culprit. Once we fixed that and used openmpi-totalview.tcl to start
things up, TotalView debugging started working.

Unfortunately, I still can't get the TotalView message queue feature to
work. The option is greyed out, probably because I got the following
error, once for every process:

In process mpirun.N: Failed to find the global symbol
MPID_recvs

where uname_test.intel is my test app, and N is the process' rank. Note
that I get the same error whether I built openmpi and my test app with
the Intel compiler or the gcc compiler.

In looking in /ompi/debuggers, I see that the error is
coming out of ompi_dll.c, and it is caused by not finding either
"mca_pml_base_send_requests" or "mca_pml_base_recv_requests" in the
image. I presume that the image in question is either orterun or my test
app, and if I run the strings command against them, unsurprisingly I do
not find either of these strings.

But if I compile the same test app against the MPICH library, I *can*
use TotalView's message queue feature with it. So I think the problem is
not with the test app itself.

Is there anything I need to do to enable the viewing of message queues
with TV when using openmpi 1.2.3?

Thanks,
   Dennis

-Original Message-
From: users-boun...@open-mpi.org [mailto:users-boun...@open-mpi.org] On
Behalf Of Jeff Squyres
Sent: Monday, July 09, 2007 10:06 AM
To: Open MPI Users
Subject: Re: [OMPI users] Can't get TotalView to find main program

On Jul 5, 2007, at 4:02 PM, Dennis McRitchie wrote:

> Any idea why the main program can't be found when running under 
> mpirun?

Just to be sure: you compiled your test MPI application with -g, right?

> Does openmpi need to be built with either --enable-debug or 
> --enable-mem-debug? The "configure --help" says the former is not for 
> general MPI users. Unclear about the latter.

No, both of those should be just for OMPI developers; you should not
need them for user installations.  Indeed, OMPI should build itself with
-g as relevant for TV support (i.e., use -g to compile the relevant .c
files in libmpi); you shouldn't need to build OMPI itself with -g.
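In concrete terms, the sequence above amounts to something like this (a hedged sketch; the file names are examples, and the classic TotalView launch syntax is assumed):

```shell
# Compile the test MPI application with debugging symbols:
mpicc -g hello.c -o hello

# Launch mpirun under TotalView; arguments after -a go to mpirun:
totalview mpirun -a -np 2 ./hello
```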

--
Jeff Squyres
Cisco Systems




Re: [OMPI users] Can't get TotalView to find main program

2007-07-12 Thread George Bosilca

Dennis,

The message queue feature is not yet available in 1.2.3. One
should use the latest version (from the svn trunk or from the nightly
builds) in order to get it. I'll make sure it gets included in 1.2.4
if we release it before 1.3.


  Thanks,
george.

On Jul 12, 2007, at 3:34 PM, Dennis McRitchie wrote:


Thanks for the reply Jeff.

Yes, I did compile my test app with -g, but unfortunately, our rpm  
build
process stripped the symbols from orterun, so that turned out to be  
the

culprit. Once we fixed that and used openmpi-totalview.tcl to start
things up, TotalView debugging started working.

Unfortunately, I still can't get the TotalView message queue  
feature to

work. The option is greyed out, probably because I got the following
error, once for every process:

In process mpirun.N: Failed to find the global  
symbol

MPID_recvs

where uname_test.intel is my test app, and N is the process' rank.  
Note

that I get the same error whether I built openmpi and my test app with
the Intel compiler or the gcc compiler.

In looking in /ompi/debuggers, I see that the  
error is

coming out of ompi_dll.c, and it is caused by not finding either
"mca_pml_base_send_requests" or "mca_pml_base_recv_requests" in the
image. I presume that the image in question is either orterun or my  
test
app, and if I run the strings command against them, unsurprisingly  
I do

not find either of these strings.

But if I compile the same test app against the MPICH library, I *can*
use TotalView's message queue feature with it. So I think the  
problem is

not with the test app itself.

Is there anything I need to do to enable the viewing of message queues
with TV when using openmpi 1.2.3?

Thanks,
   Dennis

-Original Message-
From: users-boun...@open-mpi.org [mailto:users-bounces@open- 
mpi.org] On

Behalf Of Jeff Squyres
Sent: Monday, July 09, 2007 10:06 AM
To: Open MPI Users
Subject: Re: [OMPI users] Can't get TotalView to find main program

On Jul 5, 2007, at 4:02 PM, Dennis McRitchie wrote:


Any idea why the main program can't be found when running under
mpirun?


Just to be sure: you compiled your test MPI application with -g,  
right?



Does openmpi need to be built with either --enable-debug or
--enable-mem-debug? The "configure --help" says the former is not for
general MPI users. Unclear about the latter.


No, both of those should be just for OMPI developers; you should not
need them for user installations.  Indeed, OMPI should build itself  
with

-g as relevant for TV support (i.e., use -g to compile the relevant .c
files in libmpi); you shouldn't need to build OMPI itself with -g.

--
Jeff Squyres
Cisco Systems





Re: [OMPI users] Correction to FAQ: How do I build BLACS with Open MPI?

2007-07-12 Thread George Bosilca
The INTFACE is for the namespace interface, in order to allow the
Fortran code to call a C function. So it should be dependent on the
compiler. Btw, for some reason I was quite sure we generate all 4
versions of the Fortran interface ... If this is true, it doesn't
really matter what you have in INTFACE.


The option Jeff is referring to is the TRANSCOMM define. It allows
BLACS to know how to convert between Fortran and C handles. For Open
MPI this should be set to -DUseMpi2.
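Putting the two settings together, the relevant lines of a BLACS Bmake.inc for Open MPI built with the Intel Fortran compiler would look like this (a sketch assembled from the values given in this thread; check them against your BLACS version):

```make
# Fortran-to-C name-mangling convention; compiler-dependent:
#   -DAdd_     for compilers that append an underscore (e.g. ifort)
#   -Df77IsF2C for g77/g95-style f2c-compatible mangling
INTFACE   = -DAdd_

# How BLACS converts Fortran MPI handles to C handles; Open MPI
# provides the MPI-2 conversion functions:
TRANSCOMM = -DUseMpi2
```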


  Thanks,
george.

On Jul 12, 2007, at 2:41 PM, Jeff Squyres wrote:


On Jul 12, 2007, at 2:28 PM, Michael wrote:


In the FAQ, section labeled:

12. How do I build BLACS with Open MPI?

INTFACE = -Df77IsF2C

That INTFACE value is only for G77, G95, and related compilers.



For the Intel Fortran compiler it is: -DAdd_


Really?  I always thought that this flag discussed how to convert F77
MPI handles to C handles (some MPI implementations use integers for
MPI handles in C, so there's no conversion necessary, but LAM and
Open MPI use pointers, so using the MPI_*_f2c() functions is
necessary).  Hence, it's not specific to a given Fortran compiler.

But I could be completely misunderstanding this value...

UTK: can you confirm/deny both of these values?  (I do not claim to
be a BLACS expert...)


I have successfully built the combination of OpenMPI 1.2.3, ATLAS,
BLACS, ScalaPack, and MUMPS using the Intel Fortran compiler on two
different Debian Linux systems (3.0r3 on AMD Opterons and 4.0r0 on
Intel Woodcrest/MacPro).

Michael




--
Jeff Squyres
Cisco Systems





Re: [OMPI users] Can't get TotalView to find main program

2007-07-12 Thread Dennis McRitchie
Thanks George.

That will be very helpful.

Dennis 

-Original Message-
From: users-boun...@open-mpi.org [mailto:users-boun...@open-mpi.org] On
Behalf Of George Bosilca
Sent: Thursday, July 12, 2007 4:35 PM
To: Open MPI Users
Subject: Re: [OMPI users] Can't get TotalView to find main program

Dennis,

The message queue feature is not yet available in 1.2.3. One should
use the latest version (from the svn trunk or from the nightly
builds) in order to get it. I'll make sure it gets included in 1.2.4 if
we release it before 1.3.

   Thanks,
 george.

On Jul 12, 2007, at 3:34 PM, Dennis McRitchie wrote:

> Thanks for the reply Jeff.
>
> Yes, I did compile my test app with -g, but unfortunately, our rpm 
> build process stripped the symbols from orterun, so that turned out to

> be the culprit. Once we fixed that and used openmpi-totalview.tcl to 
> start things up, TotalView debugging started working.
>
> Unfortunately, I still can't get the TotalView message queue feature 
> to work. The option is greyed out, probably because I got the 
> following error, once for every process:
>
> In process mpirun.N: Failed to find the global 
> symbol MPID_recvs
>
> where uname_test.intel is my test app, and N is the process' rank.  
> Note
> that I get the same error whether I built openmpi and my test app with

> the Intel compiler or the gcc compiler.
>
> In looking in /ompi/debuggers, I see that the error 
> is coming out of ompi_dll.c, and it is caused by not finding either 
> "mca_pml_base_send_requests" or "mca_pml_base_recv_requests" in the 
> image. I presume that the image in question is either orterun or my 
> test app, and if I run the strings command against them, 
> unsurprisingly I do not find either of these strings.
>
> But if I compile the same test app against the MPICH library, I *can* 
> use TotalView's message queue feature with it. So I think the problem 
> is not with the test app itself.
>
> Is there anything I need to do to enable the viewing of message queues

> with TV when using openmpi 1.2.3?
>
> Thanks,
>Dennis
>
> -Original Message-
> From: users-boun...@open-mpi.org [mailto:users-bounces@open- mpi.org] 
> On Behalf Of Jeff Squyres
> Sent: Monday, July 09, 2007 10:06 AM
> To: Open MPI Users
> Subject: Re: [OMPI users] Can't get TotalView to find main program
>
> On Jul 5, 2007, at 4:02 PM, Dennis McRitchie wrote:
>
>> Any idea why the main program can't be found when running under 
>> mpirun?
>
> Just to be sure: you compiled your test MPI application with -g, 
> right?
>
>> Does openmpi need to be built with either --enable-debug or 
>> --enable-mem-debug? The "configure --help" says the former is not for

>> general MPI users. Unclear about the latter.
>
> No, both of those should be just for OMPI developers; you should not 
> need them for user installations.  Indeed, OMPI should build itself 
> with -g as relevant for TV support (i.e., use -g to compile the 
> relevant .c files in libmpi); you shouldn't need to build OMPI itself 
> with -g.
>
> --
> Jeff Squyres
> Cisco Systems
>




Re: [OMPI users] Correction to FAQ: How do I build BLACS with Open MPI?

2007-07-12 Thread Michael

On Jul 12, 2007, at 4:42 PM, George Bosilca wrote:


The INTFACE is for the namespace interface, in order to allow the
Fortran code to call a C function. So it should be dependent on the
compiler. Btw, for some reason I was quite sure we generate all 4
versions of the Fortran interface ... If this is true, it doesn't
really matter what you have in INTFACE.


It would, except this flag not only affects the names BLACS uses to
link to Open MPI but also which interfaces it generates (based on my
experience), which in turn affects, for example, what happens when you
build ScaLAPACK.  I believe that is what I was seeing when building
those three with the Intel compiler and g95; the latter was harder
than expected.



The option Jeff is referring to is the TRANSCOMM define. It allows
BLACS to know how to convert between Fortran and C handles. For Open
MPI this should be set to -DUseMpi2.


Fortunately this is documented in the web FAQ, though not in the BLACS
documentation.


Michael


   Thanks,
 george.

On Jul 12, 2007, at 2:41 PM, Jeff Squyres wrote:


On Jul 12, 2007, at 2:28 PM, Michael wrote:


In the FAQ, section labeled:

12. How do I build BLACS with Open MPI?

INTFACE = -Df77IsF2C

That INTFACE value is only for G77, G95, and related compilers.



For the Intel Fortran compiler it is: -DAdd_


Really?  I always thought that this flag discussed how to convert F77
MPI handles to C handles (some MPI implementations use integers for
MPI handles in C, so there's no conversion necessary, but LAM and
Open MPI use pointers, so using the MPI_*_f2c() functions is
necessary).  Hence, it's not specific to a given Fortran compiler.

But I could be completely misunderstanding this value...

UTK: can you confirm/deny both of these values?  (I do not claim to
be a BLACS expert...)


I have successfully built the combination of OpenMPI 1.2.3, ATLAS,
BLACS, ScalaPack, and MUMPS using the Intel Fortran compiler on two
different Debian Linux systems (3.0r3 on AMD Opterons and 4.0r0 on
Intel Woodcrest/MacPro).

Michael




--
Jeff Squyres
Cisco Systems


