Re: [OMPI users] Linking MPI applications with pgi IPA

2009-06-22 Thread Jeff Squyres
Can you compile Open MPI with the same compiler options? On Jun 19, 2009, at 2:57 PM, Brock Palen wrote: When linking applications that are compiled and linked with the -Mipa=fast,inline option, IPA stops with errors, as in this case with amber: The following function(s) are called, b
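A minimal sketch of the build pattern being discussed, assuming hypothetical source and output names (sander.f90 and the mpif90 wrapper here are illustrative, not taken from the thread):

```
# Compile and link with PGI interprocedural analysis (IPA) enabled.
# IPA needs the same -Mipa flags at both the compile and the link step,
# since the analysis is completed at link time.
mpif90 -c -Mipa=fast,inline sander.f90        # hypothetical source file
mpif90 -o sander -Mipa=fast,inline sander.o   # link step re-runs IPA
```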

Re: [OMPI users] Linking MPI applications with pgi IPA

2009-06-22 Thread Brock Palen
I will have to try this later; in theory, if the library is compiled with the options, it will have the needed information. ST (the makers of PGI) got back, and I quote: "You can try -Mipa=fast,inline,safe" "I think for now, all libraries are safe. It is more a matter of how aggressively does
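ST's suggestion from the quoted reply, sketched as a hypothetical link line (the file names are illustrative):

```
# 'safe' tells IPA it may assume library routines do not interfere with
# its whole-program analysis, per the ST reply quoted above.
mpif90 -o sander -Mipa=fast,inline,safe sander.o
```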

[OMPI users] Problem with GNU GFortran OpenMPI 1.3.0

2009-06-22 Thread Si Hammond
Hi everyone, Another little mystery for you all! I have a relatively small MPI Fortran code which I can compile successfully with Intel- and PGI-compiled OpenMPI 1.3.0 (and 1.2.5). No problems with this. On a separate machine I have a GNU gfortran 4.3 OpenMPI 1.3.0 installation and I get

Re: [OMPI users] Problem with GNU GFortran OpenMPI 1.3.0

2009-06-22 Thread Jeff Squyres
This error message indicates that MPI_Comm_f2c is being invoked before MPI_INIT. Can you see if that is happening? If it helps, you can run with the MCA parameter mpi_abort_print_stack set to 1. This will print the stack when we abort. Or run with the MCA parameter mpi_abort_delay se
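The two debugging knobs Jeff mentions can be sketched as hypothetical mpirun invocations (the executable name, process count, and delay value are illustrative):

```
# Print the stack trace when Open MPI aborts
# (e.g., on MPI_Comm_f2c being called before MPI_INIT)
mpirun -np 4 --mca mpi_abort_print_stack 1 ./myapp

# Delay the abort so a debugger can be attached to the failing process
mpirun -np 4 --mca mpi_abort_delay 30 ./myapp
```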

[OMPI users] 50% performance reduction due to OpenMPI v 1.3.2 forcing all MPI traffic over Ethernet instead of using Infiniband

2009-06-22 Thread Jim Kress ORG
For the app I am using, ORCA (a Quantum Chemistry program), when it was compiled using Open MPI 1.2.8 and run under 1.2.8 with the following in the openmpi-mca-params.conf file (btl=self,openib), the app ran fine with no traffic over my Ethernet network and all traffic over my Infiniband network. H
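The configuration Jim describes would look roughly like this; the file location varies by installation (a per-user copy conventionally lives at $HOME/.openmpi/mca-params.conf):

```
# openmpi-mca-params.conf
# Restrict the point-to-point byte-transfer layers (BTLs) to loopback and
# InfiniBand, keeping MPI traffic off the Ethernet (tcp) BTL.
btl = self,openib
```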

Re: [OMPI users] 50% performance reduction due to OpenMPI v 1.3.2 forcing all MPI traffic over Ethernet instead of using Infiniband

2009-06-22 Thread Ralph Castain
Sounds very strange, indeed. You might want to check that your app is actually getting the MCA param that you think it is. Try adding: -mca mpi_show_mca_params file,env to your command line. This will cause rank 0 to output the MCA params it thinks were set via the default files and/or environme
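Ralph's suggested flag, sketched on a hypothetical command line (the application name and process count are illustrative):

```
# Have rank 0 report which MCA params it believes were set via the
# default param files and the environment
mpirun -np 4 -mca mpi_show_mca_params file,env ./myapp
```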

Re: [OMPI users] 50% performance reduction due to OpenMPI v 1.3.2 forcing all MPI traffic over Ethernet instead of using Infiniband

2009-06-22 Thread Jim Kress ORG
Thanks for the advice. Unfortunately, the command line is internally generated by the app and then invoked, so I can't see it. But it doesn't matter anyway. It seems the Ethernet utilization "problem" I thought I had does not exist. So I'm still looking for why my app using 1.2.8 is 50% faster

Re: [OMPI users] 50% performance reduction due to OpenMPI v 1.3.2 forcing all MPI traffic over Ethernet instead of using Infiniband

2009-06-22 Thread Jim Kress ORG
Is there an environment variable (or variables) I can set to do the equivalent? Jim On Mon, 2009-06-22 at 19:40 -0600, Ralph Castain wrote: > Sounds very strange, indeed. You might want to check that your app is > actually getting the MCA param that you think it is. Try adding: > > -mca mpi_

Re: [OMPI users] 50% performance reduction due to OpenMPI v 1.3.2 forcing all MPI traffic over Ethernet instead of using Infiniband

2009-06-22 Thread Don Kerr
On 06/22/09 22:36, Jim Kress ORG wrote: Is there an environment variable (or variables) I can set to do the equivalent? Use OMPI_MCA_mpi_show_mca_params; see: http://www.open-mpi.org/faq/?category=tuning#setting-mca-params Jim On Mon, 2009-06-22 at 19:40 -0600, Ralph Castain wrote: Sounds
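Don's answer in shell form: any MCA parameter can be set through the environment by prefixing its name with OMPI_MCA_ (a minimal sketch):

```shell
# Equivalent to passing '-mca mpi_show_mca_params file,env' on the mpirun
# command line; Open MPI reads OMPI_MCA_-prefixed variables at startup.
export OMPI_MCA_mpi_show_mca_params="file,env"
echo "$OMPI_MCA_mpi_show_mca_params"
```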