On Jun 28, 2012, at 8:37 PM, David Warren wrote:
> You should not have to recompile openmpi, but you do have to use the correct
> type. You can check the size of integers in your Fortran and use MPI_INTEGER4
> or MPI_INTEGER8 depending on what you get.
If you configure ompi with -fdefault-integer-8, MPI_INTEGER itself becomes 8
bytes and matches your Fortran default integer.
On Jun 28, 2012, at 8:04 PM, Yong Qin wrote:
> Thanks to Jeff, we now have a bug registered with the segv issue.
There may be some confusion here with the fact that OMPI supports 2 different
MX transports: an MTL and a BTL. Here's what the README says:
- Myrinet MX (and Open-MX) support
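Since the same MX hardware can be driven either way, the choice between the MTL and the BTL is made with MCA parameters. A hedged sketch (parameter names as commonly documented for this era of Open MPI; check `ompi_info` against your own build):

```conf
# $HOME/.openmpi/mca-params.conf -- or pass each line via "mpirun --mca ..."
# Use the MX MTL (matching transport layer):
pml = cm
mtl = mx
# Or, alternatively, use the MX BTL (byte transfer layer):
# pml = ob1
# btl = mx,sm,self
```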
Yes, PSM is the native transport for InfiniPath. It is faster than the
InfiniBand verbs support on the same hardware.
What version of Open MPI are you using?
On Jun 28, 2012, at 10:03 PM, Sébastien Boisvert wrote:
> Hello,
>
> I am getting random crashes (segmentation faults) on a supercomputer.
I am using Open-MPI 1.4.3 compiled with gcc 4.5.3.
The library:
/usr/lib64/libpsm_infinipath.so.1.14: ELF 64-bit LSB shared object, AMD
x86-64, version 1 (SYSV), not stripped
Jeff Squyres wrote:
> Yes, PSM is the native transport for InfiniPath. It is faster than the
> InfiniBand verbs support on the same hardware.
The Open MPI 1.4 series is now deprecated. Can you upgrade to Open MPI 1.6?
On Jun 29, 2012, at 9:02 AM, Sébastien Boisvert wrote:
> I am using Open-MPI 1.4.3 compiled with gcc 4.5.3.
>
> The library:
>
> /usr/lib64/libpsm_infinipath.so.1.14: ELF 64-bit LSB shared object, AMD
> x86-64, version 1 (SYSV), not stripped
Hi,
Recompiling OpenMPI with
./configure FCFLAGS=-fdefault-integer-8 FFLAGS=-fdefault-integer-8 \
--with-wrapper-fflags=-fdefault-integer-8 \
--with-wrapper-fcflags=-fdefault-integer-8
is the easy way to go (at least for me). Changes to the Fortran code are
minimal. Be aware that the und
Hi,
Thank you for the direction.
I installed Open-MPI 1.6 and the program is also crashing with 1.6.
Could there be a bug in my code?
I don't see how disabling PSM would make the bug go away if the bug
is in my code.
Open-MPI configure command
module load gcc/4.5.3
./configure \
--prefix=
Hi Sebastien,
The Infinipath / PSM software that was developed by PathScale/QLogic is now
part of Intel.
I'll advise you off-list about how to contact our customer support so we can
gather information about your software installation and work to resolve your
issue.
The 20 microseconds latency
Thanks Jeff for the doc. However I'm not sure if I understand your
following comment correctly. If I remove the MX BTL plugins, a.k.a.,
mca_btl_mx.la and mca_btl_mx.so, I'm now getting errors of these
components not found.
[n0026.hbar:09467] mca: base: component_find: unable to open
.../mca_btl_mx
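As an aside, a component can also be disabled without deleting its plugin files, via an MCA parameter; a sketch (the ^ prefix excludes the named component):

```conf
# $HOME/.openmpi/mca-params.conf -- equivalent to: mpirun --mca btl ^mx ...
# Exclude the MX BTL; every other BTL stays eligible.
btl = ^mx
```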
On Jun 29, 2012, at 2:14 PM, Yong Qin wrote:
> Thanks Jeff for the doc. However I'm not sure if I understand your
> following comment correctly. If I remove the MX BTL plugins, a.k.a.,
> mca_btl_mx.la and mca_btl_mx.so, I'm now getting errors of these
> components not found.
>
> [n0026.hbar:09467
Hello,
The latency of 20 microseconds is for 4000-byte messages
going from MPI rank A to MPI rank B and then back to MPI rank A.
For a one-way trip, it is 10 microseconds.
And the latency for 1-byte messages
from MPI rank A to MPI rank B is already below 3 microseconds.
I will contact you off-list.
I'm confused now :).
I thought that's what I did, removing "mca_btl_mx.la" and
"mca_btl_mx.so". This is the MX BTL plugin, right?
On Fri, Jun 29, 2012 at 11:16 AM, Jeff Squyres wrote:
> On Jun 29, 2012, at 2:14 PM, Yong Qin wrote:
>
>> Thanks Jeff for the doc. However I'm not sure if I understan
Apparently, I can't read -- I'm sorry; you did say exactly the right thing and
my eyes parsed it wrong.
Yes, you did the right thing by removing mca_btl_mx.*.
I can't imagine why you're getting those errors -- there should be nothing else
in OMPI that refers to mca_btl_mx.*.
*** George -- an
My concern is: how does the C side know that the Fortran integer is 8 bytes?
My valgrind check shows something like:
==8482== Invalid read of size 8
==8482==    at 0x5F4A50E: ompi_op_base_minloc_2integer (op_base_functions.c:631)
==8482==    by 0xBF70DD1: ompi_coll_tuned_allreduce_intra_recursivedoubling