We have an installation with both Mellanox and Qlogic IB adaptors (in
distinct islands), so I built open-mpi 1.4.3 with openib and psm
support.
Now I've just read this in the OFED source, but I can't see any relevant
issue in the open-mpi tracker:
OpenMPI support
---
It is recom
Hello,
I came across what appears to be an error in the implementation of the
MPI_Scatterv Fortran-90 binding. I am using OpenMPI 1.4.3 on Linux.
This comes up when OpenMPI was configured with
--with-mpi-f90-size=medium or --with-mpi-f90-size=large
The standard specifies that the interface is
MPI_SCATTE
I do believe you found a bona-fide bug.
Could you try the attached patch? (I think it should only affect f90 "large"
builds.) You should be able to check it quickly via:
cd top_of_ompi_source_tree
patch -p0 < scatterv-f90.patch
cd ompi/mpi/f90
make clean
rm mpi_scatterv_f90.f90
make all install
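For readers unfamiliar with why the f90 binding matters here: MPI_Scatterv takes per-rank *arrays* of send counts and displacements, so the Fortran interface must declare them as arrays, not scalars. This is not Open MPI code, just a minimal Python sketch of the root-side bookkeeping the call performs (the helper name `scatterv` is hypothetical):

```python
# Sketch of the data movement MPI_Scatterv performs at the root.
# sendcounts and displs are per-rank arrays: rank i receives
# sendcounts[i] elements starting at offset displs[i] in sendbuf.
def scatterv(sendbuf, sendcounts, displs):
    """Return the chunk each rank would receive (root-side view)."""
    return [sendbuf[d:d + c] for c, d in zip(sendcounts, displs)]

data = list(range(10))
sendcounts = [4, 3, 2, 1]   # rank i gets sendcounts[i] elements
displs = [0, 4, 7, 9]       # offset of rank i's chunk in sendbuf
chunks = scatterv(data, sendcounts, displs)
# chunks[0] == [0, 1, 2, 3]; chunks[3] == [9]
```

The chunks may overlap or leave gaps in sendbuf; that flexibility is exactly why both arguments are arrays.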
Hi,
I am trying to get rid of the following error message when I use mpirun.
mca: base: component_find: "mca_ess_portals_utcp" does not appear to be a valid
ess MCA dynamic component (ignored):
/usr/local/lib/openmpi/mca_ess_portals_utcp.so: undefined symbol:
mca_ess_portals_utcp_component
I am
Sure - instead of what you did, just add --without-portals to your original
configure. The exact option depends on what portals you have installed.
Here is the relevant part of the "./configure -h" output:
--with-portals=DIR Specify the installation directory of PORTALS
--with-portals-l
Over IB, I'm not sure there is much of a drawback. It might be slightly slower
to establish QP's, but I don't think that matters much.
Over iWARP, rdmacm can cause connection storms as you scale to thousands of MPI
processes.
On Apr 20, 2011, at 5:03 PM, Brock Palen wrote:
> We managed to ha
Does it vary exactly according to your receive_queues specification?
On Apr 19, 2011, at 9:03 AM, Eloi Gaudry wrote:
> hello,
>
> i would like to get your input on this:
> when launching a parallel computation on 128 nodes using openib and the "-mca
> btl_openib_receive_queues P,65536,256,192,1
I believe it was mainly a startup issue -- there's a complicated sequence of
events that happens during MPI_INIT. IIRC, the issue was that if OMPI had
software support for PSM, it assumed that the lack of PSM hardware was
effectively an error.
v1.5 made the startup sequence a little more flexi
On Apr 20, 2011, at 10:44 AM, Ormiston, Scott J. wrote:
> I originally thought the configure was fine, but now that I check through the
> config.log, I see that it had errors:
>
> conftest.c(49): error #2379: cannot open source file "ac_nonexistent.h"
> #include
It's normal and expected for th
Given that part of our cluster is TCP only, openib wouldn't even startup on
those hosts and this would be ignored on hosts with IB adaptors?
Just checking thanks!
Brock Palen
www.umich.edu/~brockp
Center for Advanced Computing
bro...@umich.edu
(734)936-1985
On Apr 21, 2011, at 6:21 PM, Jeff
On Apr 21, 2011, at 4:41 PM, Brock Palen wrote:
> Given that part of our cluster is TCP only, openib wouldn't even startup on
> those hosts
That is correct - it would have no impact on those hosts
> and this would be ignored on hosts with IB adaptors?
Ummm...not sure I understand this one.
Dear all,
I am a beginner with MPI. Right now I am trying to use MPI_GATHERV in my code.
The test code just gathers the values of array A and stores them in array B,
but I get the following error:
'Fatal error in MPI_Gatherv: Invalid count, error stack:
PMPI_Gatherv(398): MPI_Gatherv failed
fail