Ok, I finally was able to get on and run some OFED tests - it looks to
me like I must have something configured wrong with the QLogic cards,
but I have no idea what.
Mellanox to QLogic:
ibv_rc_pingpong n15
local address: LID 0x0006, QPN 0x240049, PSN 0x87f83a, GID ::
remote address: LID
Great - thanks!
On Jul 27, 2011, at 12:16 PM, Justin Wood wrote:
I heard back from my Altair contact this morning. He told me that they
did in fact make a change in some version of 10.x that broke this. They
don't have a workaround for v10, but he said it was fixed in v11.x.
I built OpenMPI 1.5.3 this morning with PBSPro v11.0, and it works fine.
I don't
On 27.07.2011, at 19:43, Lane, William wrote:
Thank you for your help, Ralph and Reuti.
The problem turned out to be that the number of file descriptors was insufficient.
The reason given by a sys admin was that, since SGE isn't a user, it wasn't
initially using the new upper bound on the number of file descriptors.
-Bill Lane
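As a quick way to see what limit a job actually inherits, here is a minimal
sketch (not from Bill's message; it assumes only standard POSIX
getrlimit()/setrlimit()). It prints the RLIMIT_NOFILE the process was started
with and raises the soft limit toward the hard limit where that is allowed;
the real fix, as described above, is raising the limit in the daemon's
startup environment.

/* Sketch: report the open-file-descriptor limit inherited from SGE (or a
 * login shell) and raise the soft limit as far as the hard limit permits. */
#include <stdio.h>
#include <sys/resource.h>

int main(void)
{
    struct rlimit rl;

    if (getrlimit(RLIMIT_NOFILE, &rl) != 0) {
        perror("getrlimit");
        return 1;
    }
    printf("open file descriptors: soft=%llu hard=%llu\n",
           (unsigned long long)rl.rlim_cur,
           (unsigned long long)rl.rlim_max);

    /* An unprivileged process may raise its soft limit up to the hard limit;
     * raising the hard limit itself needs root or a change to the startup
     * environment of the daemon that launches the job. */
    rl.rlim_cur = rl.rlim_max;
    if (setrlimit(RLIMIT_NOFILE, &rl) != 0)
        perror("setrlimit");

    return 0;
}

Running this from a login shell and again from inside a batch job makes the
difference Bill describes visible.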
For the benefit of people running into similar problems and ending up
reading this thread, we finally found a solution.
One can use the MPI function MPI_TYPE_CREATE_HINDEXED to create an MPI
data type with a 32-bit local variable count and 64-bit offsets, which
will work well enough for us for t
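A hedged sketch of that approach (block sizes and offsets below are made up
for illustration, not taken from the thread): the block lengths passed to
MPI_Type_create_hindexed are still 32-bit ints, but the byte displacements
are MPI_Aint, so a block can start past the 2 GB mark and the whole scattered
region is then sent as a single element of the derived type. Run with at
least two ranks on a 64-bit build with enough memory.

/* Sketch: two blocks of doubles, the second starting 3 GiB into the buffer,
 * an offset that cannot be expressed as a 32-bit int. */
#include <mpi.h>
#include <stdlib.h>

int main(int argc, char **argv)
{
    MPI_Init(&argc, &argv);

    int      blocklens[2] = { 1 << 20, 1 << 20 };      /* 32-bit element counts */
    MPI_Aint displs[2]    = { 0, (MPI_Aint)3 << 30 };  /* 64-bit byte offsets   */
    size_t   bufsize = ((size_t)3 << 30) + (size_t)(1 << 20) * sizeof(double);

    MPI_Datatype bigtype;
    MPI_Type_create_hindexed(2, blocklens, displs, MPI_DOUBLE, &bigtype);
    MPI_Type_commit(&bigtype);

    int rank;
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);

    char *buf = malloc(bufsize);
    if (rank == 0)          /* one element of bigtype moves both blocks */
        MPI_Send(buf, 1, bigtype, 1, 0, MPI_COMM_WORLD);
    else if (rank == 1)
        MPI_Recv(buf, 1, bigtype, 0, 0, MPI_COMM_WORLD, MPI_STATUS_IGNORE);
    free(buf);

    MPI_Type_free(&bigtype);
    MPI_Finalize();
    return 0;
}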
Sorry to bring this back up.
We recently had an outage, updated the firmware on our GD4700, and installed a
new Mellanox-provided OFED stack, and the problem has returned.
Specifically, I am able to reproduce the problem with IMB on four 12-core nodes
when it tries to go to 16 cores. I have verified that en
Hi,
For what it's worth: we're successfully running OMPI 1.4.3 compiled with
gcc-4.1.2 along with PBS Pro 10.4.
Kind regards,
Youri LACAN-BARTLEY
-----Original Message-----
From: users-boun...@open-mpi.org [mailto:users-boun...@open-mpi.org] On behalf
of Ralph Castain
Sent: Wednesday, July 27