On 10/28/13 5:22 AM, "Luis Kornblueh" wrote:
>My question to all openmpi power users and developers would be: what would
>be required to get this properly running.
>
>In case more information is required, please come back to me.
>Maybe the explanation of what we do is insufficient.
Hi,
On 28.10.2013 at 14:58, Luigi Cavallo wrote:
> we are facing problems with openmpi under SGE on a cluster equipped with
> QLogic IB HCAs. Outside of SGE, openmpi works perfectly: we can dispatch
> the job as we want, with no warning/error messages at all. If we do the same
> under SGE, even
A few thoughts occur:
1. 1.4.3 is awfully old - I would recommend you update to at least the 1.6
series if you can. We don't actively support 1.4 any more, and I don't know
what the issues might have been with PSM that long ago (see the ompi_info
check below).
2. I see that you built LSF support for some reason, or there is a st
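As a quick way to confirm both points, ompi_info reports the installed version
and the components a build actually contains (the grep patterns below are only
illustrative):

  ompi_info | grep "Open MPI:"                     # installed version
  ompi_info | grep -i -e gridengine -e lsf -e psm  # SGE, LSF and PSM support compiled in?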
Shared memory, unless you tell us not to use it.
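For example (assuming an Open MPI of that era, where the on-node shared-memory
component is the sm BTL), the transports can be selected explicitly on the
command line:

  # default: ranks on the same node communicate through the sm (shared memory) BTL
  mpirun -np 8 ./a.out
  # force everything over TCP instead, e.g. to compare against shared memory
  mpirun --mca btl self,tcp -np 8 ./a.out
  # or just exclude the shared-memory component
  mpirun --mca btl ^sm -np 8 ./a.out

The last form is the "tell us not to use it" case: the ^ prefix excludes a
component from the selection.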
On Oct 28, 2013, at 7:00 AM, Luigi Cavallo wrote:
>
> Hi,
>
> Maybe a naive question: how do openmpi processes communicate on a single blade?
> Through the network card, or some sort of shared memory?
>
> Thanks,
> Luigi
>
Hi,
Maybe a naive question: how do openmpi processes communicate on a single blade?
Through the network card, or some sort of shared memory?
Thanks,
Luigi
Hi,
we are facing problems with openmpi under SGE on a cluster equipped with QLogic
IB HCAs. Outside of SGE, openmpi works perfectly: we can dispatch the job as we
want, with no warning/error messages at all. If we do the same under SGE, even
the hello-world program crashes. The main issue is PS
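For reference, the kind of hello-world being launched here is nothing more than
the following (an illustrative sketch, not the poster's actual program),
compiled with mpicc and started with mpirun both directly and from inside the
SGE job script:

  #include <mpi.h>
  #include <stdio.h>

  int main(int argc, char **argv)
  {
      int rank, size;
      MPI_Init(&argc, &argv);
      MPI_Comm_rank(MPI_COMM_WORLD, &rank);  /* this rank's id */
      MPI_Comm_size(MPI_COMM_WORLD, &size);  /* total number of ranks */
      printf("hello from rank %d of %d\n", rank, size);
      MPI_Finalize();
      return 0;
  }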
Dear all,
we have a problem using one-sided MPI communication with OpenMPI.
The scenario is the following: we have a computational model using
exclusively p2p MPI communication calls. This runs fine and fast on a
rather new cluster with FDR IB and Intel SandyBridge Xeons. We have
around 1
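Since the report contrasts p2p and one-sided communication, here is a rough
sketch of what the one-sided style looks like at the API level (a made-up
minimal example, not code from the model described above):

  #include <mpi.h>
  #include <stdio.h>

  int main(int argc, char **argv)
  {
      MPI_Init(&argc, &argv);

      int rank, size;
      MPI_Comm_rank(MPI_COMM_WORLD, &rank);
      MPI_Comm_size(MPI_COMM_WORLD, &size);

      /* Each rank exposes one integer through an RMA window. */
      int local = rank;
      MPI_Win win;
      MPI_Win_create(&local, (MPI_Aint)sizeof(int), sizeof(int),
                     MPI_INFO_NULL, MPI_COMM_WORLD, &win);

      /* One-sided: rank 0 writes into rank 1's window without rank 1
         posting a matching receive.  The p2p equivalent would be an
         explicit MPI_Send on rank 0 matched by an MPI_Recv on rank 1. */
      MPI_Win_fence(0, win);
      if (rank == 0 && size > 1)
          MPI_Put(&local, 1, MPI_INT, 1, 0, 1, MPI_INT, win);
      MPI_Win_fence(0, win);

      printf("rank %d now holds %d\n", rank, local);

      MPI_Win_free(&win);
      MPI_Finalize();
      return 0;
  }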