Hi Josh,
The ring_c example does not work on our login node:
[mboisson@helios-login1 examples]$ mpiexec -np 10 ring_c
[mboisson@helios-login1 examples]$ echo $?
65
[mboisson@helios-login1 examples]$ echo $LD_LIBRARY_PATH
/software-gpu/mpi/openmpi/1.8.2rc4_gcc4.8_cuda6.0.37/lib:/usr/lib64/nvidia:
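For reference, ring_c is just the standard token-ring test: rank 0 injects a trip counter, every rank forwards it to the next rank, and rank 0 decrements it on each lap. A minimal sketch of that pattern (it mirrors the shipped example, but is not its exact source):

/* Minimal token-ring sketch: rank 0 injects a trip counter, each rank
 * forwards it to (rank + 1) % size, rank 0 decrements it per lap. */
#include <stdio.h>
#include <mpi.h>

int main(int argc, char *argv[])
{
    int rank, size, next, prev, message;

    MPI_Init(&argc, &argv);
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);
    MPI_Comm_size(MPI_COMM_WORLD, &size);

    next = (rank + 1) % size;
    prev = (rank + size - 1) % size;

    if (rank == 0) {
        message = 10;                 /* number of laps around the ring */
        printf("Process 0 sending %d to %d\n", message, next);
        MPI_Send(&message, 1, MPI_INT, next, 0, MPI_COMM_WORLD);
    }

    while (1) {
        MPI_Recv(&message, 1, MPI_INT, prev, 0, MPI_COMM_WORLD,
                 MPI_STATUS_IGNORE);
        if (rank == 0)
            message--;                /* one lap completed */
        MPI_Send(&message, 1, MPI_INT, next, 0, MPI_COMM_WORLD);
        if (message == 0)
            break;
    }

    if (rank == 0)                    /* drain the final message */
        MPI_Recv(&message, 1, MPI_INT, prev, 0, MPI_COMM_WORLD,
                 MPI_STATUS_IGNORE);

    MPI_Finalize();
    return 0;
}

The point being: it exits with no output at all, before even rank 0's first printf, so the failure is in startup/wireup rather than in the example's message passing.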
Hi,
I solved the warning that appeared with OpenMPI 1.6.5 on the login node
by increasing the amount of registerable memory.
Now, with OpenMPI 1.6.5, it does not give any warning. Yet, with OpenMPI
1.8.1 and OpenMPI 1.8.2rc4, it still exits with error code 65 and does
not produce the normal output.
I w
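In case the comparison helps: the registered-memory warning is normally tied to the locked-memory limit (and, on Mellanox hardware, to the mlx4_core log_num_mtt / log_mtts_per_seg module parameters). Assuming that is the knob involved here, a quick sketch to print the limit so login and compute nodes can be compared:

/* Print the locked-memory limit (RLIMIT_MEMLOCK). InfiniBand memory
 * registration needs locked pages, so a small limit on the login node
 * is a common source of "registerable memory" warnings.
 * Sketch only; the real limit policy lives in limits.conf / the RM. */
#include <stdio.h>
#include <sys/resource.h>

int main(void)
{
    struct rlimit rl;

    if (getrlimit(RLIMIT_MEMLOCK, &rl) != 0) {
        perror("getrlimit");
        return 1;
    }

    if (rl.rlim_cur == RLIM_INFINITY)
        printf("memlock soft limit: unlimited\n");
    else
        printf("memlock soft limit: %llu bytes\n",
               (unsigned long long) rl.rlim_cur);
    return 0;
}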
Hi Reuti,
Yes, my installation of Open MPI is SGE-aware. I got the following:

[oscar@compute-1-2 ~]$ ompi_info | grep grid
MCA ras: gridengine (MCA v2.0, API v2.0, Component v1.6.2)

I'm a bit slow and I didn't understand the last part of your message. So I made a test trying to solve m
Hi,
On 15.08.2014 at 19:56, Oscar Mojica wrote:
> Yes, my installation of Open MPI is SGE-aware. I got the following
>
> [oscar@compute-1-2 ~]$ ompi_info | grep grid
> MCA ras: gridengine (MCA v2.0, API v2.0, Component v1.6.2)
Fine.
> I'm a bit slow and I didn't understand the last part of your message.
Here are the requested files.
In the archive, you will find the output of configure, make, and make
install, as well as the config.log, the environment when running ring_c,
and the output of ompi_info --all.
Just as a reminder: the ring_c example compiled and ran, but produced
no output and exited with error code 65.
But OMPI 1.8.x does run the ring_c program successfully on your compute
node, right? The error only happens on the front-end login node if I
understood you correctly.
Josh
On Fri, Aug 15, 2014 at 5:20 PM, Maxime Boissonneault <
maxime.boissonnea...@calculquebec.ca> wrote:
> Here are the requested files.
Correct.
Can it be because torque (pbs_mom) is not running on the head node and
mpiexec attempts to contact it?
Maxime
On 2014-08-15 17:31, Joshua Ladd wrote:
> But OMPI 1.8.x does run the ring_c program successfully on your
> compute node, right? The error only happens on the front-end login
> node if I understood you correctly.
On Aug 15, 2014, at 5:39 PM, Maxime Boissonneault wrote:
> Correct.
>
> Can it be because torque (pbs_mom) is not running on the head node and
> mpiexec attempts to contact it?
Not for Open MPI's mpiexec, no.
Open MPI's mpiexec (mpirun -- they're the same to us) will only try to use TM
stuff if it finds the environment variables that Torque sets inside a job.
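In other words, the decision is driven purely by the environment mpiexec is started in. A sketch of that kind of check (assumptions: the real logic lives in the tm plm/ras components, and PBS_ENVIRONMENT / PBS_JOBID are the variables Torque exports inside a job):

/* Sketch of how a TM-aware launcher can tell whether it is inside a
 * Torque job: Torque exports PBS_ENVIRONMENT and PBS_JOBID to job
 * processes; outside a job they are simply absent. */
#include <stdio.h>
#include <stdlib.h>

int main(void)
{
    const char *env = getenv("PBS_ENVIRONMENT");
    const char *job = getenv("PBS_JOBID");

    if (env != NULL && job != NULL)
        printf("Inside Torque job %s: TM launch is a candidate\n", job);
    else
        printf("No Torque job environment: TM is skipped and "
               "mpiexec falls back to ssh/local launch\n");
    return 0;
}

So on a login node with no pbs_mom and no job environment, those variables are absent and mpiexec never even tries to talk to Torque.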
Hi Jeff,
On 2014-08-15 17:50, Jeff Squyres (jsquyres) wrote:
> On Aug 15, 2014, at 5:39 PM, Maxime Boissonneault wrote:
>> Correct.
>>
>> Can it be because torque (pbs_mom) is not running on the head node and
>> mpiexec attempts to contact it?
> Not for Open MPI's mpiexec, no.
> Open MPI's mpiexec (mpirun -- they're the same to us) will only try to
> use TM stuff if it finds the environment variables that Torque sets
> inside a job.