Hi,
Okay, let's reboot, even though Gilles's last mail was onto something.
The problem is that I failed to start programs with mpirun when more
than one node was involved. I mentioned that it is likely some
configuration problem with my server, especially authentication (we
have some Kerberos ni…
I’m pruning this email thread so I can actually read the blasted thing :-)
Guys: you are off in the wilderness chasing ghosts! Please stop.
When I say that Torque uses an “ordered” file, I am _not_ saying that all the
host entries of the same name have to be listed consecutively. I am saying that…
Oswin,
One more thing: can you run
pbsdsh -v hostname
before invoking mpirun?
Hopefully this should print the three hostnames
Then you can
ldd `which pbsdsh`
And see which libtorque.so is linked with it
Cheers,
Gilles
Oswin Krause wrote:
>Hi Gilles,
>
>There you go:
>
>[zbh251@a00551 ~]$ cat $PBS_NODEFILE
Oswin,
So it seems that Open MPI thinks it tm_spawn()s orted on the remote nodes, but
orted ends up running on the same node as mpirun.
On your compute nodes, can you
ldd /.../lib/openmpi/mca_plm_tm.so
And confirm it is linked with the same libtorque.so that was built/provided
with Torque?
Check…
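For instance, on a typical Red Hat layout the check would look roughly like this (the path and version are assumptions, adjust to your install):

[zbh251@a00551 ~]$ ldd /usr/lib64/openmpi/lib/openmpi/mca_plm_tm.so | grep torque
        libtorque.so.2 => /usr/lib64/libtorque.so.2 (0x...)

If that resolves to a different libtorque.so than the one pbsdsh links against, tm_spawn can misbehave.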
Hi,
Thanks for all the hints. The only issue is: this is the file generated by
Torque. Torque - or at least the Torque 4.2 provided by my Red Hat
version - gives me an unordered file.
Should I rebuild Torque?
Best,
Oswin
I am currently rebuilding the package with --enable-debug.
Someone has done some work there since I last did, but I can see the issue.
Torque indeed always provides an ordered file - the only way you can get an
unordered one is for someone to edit it, and that is forbidden - i.e., you get
what you deserve because you are messing around with a system-defined file.
Ralph,
there might be an issue within Open MPI.
On the cluster I used, hostname returns the FQDN, and $PBS_NODEFILE uses
the FQDN too.
My $PBS_NODEFILE has one line per task, and it is ordered,
e.g.
n0.cluster
n0.cluster
n1.cluster
n1.cluster
In my Torque script, I rewrote the machinefile…
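For illustration, such a rewrite could look like this (a sketch only; the actual script is cut off above, and the sed pattern is an assumption based on the node names shown):

# inside the Torque script: strip the domain suffix so the machinefile
# uses short hostnames, then hand it to mpirun
sed 's/\.cluster$//' $PBS_NODEFILE > $PBS_O_WORKDIR/machinefile
mpirun -machinefile $PBS_O_WORKDIR/machinefile ./a.out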
Hi,
You are right. Yes, the library is there and it is linked to libtorque.so.
Sorry for the confusion.
Is there any other information I can provide? I am seriously new to all
of this.
Best,
Oswin
On 2016-09-07 17:16, r...@open-mpi.org wrote:
You aren’t looking in the right place - there is an “openmpi” directory
underneath that one, and the mca_xxx libraries are down there
> On Sep 7, 2016, at 7:43 AM, Oswin Krause
> wrote:
>
> Hi Gilles,
>
> I do not have this library. Maybe this helps already...
>
> libmca_common_sm.so libmpi…
You can also run: ompi_info | grep 'plm: tm'
(note the quotes, because you need to include the space)
If you see a line listing the TM PLM plugin, then you have Torque / PBS support
built in to Open MPI. If you don't, then you don't. :-)
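For example, on a build with TM support you would see a line roughly like this (the version numbers are illustrative and will differ on your install):

$ ompi_info | grep 'plm: tm'
                 MCA plm: tm (MCA v2.0.0, API v2.0.0, Component v1.10.3)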
> On Sep 7, 2016, at 11:01 AM, Gilles Gouaillardet
> wrote:
I will double check the name.
If you did not configure with --disable-dlopen, then mpirun only links with
opal and orte.
At run time, these libs will dlopen the plugins (from the openmpi
subdirectory; they are named mca_abc_xyz.so).
If you have support for tm, then one of the plugins will be linked with libtorque.so…
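A quick way to check both points (a sketch; <prefix> stands for your install prefix, and the plugin list will vary with your build):

$ ls <prefix>/lib/openmpi/ | grep mca_plm
mca_plm_rsh.so
mca_plm_slurm.so
mca_plm_tm.so
$ ldd <prefix>/lib/openmpi/mca_plm_tm.so | grep libtorque

If mca_plm_tm.so is missing from that directory, Open MPI was built without tm support.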
Hi Gilles,
I do not have this library. Maybe this helps already...
libmca_common_sm.so  libmpi_mpifh.so  libmpi_usempif08.so
libompitrace.so      libopen-rte.so
libmpi_cxx.so        libmpi.so        libmpi_usempi_ignore_tkr.so
libopen-pal.so       liboshmem.so
and mpirun only links to libopen-rte.so and libopen-pal.so…
Hi,
Thanks for looking into it. Also thanks to rhc. I tried to be very
consistent with the naming after being asked to do so by our IT
department.
[zbh251@a00551 ~]$ hostname
a00551.science.domain
[zbh251@a00551 ~]$ hostname -f
a00551.science.domain
This is, AFAIR, the same name as given in the…
Note the Torque library will only show up if you configured with
--disable-dlopen. Otherwise, you can ldd /.../lib/openmpi/mca_plm_tm.so
Cheers,
Gilles
Bennet Fauber wrote:
>Oswin,
>
>Does the torque library show up if you run
>
>$ ldd mpirun
>
>That would indicate that Torque support is compiled in.
The usual cause of this problem is that the nodename in the machinefile is
given as a00551, while Torque is assigning the node name as
a00551.science.domain. Thus, mpirun thinks those are two separate nodes and
winds up spawning an orted on its own node.
You might try ensuring that your machinefile uses the same node names that Torque assigns…
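A quick way to see whether the names disagree (illustrative; the hostnames are the ones mentioned in this thread):

[zbh251@a00551 ~]$ hostname -f
a00551.science.domain
[zbh251@a00551 ~]$ sort -u $PBS_NODEFILE
a00551.science.domain
a00553.science.domain

If your machinefile lists bare a00551 while $PBS_NODEFILE has the FQDNs, mpirun will treat them as two different hosts.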
Thanks for the logs.
From what I see now, it looks like a00551 is running both mpirun and orted,
though it should only run mpirun, and orted should run only on a00553.
I will check the code and see what could be happening here
Btw, what is the output of
hostname
hostname -f
on a00551?