Unfortunately, we have a few apps which use LAM/MPI instead of Open MPI (and
this is something I have NO control over). I have been making an effort to
try and convince those who handle such apps to move over to Open MPI, as
LAM/MPI is (as I understand it) no longer supported and end-of-life.
> On Feb 27, 2015, at 6:42 AM, Sasso, John (GE Power & Water, Non-GE) wrote:
>
> Unfortunately, we have a few apps which use LAM/MPI instead of Open MPI (and
> this is something I have NO control over). I have been making an effort to
> try and convince those who handle such apps to move over to Open MPI.
On 02/27/2015 09:40 AM, Ralph Castain wrote:

Yeah, any other recommendations I can give to convince the
powers-that-be to immediately sunset LAM/MPI would be great.
Sometimes I feel like I am trying to fit a square peg in a round hole.

Other than the fact that LAM/MPI no longer is s
I am trying to run an Open MPI application on my cluster, but mpirun
fails; even a simple hostname command gives this error:

[pmdtest@hpc bin]$ mpirun --host compute-0-0 hostname
--------------------------------------------------------------------------
Sorry!  You were supposed to get help about:
    op
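For what it's worth, that "Sorry! You were supposed to get help about" banner usually means Open MPI cannot find its help-text files under the prefix it was configured with, e.g. because the install tree is not visible at the same path on the compute node. A minimal workaround sketch, assuming the install lives under /share/apps/openmpi-1.8.4_gcc-4.9.2 (the prefix mentioned later in this thread):

```shell
# If Open MPI's install tree was relocated, or the configure-time prefix
# does not exist on the compute node, OPAL_PREFIX tells the runtime where
# the tree actually lives. The path below is an assumed example.
export OPAL_PREFIX=/share/apps/openmpi-1.8.4_gcc-4.9.2
echo "$OPAL_PREFIX"
```

Set it in the shell startup files so remote (non-interactive) logins pick it up too.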
Hi Syed

This really sounds like a problem specific to Rocks Clusters,
not an issue with Open MPI:
a confusion related to mount points, and the soft links used by Rocks.
I haven't used Rocks Clusters in a while,
and I don't remember the details anymore, so please take my
suggestions with a grain of salt.
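To make the mount-point confusion concrete: on a stock Rocks frontend, /share/apps is typically just a link (or automounted view) onto the frontend's real /export/apps directory, while compute nodes only see the NFS mount at /share/apps. A tiny local sketch of that shape, using scratch paths under /tmp purely for illustration:

```shell
# Recreate the shape of the Rocks layout in /tmp: "export/apps" is the
# real directory, "share-apps" is just a symlink to it. A prefix recorded
# as /export/apps/... would therefore dangle on a node that only has the
# /share/apps mount.
mkdir -p /tmp/rocks-demo/export/apps
rm -f /tmp/rocks-demo/share-apps
ln -s /tmp/rocks-demo/export/apps /tmp/rocks-demo/share-apps
readlink /tmp/rocks-demo/share-apps
```

That is why an Open MPI built with an /export/apps prefix runs on the head node but not on the compute nodes.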
Hi Gus
Thanks for the prompt response. Well judged: I compiled with the /export/apps
prefix, so that is most probably the reason. I'll check and update you.
Best wishes
Ahsan
On Fri, Feb 27, 2015 at 10:07 PM, Gus Correa wrote:
> Hi Syed
>
> This really sounds like a problem specific to Rocks Clusters,
> not an issue with Open MPI.
On Feb 27, 2015, at 9:42 AM, Sasso, John (GE Power & Water, Non-GE)
<john1.sa...@ge.com> wrote:
> Unfortunately, we have a few apps which use LAM/MPI instead of Open MPI (and
> this is something I have NO control over). I have been making an effort
> [...] as it is no longer supported and end-of-life.

Bummer!
Hi Syed Ahsan Ali
To avoid any leftovers and further confusion,
I suggest that you delete completely the old installation directory.
Then start fresh from the configure step with the prefix pointing to
--prefix=/share/apps/openmpi-1.8.4_gcc-4.9.2
I hope this helps,
Gus Correa
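Gus's recipe can be sketched as a shell sequence (version and prefix taken from this thread; treat it as a template rather than exact commands):

```shell
# Clean rebuild of Open MPI with the intended prefix. Wrapped in a
# function so it can be sourced and then run deliberately on the frontend.
rebuild_openmpi() {
    prefix=/share/apps/openmpi-1.8.4_gcc-4.9.2
    rm -rf "$prefix"                 # delete the old install completely
    cd openmpi-1.8.4 || return 1     # assumes an unpacked source tree here
    ./configure --prefix="$prefix" &&
        make -j4 all &&
        make install
}
# Invoke by hand on the head node: rebuild_openmpi
```

Using /share/apps (the path the compute nodes actually mount) instead of /export/apps is the key point.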
Dear Gus

Thanks once again for the suggestion. Yes, I did that before installing
to the new path. I am now getting an error about a missing library:

tstint2lm: error while loading shared libraries:
libmpi_usempif08.so.0: cannot open shared object file: No such file or
directory

while the library is present:

[pmdtest@
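When a library is present on disk but the loader still reports "No such file or directory", the runtime linker's search path is usually the missing piece. A sketch of the usual fix, with the prefix assumed from earlier in the thread (libmpi_usempif08.so.0 is Open MPI's Fortran 2008 bindings library, which lives under the install's lib directory):

```shell
# Make the new Open MPI libraries visible to the runtime linker.
prefix=/share/apps/openmpi-1.8.4_gcc-4.9.2
export LD_LIBRARY_PATH="$prefix/lib:${LD_LIBRARY_PATH:-}"
echo "$LD_LIBRARY_PATH"
# To see which libraries a binary actually resolves, run where the
# application binary exists (name taken from the error message):
#   ldd ./tstint2lm | grep -i mpi
```

Any "not found" lines in the ldd output identify exactly which libraries the loader cannot locate.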
Oh sorry, that is related to the application. I need to recompile the
application too, I guess.
On Fri, Feb 27, 2015 at 10:44 PM, Syed Ahsan Ali wrote:
> Dear Gus
>
> Thanks once again for the suggestion. Yes, I did that before installing
> to the new path. I am now getting an error about a missing library:
> tstint2lm:
On 02/27/2015 11:14 AM, Jeff Squyres (jsquyres) wrote:
Well, perhaps it was time. We haven't changed anything about LAM/MPI in
...a decade? Now that the domain is gone, since I don't even have an
SVN checkout any more, I can't check when the last meaningful commit
was. I see Rob found a ROM
Hi Syed Ahsan Ali

On 02/27/2015 12:46 PM, Syed Ahsan Ali wrote:
> Oh sorry, that is related to the application. I need to recompile the
> application too, I guess.

You surely do.

Also, make sure the environment, in particular PATH and LD_LIBRARY_PATH,
is propagated to the compute nodes.
Not doing that is a
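One way to do that propagation with Open MPI itself is mpirun's -x flag, which exports an environment variable to the launched processes, combined with --prefix, which sets up PATH and LD_LIBRARY_PATH for Open MPI's own tree on the remote node. A sketch, with the host and prefix taken from this thread:

```shell
# -x NAME forwards NAME's current value to the remote ranks; --prefix
# points the launcher at Open MPI's install tree on the remote node.
# Wrapped in a function: invoke it by hand on the cluster head node.
run_on_compute() {
    mpirun --prefix /share/apps/openmpi-1.8.4_gcc-4.9.2 \
           -x PATH -x LD_LIBRARY_PATH \
           --host compute-0-0 hostname
}
# Invoke manually: run_on_compute
```

Alternatively, setting the variables in a shell startup file that non-interactive logins read (e.g. ~/.bashrc) achieves the same effect for ssh-launched ranks.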