Try and "ls -l /home/alex/huji/ompi4/lib/libopen-pal.so.0" and "nm
/home/alex/huji/ompi4/lib/libopen-pal.so.0" to ensure that the file is >0
length and that it contains valid symbols.
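For example (nm's -D flag lists the dynamic symbol table, which is what
matters for a shared library):

    ls -l /home/alex/huji/ompi4/lib/libopen-pal.so.0
    nm -D /home/alex/huji/ompi4/lib/libopen-pal.so.0 | head

A healthy build shows a non-zero file size and real symbol names (e.g.,
opal_init).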
If it doesn't, look back at the make output and ensure there wasn't some
warning/non-fatal error in creating /home/alex/huji/ompi4/lib/libopen-pal.so.0.
Hi Jeff,
Thank you very much for your help!
I tried to run the same ring_c test from the standard examples in the
Open MPI 1.4.3 distribution. When I ran it as you described from the
command line, it worked without any problem with the sm btl
included (with --mca btl self,sm,openib). However, if I use the sm
btl ...
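For reference, a run along those lines (ring_c built from the examples/
directory of the 1.4.3 tarball; process count chosen arbitrarily) would
look like:

    mpirun -np 2 --mca btl self,sm,openib ./ring_c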
On Feb 13, 2012, at 1:28 PM, Jeff Squyres wrote:
> You might want to fully uninstall the distro-installed version of Open MPI on
> all the nodes (e.g., Red Hat may have installed a different version of Open
> MPI, and that version is being found in your $PATH before your
> custom-installed version).
You might want to fully uninstall the distro-installed version of Open MPI on
all the nodes (e.g., Red Hat may have installed a different version of Open
MPI, and that version is being found in your $PATH before your
custom-installed version).
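A quick way to check which installation each node picks up first (remote
host name hypothetical):

    which mpirun
    mpirun --version
    ssh node2 which mpirun

All of these should point at (and report) your custom-installed version.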
On Feb 13, 2012, at 12:12 PM, Richard Bardwell wrote:
OK, 1.4.4 is happily installed on both machines. But I now get a really
weird error when running on the 2 nodes. I get
Error: unknown option "--daemonize"
even though I am just running with -np 2 -hostfile test.hst.
The program runs fine on 2 cores if running locally on each node.
Any ideas?
On Feb 13, 2012, at 11:02 AM, Richard Bardwell wrote:
> Ralph
>
> I had done a make clean in the 1.2.8 directory, if that is what you meant?
> Or do I need to do something else?
>
> I appreciate your help on this by the way ;-)
Hi Richard
You can install in a different directory, totally separate from the
existing installation ...
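A typical sequence for that (version and prefix assumed for illustration):

    cd openmpi-1.4.4
    ./configure --prefix=$HOME/openmpi-1.4.4
    make all install
    export PATH=$HOME/openmpi-1.4.4/bin:$PATH
    export LD_LIBRARY_PATH=$HOME/openmpi-1.4.4/lib:$LD_LIBRARY_PATH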
My mistake, Ralph, I should have done a make uninstall instead!
Thanks
Richard
- Original Message -
From: Ralph Castain
To: Open MPI Users
Sent: Monday, February 13, 2012 3:41 PM
Subject: Re: [OMPI users] MPI orte_init fails on remote nodes
You need to clean out the old attempt - that is a stale file.
Try a "make uninstall" from the OMPI 1.2.8 source directory.
The reason is that "make install" from OMPI 1.4.x won't uninstall the prior
OMPI -- it'll just overwrite it. But some plugins from 1.2.8 will still be
left, and confuse the OMPI 1.4 install.
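Concretely (source directories assumed for illustration):

    cd openmpi-1.2.8
    make uninstall
    cd ../openmpi-1.4.4
    make install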
On Feb 13, 2012, at 11:02 AM, Richard Bardwell wrote:
Ralph
I had done a make clean in the 1.2.8 directory, if that is what you meant?
Or do I need to do something else?
I appreciate your help on this by the way ;-)
- Original Message -
From: Ralph Castain
To: Open MPI Users
Sent: Monday, February 13, 2012 3:41 PM
Subject: Re: [OMPI users] MPI orte_init fails on remote nodes
You need to clean out the old attempt - that is a stale file
Sent from my iPad
On Feb 13, 2012, at 7:36 AM, "Richard Bardwell" wrote:
> OK, I installed 1.4.4, rebuilt the exec and guess what ... I now get some
> weird errors as below:
> mca: base: component_find: unable to open
> /usr/local/lib/openmpi/mca_ras_dash_host
OK, I installed 1.4.4, rebuilt the exec and guess what ... I now get some
weird errors as below:
mca: base: component_find: unable to open
/usr/local/lib/openmpi/mca_ras_dash_host
along with a few other files,
even though the .so / .la files are all there!
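One blunt but effective cleanup, assuming nothing other than Open MPI's
plugins lives there, is to remove the stale component directory before
reinstalling:

    rm -rf /usr/local/lib/openmpi
    cd openmpi-1.4.4
    make install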
- Original Message -
From: Ralph Castain
Good heavens - where did you find something that old? Can you use a more recent
version?
Sent from my iPad
On Feb 13, 2012, at 4:45 AM, "Richard Bardwell" wrote:
> Gentlemen
>
> I am struggling to get MPI working when the hostfile contains different nodes.
>
> I get the error below. Any ideas? I can ssh without password between the two
> nodes. I am running Open MPI 1.2.8 on both machines.
Gentlemen
I am struggling to get MPI working when the hostfile contains different nodes.
I get the error below. Any ideas? I can ssh without password between the two
nodes. I am running Open MPI 1.2.8 on both machines.
Any help most appreciated!
MPITEST/v8_mpi_test> mpiexec -n 2 --debug-daemons ...
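For a two-node run like this, the hostfile is just a list of host names,
optionally with slot counts (names and executable hypothetical):

    # test.hst
    node1 slots=2
    node2 slots=2

    mpiexec -n 2 -hostfile test.hst ./my_mpi_test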