Hi
You also must make sure that all slaves can
connect via ssh to each other and to the master
node without being asked for a password.
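A minimal sketch of one common way to set up such passwordless ssh, assuming
OpenSSH is installed; the hostnames master, slave1 and slave2 are placeholders:

    # on the master node: generate a key pair with no passphrase
    ssh-keygen -t rsa -N "" -f ~/.ssh/id_rsa
    # copy the public key to every node, including the master itself
    for host in master slave1 slave2; do
        ssh-copy-id $host
    done
    # verify: this should print the remote hostname without prompting
    ssh slave1 hostname

Repeating the key generation and copying on each slave lets every node reach
every other one.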
Jody
On Wed, Dec 21, 2011 at 3:57 AM, Shaandar Nyamtulga wrote:
> Can you clarify your answer please.
> I have one master node and other slave nodes. I created rsa key on my master
Hello
I've written an MPI program that runs on 9 processes: 7 processes are on one
node, the 2 others on another node.
It works really well, but sometimes the 8th process seems to block
for 10 ms.
Here is the pattern of the error:
Normal execution 1000 ms | 10 ms BLOCK | 60 ms execution | 10 ms BLOCK
I've just completed installing OpenSuSE 12.1 on a standalone node.
Using the bundled GCC and OpenMPI the user code fails. I've reduced the
problem to that below, but without the source I'm not sure what
orte_init is after. Using mpirun -np 1 or -np 2 both fail in the same
way. Any suggestions?
Your OMPI install looks busted. Can you send all the info listed in the
"support" section on the OMPI web site?
Sent from my phone. No type good.
On Dec 21, 2011, at 7:37 AM, "Rushton Martin" wrote:
> I've just completed installing OpenSuSE 12.1 on a standalone node.
> Using the bundled GCC a
For run-time problems:
1. Check the FAQ first. DONE, nothing obvious.
2. The version of Open MPI that you're using: openmpi-1.4.3
3. The config.log file: installed as binary, so no config.log.
4. The output of "ompi_info --all": see enclosed gzip (a sketch of producing it follows below)
5. If r
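For anyone following along, a sketch of how that gzip might be produced,
assuming a standard shell; the file name is illustrative:

    # capture the full ompi_info output and compress it for the list
    ompi_info --all > ompi_info.txt
    gzip ompi_info.txt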
How strange - it looks like you are missing a bunch of libraries. What do you
get if you do
ls -R /usr/lib64/mpi/gcc/openmpi/lib64
On Dec 21, 2011, at 6:26 AM, Rushton Martin wrote:
> For run-time problems:
>
> 1. Check the FAQ first. DONE, nothing obvious.
> 2
I agree it looks like it, but YaST ought to be able to work out
dependencies. Anyhow, here is the listing:
isis:/usr/lib64/mpi/gcc/openmpi # ls -R lib64
lib64:
libmca_common_sm.so.1 libmpi_f77.so.0 libopen-rte.so.0
libmca_common_sm.so.1.0.0 libmpi_f77.so.0.0.1 libopen-rte.so.0.0.0
l
Yeah, you are missing all the component libraries - there should be a couple of
subdirectories under there. I would suggest either reinstalling or grabbing a
tarball and installing by hand.
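A sketch of the by-hand tarball route, assuming a 1.4-series release from
open-mpi.org and an /opt/openmpi prefix (both are placeholders):

    # unpack a release tarball, then configure, build and install
    tar xjf openmpi-1.4.4.tar.bz2
    cd openmpi-1.4.4
    ./configure --prefix=/opt/openmpi
    make all
    make install   # may need root for a system-wide prefix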
On Dec 21, 2011, at 8:13 AM, Rushton Martin wrote:
> I agree it looks like it, but YaST ought to be able
OK, this is starting to look like a SuSE problem. I've downgraded the
1.4.3 version to 1.2.8 from SuSE 11.4. There are no subdirectories
under .../openmpi/lib64. I checked the file list in YaST, but that may
be reflecting the disk rather than the RPM.
I ripped all of OpenMPI out and reinstalled
We don't really have any contacts at SUSE. Can you query to find out who the
upstream provider is and send them a note that their OMPI package is broken?
Sent from my phone. No type good.
On Dec 21, 2011, at 11:15 AM, "Rushton Martin" wrote:
> OK, this is starting to look like a SuSE problem I
I'll feed back to them tomorrow. The 1.3 OpenMPI tarball configured and
made OK. The "noddy" test case runs; I'll see what the users say later.
Thanks all for your assistance.
Martin Rushton
HPC System Manager, Weapons Technologies
Tel: 01959 514777, Mobile: 07939 219057
email: jmrush...@qineti
Dear OMPI Users,
I have just read the messages from Martin Rushton and Jeff
Squyres and have been having the same problem trying to get openmpi-1.4.4 to
work. My specs are below:
Xeon(R) CPU 5335 2.00 GHz
Linux SUSE 11.4 (x86_64)
Did you remember to set your LD_LIBRARY_PATH to include /opt/openmpi, per your
configure line?
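For reference, a minimal sketch of what that usually looks like in ~/.bashrc,
assuming the /opt/openmpi install prefix mentioned above:

    # make the OpenMPI binaries and runtime libraries visible
    export PATH=/opt/openmpi/bin:$PATH
    export LD_LIBRARY_PATH=/opt/openmpi/lib:$LD_LIBRARY_PATH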
On Dec 21, 2011, at 11:56 AM, amosl...@gmail.com wrote:
> Dear OMPI Users,
> I have just read the messages from Martin Rushton and Jeff Squyres
> and have been having the same problem tryin
Is the path to your openmpi libraries in your LD_LIBRARY_PATH?
--
Prentice
On 12/21/2011 01:56 PM, amosl...@gmail.com wrote:
> Dear OMPI Users,
> I have just read the messages from Martin Rushton and Jeff
> Squyres and have been having the same problem trying to get
> openmpi-1.4.4 to wo
You probably also need to launch the program with mpiexec (mpiexec -np 4
./hello_c),
not just ./hello_c, as your email suggests you did.
On Dec 21, 2011, at 2:12 PM, Ralph Castain wrote:
> Did you remember to set your LD_LIBRARY_PATH to include /opt/openmpi, per
> your configure line?
>
>
> On
Not really - we support singletons, so that should work. The key is to have
LD_LIBRARY_PATH set correctly in the environment.
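To illustrate the two launch modes being discussed, assuming hello_c was built
against that install:

    # singleton: a single MPI process, no launcher required
    ./hello_c
    # the usual way: several processes under mpiexec
    mpiexec -np 4 ./hello_c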
On Dec 21, 2011, at 1:08 PM, Gustavo Correa wrote:
> You probably need also to launch the program with mpiexec (mpiexec -np 4
> ./hello_c),
> not just ./hello_c as your
Hi Ralph,
Yes, I did add it to the LD_LIBRARY_PATH in the .bashrc file.
Amos L.
On Wed, Dec 21, 2011 at 2:12 PM, Ralph Castain wrote:
> Did you remember to set your LD_LIBRARY_PATH to include /opt/openmpi, per
> your configure line?
>
>
> On Dec 21, 2011, at 11:56 AM, amosl...@gmail.c
Hi
I put the /opt/openmpi.org in the bashrc file.
Amos
On Wed, Dec 21, 2011 at 2:12 PM, Ralph Castain wrote:
> Did you remember to set your LD_LIBRARY_PATH to include /opt/openmpi, per
> your configure line?
>
>
> On Dec 21, 2011,
When I pushed "Send" on this email
I thought immediately: " ... hmm, Ralph or Jeff will say this is wrong ..."
Wow! Support for singletons!
I haven't seen this word since long-forgotten readings in set theory.
So, if you run a single process, you can do away with mpiexec,
and pretend that the co
Hi,
Attached is the output I got from using mpiexec. Amos
On Wed, Dec 21, 2011 at 5:17 PM, Gustavo Correa wrote:
> When I pushed "Send" on this email
> I thought immediately: " ... hmm, Ralph or Jeff will say this is wrong ..."
>
> Wow! Support for singletons!
> I haven't read this word sin
On Dec 21, 2011, at 3:09 PM, amosl...@gmail.com wrote:
> Hi Ralph,
> Yes I did add to the LD_LIBRARY_PATH in the .bashrc file.
That is fine, but did you source that .bashrc so it was in your current
environment? It doesn't matter what is in the .bashrc file - what matters is
the
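A sketch of the check being suggested, assuming a bash shell:

    # re-read .bashrc in the current shell, then confirm the setting took
    source ~/.bashrc
    echo $LD_LIBRARY_PATH   # should include the OpenMPI lib directory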