Thank you for all the replies.
Here's what I have now.
I modified my .bash_profile on my server to include the path of my executables,
and now mpiexec and mpicc both point to the correct ones. I tried setting the
LD_LIBRARY_PATH too, but it didn't seem to work, as it kept telling me it
couldn't
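For reference, a minimal sketch of the two .bash_profile lines involved,
assuming the /usr/lib/openmpi/1.2.5-gcc prefix that comes up later in this
thread:

# adjust the prefix to match the actual Open MPI installation
export PATH=/usr/lib/openmpi/1.2.5-gcc/bin:$PATH
export LD_LIBRARY_PATH=/usr/lib/openmpi/1.2.5-gcc/lib:$LD_LIBRARY_PATH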
Hello Daniel and list
Could it be a problem with memory bandwidth / contention in multi-core?
It has been reported in many mailing lists (mpich, beowulf, etc).
Here it seems to happen in dual-processor dual-core with our memory
intensive programs.
Have you checked what happens to the shared memory
On Wed, 13 Aug 2008, George Bosilca wrote:
> Daniel,
>
> Open IB is one of the few devices that allow local communications (instead of
> using shared memory). As the latency looks OK, I suspect that small messages
> always use shared memory, while large ones get striped over sm and openib.
>
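One quick way to probe for that kind of contention is to compare pinned and
unpinned runs (a sketch; mpi_paffinity_alone is an Open MPI 1.2-era MCA
parameter, and ./mem_bench is only a placeholder for a memory-intensive test):

# keep each rank pinned to its own core so they cannot migrate
mpirun --mca mpi_paffinity_alone 1 -np 4 ./mem_bench
# unpinned, for comparison
mpirun -np 4 ./mem_bench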
Daniel,
Open IB is one of the few devices that allow local communications
(instead of using shared memory). As the latency looks OK, I suspect
that small messages always use shared memory, while large ones get
striped over sm and openib. Can you run a test without openib to
confirm this?
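A sketch of such a test (the ^ prefix excludes a component; ./osu_bw is a
placeholder for whatever benchmark is in use):

# shared memory and self only, openib out of the picture entirely
mpirun --mca btl sm,self -np 2 ./osu_bw
# or exclude just openib and keep everything else
mpirun --mca btl ^openib -np 2 ./osu_bw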
Hello,
I'm troubleshooting a weird benchmark situation in which having the sm btl
enabled gives worse results than disabling it.
For example, on a single compute node with 2*Xeon5420, 8 GB RAM and a
ConnectX gen2 IB card, with OFED 1.3 and OpenMPI 1.2.6 as the software setup:
[cvsupport@extern
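For the record, the two runs being compared would look something like this
(the benchmark name and process count are placeholders):

# sm btl enabled: local traffic goes over shared memory
mpirun --mca btl openib,sm,self -np 8 ./benchmark
# sm btl disabled: local traffic falls back to openib
mpirun --mca btl openib,self -np 8 ./benchmark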
Hi,
Thanks for the prompt reply. This might be basic, but where do the 32-bit
OFED libs typically live? I think the default install prefix is /usr, and my
guess is the 64-bit libs are in /usr/lib64. Where do I look for the 32-bit
OFED libs? I remember during the OFED build that passing the 32-bit build ar
You probably need to add
--with-openib-libdir=/path/to/your/32/bit/ofed/libs. I'm guessing that
the system installed the 64-bit libs in the default location and the
32-bit libs in a different location. If that's the case, then
--with-openib-libdir will tell OMPI specifically where to look.
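A configure invocation along those lines might look like the sketch below;
the install prefix is made up, and /usr/lib is only a guess at where the
32-bit libibverbs actually lives:

./configure CFLAGS=-m32 CXXFLAGS=-m32 FFLAGS=-m32 FCFLAGS=-m32 \
    --prefix=/opt/openmpi-1.2.5-32 \
    --with-openib=/usr \
    --with-openib-libdir=/usr/lib
make all install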
Hi,
I've been trying to install openmpi 1.2.5 on my cluster system running RHEL
4 (x64) with OFED 1.3. I need openmpi 1.2.5 (32-bit), and OFED seems to only
install the 64-bit version. I tried to build OFED with 32-bit support, but it
failed, so I figure it's best to just compile 32-bit openmpi. I follo
Hi Rayne Lance and list
I second Lenny's suggestion.
The easiest way to get started is to use full paths to mpiexec when you
run your program, and to the mpi compiler wrappers (mpicc, etc.) when you
compile it. If you don't use the mpi compiler wrappers, make sure your
Makefile points to the c
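Concretely, with the 1.2.5-gcc install path mentioned elsewhere in this
thread (the program name is a placeholder):

/usr/lib/openmpi/1.2.5-gcc/bin/mpicc -o hello hello.c
/usr/lib/openmpi/1.2.5-gcc/bin/mpiexec -n 2 ./hello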
Hi,
I am getting a curious error on a simple communications test. I have altered
the standard MVAPICH osu_latency test to accept receives from any source, and
I get the following error:
[d013.sc.net:15455] *** An error occurred in MPI_Recv
[d013.sc.net:15455] *** on communicator MPI_COMM_WORLD
[d013.s
you can also provide a full path to your mpiexec:
# /usr/lib/openmpi/1.2.5-gcc/bin/mpiexec -n 2 ./a.out
On 8/12/08, jody wrote:
>
> No.
> The PATH variable simply tells the system in which order the
> directories should be searched for executables.
>
> so in .bash_profile just add the line
> PATH=/usr/lib/openmpi/1.2.5-gcc/bin:$PATH
No.
The PATH variable simply tells the system in which order the
directories should be searched for executables.
so in .bash_profile just add the line
PATH=/usr/lib/openmpi/1.2.5-gcc/bin:$PATH
after the line
PATH=$PATH:$HOME/bin
Then the system will search in /usr/lib/openmpi/1.2.5-gcc/bin before
/usr/bin and use the Open-MPI executables.
My .bash_profile and .bashrc on the server are exactly the same as those on my
PC. However, I can run mpiexec without any problems just using my PC as a
single node, i.e. without trying to login to other servers and using multiple
nodes. I only get the errors on the server.
In .bash_profile, I
What are the contents of your $PATH environment variable?
Make sure that your Open-MPI folder (/usr/lib/openmpi/1.2.5-gcc/bin)
precedes '/usr/bin' in $PATH,
i.e.
/usr/lib/openmpi/1.2.5-gcc/bin:/usr/bin
then the Open-MPI version of mpirun or mpiexec will be used instead of
the LAM version.
This s
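A quick way to inspect the ordering (the output shown is only illustrative):

$ echo $PATH
/usr/lib/openmpi/1.2.5-gcc/bin:/usr/local/bin:/usr/bin:/bin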
Hi,
I looked for any folders with 'lam', and found two, under /usr/lib/lam and
/etc/lam. I don't know if that means LAM was previously installed, because my
PC also has /usr/lib/lam, although the contents are different. I renamed the
two folders, and got the "*** Oops -- I cannot open the LAM help fi
Hi Ryan
Another thing:
Have you checked if the mpiexec you call is really the one from your
Open-MPI installation?
Try 'which mpiexec' to find out.
Jody
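If a LAM/MPI install is still shadowing Open-MPI, the check would come back
with something like this (paths illustrative):

$ which mpiexec
/usr/bin/mpiexec
# if that is the LAM binary rather than
# /usr/lib/openmpi/1.2.5-gcc/bin/mpiexec, fix the PATH ordering first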
On Tue, Aug 12, 2008 at 9:36 AM, jody wrote:
> Hi Ryan
>
> The message "Lamnodes Failed!" seems to indicate that you still have a
> LAM/MPI installation somewhere.
Hi Ryan
The message "Lamnodes Failed!" seems to indicate that you still have a
LAM/MPI installation somewhere.
You should get rid of that first.
Jody
On Tue, Aug 12, 2008 at 9:00 AM, Rayne wrote:
> Hi, thanks for your reply.
>
> I did what you said, set up the password-less ssh, nfs etc, and put the IP
> address of the server in the default hostfile
Hi, thanks for your reply.
I did what you said, set up the password-less ssh, nfs etc, and put the IP
address of the server in the default hostfile (in my PC only, the default
hostfile in the server does not contain any IP addresses). Then I installed
Open MPI in the server under the same directory
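For anyone following along, the password-less ssh and hostfile setup is
roughly as follows (the server address and username are placeholders):

# on the PC: create a key with an empty passphrase, copy it to the server
ssh-keygen -t rsa
ssh-copy-id user@192.168.0.10   # or append ~/.ssh/id_rsa.pub to the
                                # server's ~/.ssh/authorized_keys
# default hostfile on the PC, one node per line
echo "192.168.0.10" >> /usr/lib/openmpi/1.2.5-gcc/etc/openmpi-default-hostfile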