On Wed, 25 Jul 2007, Jeff Squyres wrote:
I'm still awaiting access to the Intel 10 compilers to try to
reproduce this problem myself. Sorry for the delay...
What do you need for this to happen? The Intel packages? I can give you
access to a machine if you want to try it out.
Ricardo Rei
Hello,
I've downloaded the 1.2.3 version of openmpi and compiled it just as described
in the README.
Now I am trying to run it on the university grid.
I've set the env. var OMPI_MCA_pls_rsh_agent to rsh.
When I run the following cmd:
mpiexec -np 2 hostname
it runs fine.
But when I try to
Sorry all,
This particular thread is going beyond the scope of this forum,
so let's wrap it up asap.
Francesco,
My configurations are of course just a guidance and Intel compilers
may require different settings.
As you use the Intel compilers you may (and probably should)
configure Amber to use MKL
Dear Andrey:
My Intel compilation of openmpi is in use for other applications, too. And with
Fortran, Intel produces much faster executables, even compared with the AMD
libraries. That was reported recently on this mailing list, also for Amber GB.
The quickest (and most efficient) solution in my
Hi all,
I have compiled Amber9 with OpenMPI 1.2 without a hitch.
Machine - an 8-way (16 cores, Opteron) SMP box (Tyan VX50),
running SLES 10 64-bit with AMD ACML optimised libraries.
Here is the configure script for Amber9:
# myconfigure ---
PWD=`pwd`
export AMBERHOME=`dirname $
On Wednesday 25 July 2007, Jeff Squyres wrote:
> On Jul 25, 2007, at 7:45 AM, Biagio Cosenza wrote:
> > Jeff, I did what you suggested
> >
> > However no noticeable changes seem to happen. Same peaks and same
> > latency times.
>
> Ok. This suggests that Nagle may not be the issue here.
My guess
On Jul 23, 2007, at 8:53 PM, Jeff Squyres wrote:
It looks like we enable Nagle right when TCP BTL connections are
made. Surprisingly, it looks like we don't have a run-time option to
turn it off for power-users like you who want to really tweak around.
I should note that I got the logic backwards
On Jul 25, 2007, at 7:45 AM, Biagio Cosenza wrote:
Jeff, I did what you suggested
However no noticeable changes seem to happen. Same peaks and same
latency times.
Ok. This suggests that Nagle may not be the issue here. Is the code
tightly coupled? If so, this could be normal operating
FWIW, I see that "ifort" is being used to build Amber (instead of
using mpif77 or mpif90), and I don't see any reference to the MPI
libraries in the final link line to create the "sander.MPI"
executable, which is probably why it says it can't find all those
symbols.
We strongly recommend
Jeff, I did what you suggested.
However, no noticeable changes seem to happen. Same peaks and same latency
times.
Are you sure that disabling Nagle's algorithm requires nothing more than
changing optval to 0?
I saw that, in btl_tcp_endpoint.c, the optval assignment is inside a
#if defined(TCP_NODELAY)
I'm still awaiting access to the Intel 10 compilers to try to
reproduce this problem myself. Sorry for the delay...
On Jul 25, 2007, at 3:09 AM, Dirk Clasen wrote:
Hi,
I'm having trouble installing openmpi 1.2.3 on Linux ia32 using the
Intel 10.0.025 compilers.
There was a similar thread
Hi,
I'm having trouble installing openmpi 1.2.3 on Linux ia32 using the
Intel 10.0.025 compilers.
There was a similar thread before:
http://www.open-mpi.org/community/lists/users/2007/07/3570.php
but I can't install the em64t version to solve the problem ...
mpicc and all the other tools crash