Hi.
Has anyone performed GROMACS (v3.x - v3.3.3) debugging with FFTW (v3.1) using
Open MPI (v1.2.5 or v1.2.6)? I have properly configured Open MPI with the
debug option and also configured GROMACS along with FFTW successfully. I can
perform pdb2gmx_mpi, editconf_mpi, genbox_mpi, grompp_mpi compilation and fi
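For reference, a minimal sketch of the kind of debug-enabled Open MPI build
described above; the install prefix here is an assumption, not from the post:

    ./configure --enable-debug --prefix=$HOME/opt/openmpi-1.2.6-debug
    make all install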
Can you double check that you are using the wrapper compilers from the
Open MPI installation that you think you're using?
Tiger doesn't ship with Open MPI, so you shouldn't have problems
conflicting with a system-installed OMPI, but could you possibly have
another OMPI install hanging around?
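A quick way to verify which installation a wrapper really belongs to, using
Open MPI's --showme option (the paths in the output are of course
installation-specific):

    which mpicc
    mpicc --showme    # prints the underlying compiler command and flags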
Hi Graham,
Have you tried running without the btl_tcp_if_include line in the .conf
file? Open MPI is usually smart enough to auto-detect and choose the
correct interfaces.
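For reference, the line in question would look something like this in an MCA
parameter file such as ~/.openmpi/mca-params.conf (the interface names are
only examples):

    btl_tcp_if_include = eth0,eth1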
Hope this helps,
Tim
Graham Jenkins wrote:
We're moving from using a single (eth0) interface on our execute nodes
to u
Open MPI does not do this. MPI codes are regular C and Fortran
programs, so if they ALLOCATE or malloc() memory and Linux/Mac can
give it, it will.
I think what you need is a batch system (Torque plug goes here). We
use a batch system that will then place processes on nodes based on
memory
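To make that concrete, here is a minimal sketch; the allocation below is
plain malloc(), with no Open MPI involvement, and the 100 MB figure is just
an example:

    #include <stdio.h>
    #include <stdlib.h>
    #include <mpi.h>

    int main(int argc, char **argv)
    {
        int rank;
        MPI_Init(&argc, &argv);
        MPI_Comm_rank(MPI_COMM_WORLD, &rank);

        /* MPI plays no part here: each rank asks its local OS for
           100 MB, exactly as a serial program would. */
        size_t nbytes = 100 * 1024 * 1024;
        char *buf = malloc(nbytes);
        if (buf == NULL)
            fprintf(stderr, "rank %d: allocation failed\n", rank);
        else
            free(buf);

        MPI_Finalize();
        return 0;
    }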
Hi everyone,
I am a beginner with Open MPI. Does Open MPI provide a function for
allocating memory to a process? For example, I have a rendering process
from ParaView named pvserver, and I would like to allocate a certain
amount of memory for that process across a few nodes specified in the
hostfile.
Hi, thanks everyone, that was very helpful. I have another question too,
but I will post it as a different topic.
On 4/17/08, Mark Kosmowski wrote:
>
> Cally:
>
> In the hostfile you add a "slots" line. For example, on my dual
> Opteron (single-core) system, I have slots=2. This can be read about
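As a concrete illustration of the quoted advice, a two-node hostfile might
look like this (the hostnames are made up):

    node01 slots=2
    node02 slots=2

and it would be used as, e.g., mpirun --hostfile myhosts -np 4 ./my_app.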
On Thu, Apr 17, 2008 at 6:36 AM, Terry Frankcombe wrote:
> Given that discussion, might I suggest that an (untested) workaround
> would be to --prefix Open MPI into a non-standard location?
That is a possible approach, but there are others; it is also possible to
provide a specific CMake variable value on
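To make the workaround concrete, a sketch under assumed names (the prefix
path is arbitrary, and CMAKE_PREFIX_PATH is one illustrative choice of CMake
variable, not necessarily the one meant above):

    ./configure --prefix=$HOME/opt/openmpi-1.2.6
    make all install
    cmake -DCMAKE_PREFIX_PATH=$HOME/opt/openmpi-1.2.6 .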
Dear OMPI users and builders:
I recently installed the latest version of Open MPI (1.2.6) on my Mac
Pro, which has 2 dual-core Intel CPUs.
On the plus side, I can successfully compile and run MPI codes written
both in Fortran 77 and in C on all 4 cores
with the corresponding wrappers that the
Given that discussion, might I suggest that an (untested) workaround would
be to --prefix Open MPI into a non-standard location?
On Wed, 2008-04-16 at 13:03 -0400, Jeff Squyres wrote:
> On Apr 16, 2008, at 9:38 AM, Crni Gorac wrote:
> >> mpicc (and friends) typically do not output -I only for "special