Oh you are right.
Thanks.
Best
tomek
On Tue, Jul 9, 2013 at 2:44 PM, Jeff Squyres (jsquyres) wrote:
> If you care, the issue is that it looks like Gromacs is using the MPI C++
> bindings. You therefore need to use the MPI C++ wrapper compiler, mpic++
> (vs. mpicc, which is the MPI C wrapper compiler).
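For illustration, a minimal sketch of a program in the style of the
MPI C++ bindings (the file name and contents are made up, not taken
from Gromacs). It builds with mpic++ but not with mpicc, since the C
wrapper neither compiles C++ nor links in the C++ binding library:

    // hello_cxx.cpp -- uses the (since-deprecated) MPI C++ bindings,
    // so it must be built with the C++ wrapper compiler:
    //   mpic++ hello_cxx.cpp -o hello_cxx
    #include <mpi.h>
    #include <iostream>

    int main(int argc, char** argv) {
        MPI::Init(argc, argv);
        int rank = MPI::COMM_WORLD.Get_rank();
        std::cout << "hello from rank " << rank << std::endl;
        MPI::Finalize();
        return 0;
    }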
I was using mpicc, but when I switched to mpic++ in the Makefile it
compiled without errors.
Thanks a lot!
Best,
tomek
On Tue, Jul 9, 2013 at 2:31 PM, Jeff Squyres (jsquyres) wrote:
> I don't see all the info requested from that web page, but it looks like OMPI
> built the C++ bindings ok.
If you need any other file, I would be happy to provide it.
Thanks a lot!
Best,
tomek
Attachments:
  config_gromacs.log.bz2 (BZip2 compressed data)
  config_openmpi.log.bz2 (BZip2 compressed data)
... [all-recursive] Error 1
I am using gcc 4.7.3.
Any ideas or suggestions?
Thanks!
Best,
tomek
> Is doing blocking communication in a separate thread better than
> asynchronous progress? (At least as a workaround until the proper
> implementation gets improved.)
At the moment, yes. OMPI's asynchronous progress is "loosely
tested" (at best). OMPI's threading support is somewhat stable.
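A minimal sketch of that workaround (untested, and the file name and
one-int ring pattern are made up for illustration): request
MPI_THREAD_MULTIPLE and park a helper thread in a blocking receive,
built with something like "mpic++ -std=c++11 -pthread recv_thread.cpp
-o recv_thread":

    #include <mpi.h>
    #include <thread>
    #include <cstdio>

    int main(int argc, char** argv) {
        // Two threads call into MPI concurrently, so full thread
        // support is required.
        int provided = 0;
        MPI_Init_thread(&argc, &argv, MPI_THREAD_MULTIPLE, &provided);
        if (provided < MPI_THREAD_MULTIPLE) {
            std::fprintf(stderr, "MPI_THREAD_MULTIPLE not available\n");
            MPI_Abort(MPI_COMM_WORLD, 1);
        }

        int rank = 0, size = 0;
        MPI_Comm_rank(MPI_COMM_WORLD, &rank);
        MPI_Comm_size(MPI_COMM_WORLD, &size);

        // The helper thread blocks in MPI_Recv; while it is blocked,
        // the library keeps the incoming message progressing, so the
        // main thread never has to poll.
        std::thread receiver([] {
            int msg = 0;
            MPI_Recv(&msg, 1, MPI_INT, MPI_ANY_SOURCE, 0,
                     MPI_COMM_WORLD, MPI_STATUS_IGNORE);
            std::printf("got %d\n", msg);
        });

        // The main thread stays free for computation; here it just
        // passes one int around a ring.
        int payload = rank;
        MPI_Send(&payload, 1, MPI_INT, (rank + 1) % size, 0,
                 MPI_COMM_WORLD);

        receiver.join();
        MPI_Finalize();
        return 0;
    }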
Hi Jeff
I am using OMPI on a MacBook Pro (for the moment).
> What's "extremely slowly", and what does your test program do?
For example, the test programs of Zoltan (the load-balancing library
from Sandia) never finish, while normally they take a fraction of a
second.
By "asynchronous progr
perhaps somebody has some experience
already.
Tomek
Hi
Now that I have compiled my code with Open MPI 1.3.3, here is a new
problem:
When asynchronous progress is enabled, even the simplest test
programs run extremely slowly.
Is this a common issue?
Tomek
OK - I have fixed it by putting -L/opt/openmpi/lib at the very
beginning of the link line:
  mpicc -L/opt/openmpi/lib -o app.exe ...
But something is still wrong with dyld anyhow.
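One generic way to double-check what dyld actually resolved (a
standard macOS debugging step, not something suggested in this
thread) is:

$ otool -L app.exe

The libmpi entry in the output should point into /opt/openmpi/lib; if
it points at a system copy instead, the link order is still wrong.
Setting DYLD_LIBRARY_PATH to /opt/openmpi/lib can serve as a quick
run-time test.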
On 19 Aug 2009, at 21:04, Jody Klymak wrote:
> Hi Tomek,
> I'm using 10.5.7, and just went through a painful process [...]
Hi Jody
> What is the result of "which mpicc" (or whatever you are using for
> your compiling/linking)? I'm pretty sure that's where the library
> paths get set, and if you are calling /usr/bin/mpicc you will get
> the wrong library paths in the executable.
Here you are:
$ which mpicc
/opt/openmpi/bin/mpicc
The /opt/openmpi/bin path is properly set, and ompi_info outputs the
right info.
Any hints will be appreciated.
Tomek