On May 15, 2009, at 1:23 AM, Silviu Groza wrote:
I still have not solved these errors.
I need help installing the Dalton quantum chemistry program with Open MPI.
Thank you.
---> Linking sequential dalton.x ...
mpif77.openmpi -march=x86-64 -O3 -ffast-math -fexpensive-optimizations -funroll-loops -fno
Hah; this is probably at least tangentially related to
http://www.open-mpi.org/faq/?category=building#pathscale-broken-with-mpi-c++-api
I'll be kind and say that Pathscale has been "unwilling to help on
these kinds of issues" with me in the past as well. :-)
It's not entirely clear fro
Simone Pellegrini wrote:
Sorry for the delay, but I did some additional experiments to find out
whether the problem was Open MPI or GCC!
The program just hangs... and never terminates! I am running on an SMP
machine with 32 cores; actually it is a Sun Fire X4600 X2. (8
quad-core Barcelona AMD
François PELLEGRINI wrote:
users-requ...@open-mpi.org wrote:
Date: Thu, 14 May 2009 17:06:07 -0700
From: Eugene Loh
Subject: Re: [OMPI users] OpenMPI deadlocks and race conditions ?
To: Open MPI Users
François PELLEGRINI wrote:
I sometimes run into deadlocks in O
Well,
I spoke with Gautam Chakrabarti at Pathscale. It seems the long and short of it is
that using OpenMP with C++ with a GNU 3.3 (RHEL4) frontend creates some
limitations inside pathCC. On a RHEL4 system, the compiler activates the
proper frontend for GCC 3.3; this is what creates the crash.
Hi all - I have a bizarre OpenMPI hanging problem. I'm running an MPI code
called CP2K (related to, but not the same as cpmd). The complications of the
software aside, here are the observations:
At the base is a serial code that uses system() calls to repeatedly invoke
mpirun cp2k.pop
Hi Pavel
This is not my league, but here are some
helpful CPMD links (code, benchmarks):
http://www.cpmd.org/
http://www.cpmd.org/cpmd_thecode.html
http://www.theochem.ruhr-uni-bochum.de/~axel.kohlmeyer/cpmd-bench.html
IHIH (I hope it helps)
Gus Correa
Noam Bernstein wrote:
On May 18, 2009, at 12:50 PM, Pavel
There is another option here -- Fortran compilers can aggressively
move code around, particularly when they don't know about MPI inter-
function dependencies.
This usually only happens with non-blocking MPI communication
functions, though. Are you using those, perchance?
On May 18, 2009,
On May 18, 2009, at 12:50 PM, Pavel Shamis (Pasha) wrote:
Roman,
Can you please share with us the MVAPICH numbers that you get? Also, which
MVAPICH version are you using?
The default MVAPICH and Open MPI IB tuning is very similar, so it is strange
to see such a big difference. Do you know what kind of collective operations
are used in this specific application?
Hi Roman
Note that in 1.3.0 and 1.3.1 the default ("-mca mpi_leave_pinned 1")
had a glitch. In my case it appeared as a memory leak.
See this:
http://www.open-mpi.org/community/lists/users/2009/05/9173.php
http://www.open-mpi.org/community/lists/announce/2009/03/0029.php
One workaround is to
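The workaround itself is cut off above. Whatever it was, note that Open MPI also lets MCA parameters be set persistently in a per-user config file instead of on every mpirun command line; a sketch (the value 0 here is an assumption, chosen to turn off the 1.3.0/1.3.1 default being discussed):

```
# $HOME/.openmpi/mca-params.conf
# Disable the leave-pinned default that had a glitch in 1.3.0/1.3.1
mpi_leave_pinned = 0
```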
I've been using --mca mpi_paffinity_alone 1 in all simulations. Concerning "-mca
mpi_leave_pinned 1", I tried it with openmpi 1.2.X versions and it
makes no difference.
Best regards
Roman
On Mon, May 18, 2009 at 4:57 PM, Pavel Shamis (Pasha) wrote:
On May 14, 2009, at 3:20 PM, Valmor de Almeida wrote:
I guess another way to ask is: is it guaranteed that A and B are
contiguous?
Yes.
and the MPI communication correctly sends the data?
I'm not sure what you're asking, but the code looks as though it ought
to work.
Iain
Thanks for that comment.
I thought that was what I was doing when I used the full path name
/usr/local/openmpi-1.3/bin/mpif90
Is that not true?
John Boccio
On May 18, 2009, at 11:31 AM, Jeff Squyres wrote:
Check first to make sure you're using the mpif90 in
/usr/local/openmpi-1.3/bin -- OS X ships with an Open MPI installation that
does not include F90 support. The default OS X Open MPI install may be in
your PATH before the Open MPI you just installed in /usr/local.
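The shadowing effect described here can be reproduced with any tool name; below is a generic sketch in which mytool is a made-up stand-in for mpif90, and the /tmp/demo paths are arbitrary:

```shell
#!/bin/sh
# Two installs of the same tool name; PATH order decides which runs,
# just as the stock OS X mpif90 can shadow /usr/local/openmpi-1.3/bin.
mkdir -p /tmp/demo/system/bin /tmp/demo/local/bin
printf '#!/bin/sh\necho stock\n' > /tmp/demo/system/bin/mytool
printf '#!/bin/sh\necho new\n'   > /tmp/demo/local/bin/mytool
chmod +x /tmp/demo/system/bin/mytool /tmp/demo/local/bin/mytool

PATH=/tmp/demo/system/bin:/tmp/demo/local/bin mytool   # prints "stock"
PATH=/tmp/demo/local/bin:/tmp/demo/system/bin mytool   # prints "new"
```

`type -a mpif90` (or `which -a mpif90`) shows every copy on the PATH in lookup order, which makes this kind of shadowing easy to spot.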
On May 18, 2009, at 10:
1) I was told to add "-mca mpi_leave_pinned 0" to avoid problems with
InfiniBand. This was with Open MPI 1.3.1. Not sure if the problems were
fixed in 1.3.2, but I am hanging on to that setting j
Actually, for the 1.2.X versions I recommend enabling leave pinned:
"-mca mpi_leave_pinned 1"
Hi, I need to use mpif90 for some work on a parallel cluster for galaxy-galaxy collision research. I am certainly not an expert at using UNIX to compile big packages like openmpi. I have listed below all (I hope) relevant information and included output files (compressed) as an attachment. Thanks for any