Mathieu Gontier wrote:
Dear OpenMPI users
I am dealing with an arithmetic problem. In fact, I have two variants
of my code: one in single precision and one in double precision. When I
compare the two executables built with MPICH, one can observe an
expected difference in performance: 115.7-se
Hi,
First of all thanks for your insight !
*Do you get a corefile?*
I don't get a core file, but I get a file called _FIL001. It doesn't contain
any debugging symbols. It's most likely a digested version of the input file
given to the executable: ./myexec < inputfile.
*there are no line numbers p
Hi Benjamin
I would just rebuild OpenMPI withOUT the compiler flags that change the standard
sizes of "int" and "float" (do a "make distclean" first!), then recompile your
program,
and see how it goes.
I don't think you are gaining anything by trying to change the standard
"int/integer" and
"re
Hi Daofeng
It is hard to tell what is happening in the Infiniband side of the problem.
Did somebody perhaps remove the Infiniband card from this machine?
Was it ever there?
Did somebody perhaps change the Linux kernel modules that are loaded
(perhaps changing /etc/module.config or similar)?
Maybe
On 12/5/2010 3:22 PM, Gustavo Correa wrote:
I would just rebuild OpenMPI withOUT the compiler flags that change the standard
sizes of "int" and "float" (do a "make distclean" first!), then recompile your
program,
and see how it goes.
I don't think you are gaining anything by trying to change th
Unfortunately DRAGON is old FORTRAN77. Integers have been used instead of
pointers. If I compile it in 64 bits without -fdefault-integer-8, the
so-called pointers will remain 32 bits. Problems could also arise from its
data structure handlers.
Therefore -fdefault-integer-8 is absolutely necessa
hi,
I ran into a problem recently when I tested the MPI_Send and MPI_Recv
functions. When I run the following code, the processes hang and I
found there was no data transmission on my network at all.
BTW: I ran this test on two X86-64 computers with 16GB of memory,
running Linux.
1 #in
On 12/5/2010 7:13 PM, 孟宪军 wrote:
hi,
I ran into a problem recently when I tested the MPI_Send and MPI_Recv
functions. When I run the following code, the processes hang and I
found there was no data transmission on my network at all.
BTW: I ran this test on two X86-64 computers with 16GB me
hi,
On my computers (X86-64), sizeof(int) = 4, but
sizeof(long) = sizeof(double) = sizeof(size_t) = 8. When I checked my mpi.h file,
I found that the definition of sizeof(int) is correct. Meanwhile, I
think the mpi.h file was generated according to my compute environment when
I compiled the