Hi Diego,
I don't know which CPU/compiler you are using or what the -r8
option means, but DISPLACEMENTS(2) and DISPLACEMENTS(3) are
incorrect if an integer is 4 bytes and a real is 8 bytes.
In that case there is usually an alignment gap between ip and RP.
See the description of datatype alignment in the MPI Standard.
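A minimal sketch of how the displacements could be computed portably is below. It uses MPI_GET_ADDRESS so that whatever padding the compiler inserts is picked up automatically; the block lengths and the choice of MPI_DOUBLE_PRECISION for the -r8-promoted reals are assumptions based on the type shown later in this thread:

  use mpi
  type(particle) :: p
  integer :: newtype, ierr
  integer :: blocklens(3), types(3)
  integer(kind=MPI_ADDRESS_KIND) :: disps(3), base

  ! ask MPI where each component actually lives in memory
  call MPI_GET_ADDRESS(p,    base,     ierr)
  call MPI_GET_ADDRESS(p%ip, disps(1), ierr)
  call MPI_GET_ADDRESS(p%RP, disps(2), ierr)
  call MPI_GET_ADDRESS(p%QQ, disps(3), ierr)
  disps = disps - base   ! offsets relative to the start of the type,
                         ! including any alignment gap after ip

  blocklens = (/ 1, 2, 4 /)
  types     = (/ MPI_INTEGER, MPI_DOUBLE_PRECISION, MPI_DOUBLE_PRECISION /)

  call MPI_TYPE_CREATE_STRUCT(3, blocklens, disps, types, newtype, ierr)
  call MPI_TYPE_COMMIT(newtype, ierr)

Computed this way, the displacements match whatever layout the compiler actually chooses, rather than hand-counted byte offsets that break when sizes or alignment change.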
Hi Ralph Castain,
Thanks very much for your reply. I am using libhdfs, a C API to HDFS. I
will ask the Hadoop folks for help.
On Fri, Oct 3, 2014 at 12:14 AM, Ralph Castain wrote:
> Hmmm... I would guess you should talk to the Hadoop folks as the problem
> seems to be a conflict between valgrind and HDFS.
Hi Gus,
Thanks for the suggestions!
I know that QCSCRATCH and QCLOCALSCR are not the problem. When I set
QCSCRATCH="." and unset QCLOCALSCR, it writes all the scratch files to the
current directory, which is the behavior I want. The environment variables are
correctly passed through the mpirun command.
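For what it's worth, with Open MPI the variables can also be forwarded to every rank explicitly via mpirun's -x option; the executable name and arguments below are just placeholders:

  export QCSCRATCH=.
  mpirun -np 4 -x QCSCRATCH ./qchem_binary input.in

Each -x exports the named variable from the launching shell's environment to all launched processes, which makes it easy to confirm what the remote processes actually see.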
Hi Lee-Ping,
Computational Chemistry is Greek to me.
However, on p. 12 of the Q-Chem 3.2 manual
(PDF online:
http://www.q-chem.com/qchem-website/doc_for_web/qchem_manual_3.2.pdf)
there are explanations of the meaning of QCSCRATCH and
QCLOCALSCR, etc., which, as Ralph pointed out, seem to be a s
Hi Ralph,
I've been troubleshooting this issue and communicating with Blue Waters
support. It turns out that Q-Chem and Open MPI are both trying to open
sockets, and I get different error messages depending on which one fails.
As an aside, I don't know why Q-Chem needs sockets of its own to
Dear all.
I have a problem with MPI_TYPE_CREATE_STRUCT and, as a consequence,
with SENDRECV.
I have this derived type:

type particle
   integer :: ip
   real    :: RP(2)
   real    :: QQ(4)
end type particle

When I compile in double precision with:

mpif90 -r8 -fpp -DPARALLEL *.f90
So when I c
Hmmm... I would guess you should talk to the Hadoop folks, as the problem seems
to be a conflict between valgrind and HDFS. Does valgrind even support Java
programs? I honestly have never tried to do that before.
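One unrelated thing that sometimes helps when valgrinding MPI codes: if you ever switch from callgrind to memcheck, Open MPI installs a suppression file that hides known false positives coming from the MPI library itself (the install prefix and program arguments below are placeholders):

  mpirun -np 3 valgrind --tool=memcheck \
      --suppressions=<openmpi-prefix>/share/openmpi/openmpi-valgrind.supp \
      ./myprogram <args>

That won't explain the HDFS problem, but it keeps the valgrind output focused on your own code.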
On Oct 2, 2014, at 4:40 AM, XingFENG wrote:
> Hi there,
>
> I am using valgrind
Hi there,
I am using valgrind to help analyse my MPI program.
I use the HDFS file system to read/write data. If I run the code without
valgrind, it works correctly. However, if I run with valgrind, for example,
mpirun -np 3 /usr/bin/valgrind --tool=callgrind ./myprogram /input_file
/output_fi