Hi,
On Apr 21, 2009, at 5:53 PM, sheerychen wrote:
Yes, both versions are compiled as the MPI version. However, the start-up MPI messages are different. For MPICH, it reports a 1D domain decomposition such as 3*1*1, and only one output file is produced. For MPICH2, however, no such information appears, and it produces many files, e.g. (with 8 nodes): complex_em ...
On Apr 21, 2009, at 5:04 PM, sheerychen wrote:
Hello, everybody. I have a question about running mdrun_mpi in parallel. I suspect that sometimes a parallel run of mdrun_mpi cannot make use of domain decomposition.
This is the case when I submit batch jobs on the computer cluster, which has MPICH2 installed. In that case, I use the command ...
However, on my personal computer, which has MPICH (not MPICH2) installed, I use a command like this: ''mpirun -np 3 /usr/bin/mdrun_mpi -deffnm *** -v''. It shows the domain decomposition, and the speed is quicker than with 8 CPUs.
What is the problem ...
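For reference, the two launch styles being compared would look roughly like the lines below. This is only an illustrative sketch: the mpiexec line, the process count of 8, and the use of complex_em in place of the masked ''***'' prefix are assumptions, not taken from the poster's actual batch script.

    # MPICH launch on the personal computer, as quoted above
    # (complex_em stands in for the masked ''***'' output prefix):
    mpirun -np 3 /usr/bin/mdrun_mpi -deffnm complex_em -v

    # A comparable MPICH2 launch of 8 processes on the cluster would
    # typically go through mpiexec (assumed; the actual batch script
    # is not shown in this thread):
    mpiexec -n 8 /usr/bin/mdrun_mpi -deffnm complex_em -v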