Re: [gmx-users] Help: Gromacs Installation

2011-04-27 Thread Mark Abraham
On 4/28/2011 4:44 AM, Hrachya Astsatryan wrote: Dear Roland, We need to run GROMACS across the nodes of our cluster (in order to use all of the cluster's computational resources), which is why we need MPI (instead of using threads or OpenMP within a single SMP node). I can run simple MPI examples ...

Re: [gmx-users] Help: Gromacs Installation

2011-04-27 Thread Hrachya Astsatryan
Dear Roland, We need to run GROMACS across the nodes of our cluster (in order to use all of the cluster's computational resources), which is why we need MPI (instead of using threads or OpenMP within a single SMP node). I can run simple MPI examples, so I guess the problem is in the implementat ...
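For reference, a run of this kind is normally started through the MPI job launcher rather than by running mdrun directly on one node. A minimal sketch, assuming an Open MPI style mpirun, two 8-core nodes listed in a hostfile, and the install --prefix quoted later in this thread; the hostfile name and the input file name are illustrative:

  # hosts.txt lists the compute nodes, one per line
  mpirun -np 16 -hostfile hosts.txt /localuser/armen/gromacs/bin/mdrun -deffnm d.dppc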

Re: [gmx-users] Help: Gromacs Installation

2011-04-27 Thread Roland Schulz
This seems to be a problem with your MPI library. Test whether other MPI programs have the same problem. If it is not GROMACS-specific, please ask on the mailing list of your MPI library. If it only happens with GROMACS, be more specific about what your setup is (what MPI library, what ha ...
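A quick way to follow this advice is to check the MPI installation independently of GROMACS, and to verify which MPI library the mdrun binary is actually linked against. A minimal sketch, assuming an Open MPI / MPICH style mpirun and the install --prefix quoted below (the bin/ path is an assumption):

  mpirun -np 4 hostname                                 # should print one hostname per rank, spread over the nodes
  ldd /localuser/armen/gromacs/bin/mdrun | grep -i mpi  # shows which MPI library mdrun is linked against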

Re: [gmx-users] Help: Gromacs Installation

2011-04-27 Thread Hrachya Astsatryan
Dear Mark Abraham & all, We tried other benchmark systems, such as d.dppc on 4 processors, but we have the same problem (one process uses about 100% CPU, the others 0%). After a while we receive the following error: Working directory is /localuser/armen/d.dppc Running on host wn1.ysu-cluster.gr ...
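The "Working directory is ..." and "Running on host ..." lines look like output from a batch job script rather than from GROMACS itself. A minimal sketch of such a script, assuming a PBS-style scheduler; the resource request, the echo lines, and the input file name are assumptions:

  #!/bin/sh
  #PBS -l nodes=1:ppn=4
  cd /localuser/armen/d.dppc
  echo "Working directory is $PWD"
  echo "Running on host $(hostname)"
  mpirun -np 4 /localuser/armen/gromacs/bin/mdrun -s topol.tpr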

Re: [gmx-users] Help: Gromacs Installation

2011-04-22 Thread Mark Abraham
On 4/22/2011 5:40 PM, Hrachya Astsatryan wrote: Dear all, I would like to inform you that I have installed the gromacs 4.0.7 package on the cluster (the nodes are 8-core Intel machines running RHEL4-based Scientific Linux) with the following steps:
yum install fftw3 fftw3-devel
./configure --prefix=/localuser/armen/gromacs --enable-mpi ...

[gmx-users] Help: Gromacs Installation

2011-04-22 Thread Hrachya Astsatryan
Dear all, I would like to inform you that I have installed the gromacs 4.0.7 package on the cluster (the nodes are 8-core Intel machines running RHEL4-based Scientific Linux) with the following steps:
yum install fftw3 fftw3-devel
./configure --prefix=/localuser/armen/gromacs --enable-mpi
Also I hav ...
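For completeness, a full MPI build of GROMACS 4.0.x with the autoconf build system would look roughly like the sketch below. The _mpi program suffix and the mdrun-only make targets follow the installation instructions of that era; treat them as assumptions here rather than a record of what was actually run on this cluster:

  yum install fftw3 fftw3-devel
  ./configure --prefix=/localuser/armen/gromacs --enable-mpi --program-suffix=_mpi
  make mdrun            # only mdrun needs the MPI build
  make install-mdrun    # installs mdrun_mpi under the chosen --prefix

The analysis tools are usually built once without MPI, and only mdrun is rebuilt and installed with MPI support under a distinct program suffix.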