You need to contact your cluster administrator for instructions on how to submit jobs to the cluster. Usually you have to create some kind of shell script that specifies various parameters of your job (number of nodes, cores, wall-clock time, etc.) and then submit it to a queue system.
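For example, on a cluster running SLURM such a script could look roughly like this (a minimal sketch; the directives and file names are assumptions, and the details will differ on PBS/Torque, SGE, or whatever your site uses):

    #!/bin/bash
    #SBATCH --job-name=gmx_md      # job name shown in the queue
    #SBATCH --nodes=8              # request all 8 nodes
    #SBATCH --ntasks-per-node=2    # one MPI rank per core (2 cores/node)
    #SBATCH --time=24:00:00        # wall-clock limit

    # 8 nodes x 2 cores = 16 MPI ranks in total
    mpirun -np 16 mdrun_mpi -v -s topol.tpr

You would then submit it with something like 'sbatch job.sh', and the queue system takes care of placing the ranks on the compute nodes.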
In the run below, you most likely submitted the job to the 'head node', i.e. the node you first log in to when working with the cluster, so all the MPI ranks run on that single machine and compete for its cores instead of being spread over the compute nodes.

Andreas

>-----Original Message-----
>From: gmx-users-boun...@gromacs.org [mailto:gmx-users-boun...@gromacs.org] On Behalf Of pratibha kapoor
>Sent: 07 October 2013 14:46
>To: gmx-users@gromacs.org
>Subject: [gmx-users] Re: parallel simulation
>
>To add: I am running simulations on an institute cluster with 8 nodes (2 cores each).
>Please suggest a way for me to run one simulation on all available nodes, cores and threads.
>Thanks in advance.
>
>On Mon, Oct 7, 2013 at 1:55 PM, pratibha kapoor
><kapoorpratib...@gmail.com> wrote:
>
>> I would like to run one simulation in parallel so that it utilises all
>> the available nodes and cores. For that, I have compiled GROMACS with
>> MPI enabled and also installed Open MPI on my machine.
>> I am using the following command:
>>
>>     mpirun -np 4 mdrun_mpi -v -s *.tpr
>>
>> When I use the top command, I get:
>>
>>     PID    USER  PR  NI  VIRT  RES  SHR   S  %CPU  %MEM  TIME+    COMMAND
>>     22449  root  20  0   107m  59m  3152  R  25    2.9   0:05.42  mdrun_mpi
>>     22450  root  20  0   107m  59m  3152  R  25    2.9   0:05.41  mdrun_mpi
>>     22451  root  20  0   107m  59m  3152  R  25    2.9   0:05.41  mdrun_mpi
>>     22452  root  20  0   107m  59m  3152  R  25    2.9   0:05.40  mdrun_mpi
>>
>> Similarly, when I use mpirun -np 2 mdrun_mpi -v -s *.tpr, I get:
>>
>>     PID    USER  PR  NI  VIRT  RES  SHR   S  %CPU  %MEM  TIME+    COMMAND
>>     22461  root  20  0   108m  59m  3248  R  50    3.0   5:58.64  mdrun_mpi
>>     22462  root  20  0   108m  59m  3248  R  50    3.0   5:58.56  mdrun_mpi
>>
>> If I look at the %CPU column, it is actually 100/(no. of processes).
>> Why is the CPU not 100% utilised for each process?
>> Also, if I compare the performance, it is significantly hampered.
>> Please suggest a way for me to run one simulation on all
>> available nodes, cores and threads.
>> Thanks in advance.
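For completeness: if the cluster has no queue system and you are allowed to launch jobs directly, Open MPI can spread the ranks over the nodes with a hostfile (a minimal sketch; the node names are hypothetical and passwordless SSH between the nodes is assumed):

    # hostfile -- one line per node, 2 slots (= cores) each; names are hypothetical
    node01 slots=2
    node02 slots=2
    # ... and so on up to node08

    mpirun -np 16 --hostfile hostfile mdrun_mpi -v -s topol.tpr

Without a hostfile or a queue allocation, mpirun starts all ranks on the local machine, which would match the mdrun_mpi processes sharing a single machine in the top output above.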