"-nt" is mostly a backward compatibility option and sets the total
number of threads (per rank). Instead, you should set both "-ntmpi"
(or -np with MPI) and "-ntomp". However, note that unless a single
mdrun uses *all* cores/hardware threads on a node, it won't pin the
threads to cores. Failing to pin threads will lead to considerable
performance degradation; just tried and depending on how (un)lucky the
thread placement and migration is, I get 1.5-2x performance
degradation with running two mdrun-s on a single dual-socket node
without pining threads.
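
For example, on a machine like Albert's (32 hardware threads, 4
GPUs), something along these lines should split the node cleanly
between the two jobs - the tpr file names are just placeholders and
the right pin offsets depend on how your cores are numbered, so do
check mdrun -h for your 4.6.x version and the note on pinning in the
log output:

mdrun -s md1.tpr -gpu_id 01 -ntmpi 2 -ntomp 8 -pin on -pinoffset 0
mdrun -s md2.tpr -gpu_id 23 -ntmpi 2 -ntomp 8 -pin on -pinoffset 16

Each run then uses 2 thread-MPI ranks x 8 OpenMP threads = 16
threads, one rank per GPU, and the second run is pinned to the other
half of the node instead of competing for the cores the first one is
using.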

My advice is (yet again) that you should check the
http://www.gromacs.org/Documentation/Acceleration_and_parallelization
wiki page, in particular the section on how to run simulations. If
things are not clear, please ask for clarification - input and
constructive criticism should help us improve the wiki.

We have been patiently pointing everyone to the wiki, so asking
without reading up first is neither productive nor really fair.

Cheers,
--
Szilárd


On Tue, Jun 4, 2013 at 11:22 AM, Chandan Choudhury <iitd...@gmail.com> wrote:
> Hi Albert,
>
> I think using the -nt flag (e.g. -nt 16) with mdrun would solve your problem.
>
> Chandan
>
>
> --
> Chandan kumar Choudhury
> NCL, Pune
> INDIA
>
>
> On Tue, Jun 4, 2013 at 12:56 PM, Albert <mailmd2...@gmail.com> wrote:
>
>> Dear:
>>
>>  I've got four GPUs in one workstation. I am trying to run two GPU jobs
>> with the commands:
>>
>> mdrun -s md.tpr -gpu_id 01
>> mdrun -s md.tpr -gpu_id 23
>>
>> There are 32 CPU cores in this workstation. I found that each job tries to
>> use all of them, so there are 64 threads running when these two GPU mdrun
>> jobs are submitted. Moreover, one of the jobs stopped after running for a
>> short time, probably because of this CPU contention.
>>
>> I am just wondering: how can we distribute the CPU cores when we run two
>> GPU jobs on a single workstation?
>>
>> thank you very much
>>
>> best
>> Albert