Albert wrote:
  Hello,
  here is my log file for mdrun:
Writing final coordinates.
step 100000, remaining runtime: 0 s
 Average load imbalance: 10.8 %
 Part of the total run time spent waiting due to load imbalance: 4.3 %
Steps where the load balancing was limited by -rdd, -rcon and/or -dds: X 0 % Y 19 % Z 0 %
 Average PME mesh/force load: 0.665
 Part of the total run time spent waiting due to PP/PME imbalance: 5.7 %

NOTE: 5.7 % performance was lost because the PME nodes
      had less work to do than the PP nodes.
      You might want to decrease the number of PME nodes
      or decrease the cut-off and the grid spacing.


NOTE: 9 % of the run time was spent communicating energies,
      you might want to use the -gcom option of mdrun


        Parallel run - timing based on wallclock.

               NODE (s)   Real (s)      (%)
       Time:   2435.554   2435.554    100.0
                       40:35
               (Mnbf/s)   (GFlops)   (ns/day)  (hour/ns)
Performance:    409.701     22.103      7.095      3.383

gcq#149: "It's Against the Rules" (Pulp Fiction)



As we can see at the end of this log file, the performance is reported as 7.1 ns/day and 3.4 ns/hour. I am very confused by this output. How could this happen? Are there only about two hours in each day?


I don't understand the problem.  7.095 ns/day * 3.383 hr/ns = 24.002385 hr/day.  The second number is hours per ns, not ns per hour; the two figures are just reciprocal ways of reporting the same speed.
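
As a quick sanity check, a few lines of Python (a minimal sketch using the two numbers from the Performance line above) make the relationship explicit: hr/ns is simply 24 divided by ns/day.

    # Values copied from the Performance line of the log above.
    ns_per_day = 7.095
    hours_per_ns = 3.383

    # hour/ns is the reciprocal of ns/day, scaled by 24 hours per day.
    print(24.0 / ns_per_day)          # ~3.383 -> matches the (hour/ns) column
    print(ns_per_day * hours_per_ns)  # ~24.0  -> one full day, as expected

Running this prints roughly 3.383 and 24.0, so the log is self-consistent.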

-Justin

--
========================================

Justin A. Lemkul
Ph.D. Candidate
ICTAS Doctoral Scholar
MILES-IGERT Trainee
Department of Biochemistry
Virginia Tech
Blacksburg, VA
jalemkul[at]vt.edu | (540) 231-9080
http://www.bevanlab.biochem.vt.edu/Pages/Personal/justin

========================================