Hi Carsten,

Thanks for your suggestion! However, my simulation will run for about 200 ns
at roughly 10 ns per day (24 hours is the maximum wall time for a single job
on the cluster I am using), so running with -noappend would generate about 20
separate trajectory files.
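
If I did end up using -noappend, my understanding (just a sketch based on the
4.5 documentation; the .partNNNN file names and the tool options are my
assumption, please correct me) is that each 24-hour chunk would be restarted
like this and the parts merged afterwards:

  # restart each chunk without appending (Carsten's workaround)
  mpiexec -np 64 mdrun -deffnm pre -npme 32 -maxh 2 -table table -cpi pre.cpt -noappend

  # afterwards, concatenate the ~20 parts back into single files
  trjcat  -f pre.part*.xtc -o pre_all.xtc   # or *.trr, depending on the output
  eneconv -f pre.part*.edr -o pre_all.edr
  cat pre.part*.log > pre_all.log

That is doable, but I would much rather keep appending to a single set of
output files if the locking problem can be solved.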

Can anyone tell me what is causing this error?
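
In case it is relevant: my guess (only a guess) is that "Function not
implemented" means the file-locking call mdrun uses on pre.log is simply not
supported by the filesystem my run directory lives on (some NFS or parallel
filesystems behave like this). The following should at least show which
filesystem that is; the commands are standard Linux tools, the interpretation
is my assumption:

  # filesystem type of the run directory
  df -T .
  stat -f -c 'type: %T' .

  # mount options for that filesystem (adjust the grep pattern to the
  # mount point reported above)
  mount | grep '<mount point>'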

regards,
Baofu Qiao


On 11/26/2010 09:07 AM, Carsten Kutzner wrote:
> Hi,
>
> as a workaround you could run with -noappend and later
> concatenate the output files. Then you should have no
> problems with locking.
>
> Carsten
>
>
> On Nov 25, 2010, at 9:43 PM, Baofu Qiao wrote:
>
>   
>> Hi all,
>>
>> I just recompiled GROMACS 4.0.7, and this error does not occur there. But
>> 4.0.7 is about 30% slower than 4.5.3, so I would really appreciate it if
>> anyone could help me with this!
>>
>> best regards,
>> Baofu Qiao
>>
>>
>> On 2010-11-25 20:17, Baofu Qiao wrote:
>>     
>>> Hi all,
>>>
>>> I got the following error message while extending the simulation with
>>> this command:
>>> mpiexec -np 64 mdrun -deffnm pre -npme 32 -maxh 2 -table table -cpi pre.cpt 
>>> -append 
>>>
>>> The previous simulation finished successfully. I wonder why pre.log is
>>> locked, and what the strange "Function not implemented" message means.
>>>
>>> Any suggestion is appreciated!
>>>
>>> *********************************************************************
>>> Getting Loaded...
>>> Reading file pre.tpr, VERSION 4.5.3 (single precision)
>>>
>>> Reading checkpoint file pre.cpt generated: Thu Nov 25 19:43:25 2010
>>>
>>> -------------------------------------------------------
>>> Program mdrun, VERSION 4.5.3
>>> Source code file: checkpoint.c, line: 1750
>>>
>>> Fatal error:
>>> Failed to lock: pre.log. Function not implemented.
>>> For more information and tips for troubleshooting, please check the GROMACS
>>> website at http://www.gromacs.org/Documentation/Errors
>>> -------------------------------------------------------
>>>
>>> "It Doesn't Have to Be Tip Top" (Pulp Fiction)
>>>
>>> Error on node 0, will try to stop all the nodes
>>> Halting parallel program mdrun on CPU 0 out of 64
>>>
>>> gcq#147: "It Doesn't Have to Be Tip Top" (Pulp Fiction)
>>>
>>> --------------------------------------------------------------------------
>>> MPI_ABORT was invoked on rank 0 in communicator MPI_COMM_WORLD
>>> with errorcode -1.
>>>
>>> NOTE: invoking MPI_ABORT causes Open MPI to kill all MPI processes.
>>> You may or may not see output from other processes, depending on
>>> exactly when Open MPI kills them.
>>> --------------------------------------------------------------------------
>>> --------------------------------------------------------------------------
>>> mpiexec has exited due to process rank 0 with PID 32758 on
>>>
>>>       
>
>
>
>
>   


-- 
************************************
 Dr. Baofu Qiao
 Institute for Computational Physics
 Universität Stuttgart
 Pfaffenwaldring 27
 70569 Stuttgart

 Tel: +49(0)711 68563607
 Fax: +49(0)711 68563658

