On Thu, Apr 11, 2013 at 8:14 AM, 陈照云 <chenzhaoyu...@gmail.com> wrote:
> I have tested gromacs-4.6.1 with a K20, but when I run mdrun I meet some
> problems.
>
> 1. Configure options are -DGMX_MPI=ON, -DGMX_DOUBLE=ON, -DGMX_GPU=OFF.
> If I run in parallel with mpirun, it fails:
>
> "Note: file tpx version 58, software tpx version 83

You've prepared your input file with some ancient version of grompp and
run into problems with a modern version of mdrun trying to run on
hardware that didn't exist when the code for your version of grompp was
compiled. That's expected. Use a matching version of grompp.

> Fatal error in PMPI_Bcast: Invalid buffer pointer, error stack:
> PMPI_Bcast(2011): MPI_Bcast(buf=(nil), count=56, MPI_BYTE, root=0,
> MPI_COMM_WORLD) failed
> PMPI_Bcast(1919): Null buffer pointer
> APPLICATION TERMINATED WITH THE EXIT STRING: Hangup (signal 1)"
>
> 2. Configure options are -DGMX_MPI=ON, -DGMX_GPU=ON, -DGMX_DOUBLE=OFF.
> If I run with the GPU, the program fails.
>
> Running one process with the GPU:
>
> "Reading file topol.tpr, VERSION 4.5.1-dev-20100917-b1d66 (single
> precision)
>
> Note: file tpx version 73, software tpx version 83

Same. And this beta version of GROMACS 4.5.1 should have been deleted in
October 2010. There is no excuse for using it now with 4.6.1!

Mark

> NOTE: GPU(s) found, but the current simulation can not use GPUs
> To use a GPU, set the mdp option: cutoff-scheme = Verlet
> (for quick performance testing you can use the -testverlet option)
>
> Using 1 MPI process
>
> 1 GPU detected on host node11:
>   #0: NVIDIA Tesla K20c, compute cap.: 3.5, ECC: yes, stat: compatible
>
> Back Off! I just backed up ener.edr to ./#ener.edr.4#
>
> starting mdrun 'Protein'
>
> -1 steps, infinite ps.
> Segmentation Fault (core dumped)
>
> Running eight processes with the GPU:
>
> "Reading file topol.tpr, VERSION 4.5.1-dev-20100917-b1d66 (single precision)
>
> Note: file tpx version 73, software tpx version 83
>
> NOTE: GPU(s) found, but the current simulation can not use GPUs
> To use a GPU, set the mdp option: cutoff-scheme = Verlet
> (for quick performance testing you can use the -testverlet option)
>
> Non-default thread affinity set, disabling internal thread affinity
>
> Using 8 MPI processes
>
> 1 GPU detected on host node11:
>   #0: NVIDIA Tesla K20c, compute cap.: 3.5, ECC: yes, stat: compatible
>
> Back Off! I just backed up ener.edr to ./#ener.edr.6#
>
> starting mdrun 'Protein'
>
> -1 steps, infinite ps.
>
> APPLICATION TERMINATED WITH THE EXIT STRING: Hangup (signal 1)"
>
> Thanks for your help!
> --
> gmx-users mailing list    gmx-users@gromacs.org
> http://lists.gromacs.org/mailman/listinfo/gmx-users
> * Please search the archive at
> http://www.gromacs.org/Support/Mailing_Lists/Search before posting!
> * Please don't post (un)subscribe requests to the list. Use the
> www interface or send it to gmx-users-requ...@gromacs.org.
> * Can't post? Read http://www.gromacs.org/Support/Mailing_Lists
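For anyone hitting the same pair of errors: both logs point at the same root cause, a topol.tpr written by a much older grompp. A minimal sketch of the usual fix is the .mdp change that mdrun itself suggests in the log (the option and value below are real GROMACS 4.6 settings; everything else about the poster's input files is unknown here):

```
; in the run-parameters (.mdp) file, replace the old group cut-off
; scheme so that mdrun 4.6.x can offload non-bonded work to the GPU
cutoff-scheme = Verlet
```

Then regenerate the .tpr with the grompp from the same 4.6.1 installation instead of reusing a file produced by the 4.5.1 development build, so the file and software tpx versions match. Note also that the GPU kernels in 4.6 are single precision only, so the -DGMX_DOUBLE=OFF build is the one to use with the K20.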