> Did you try the solutions given at
> http://www.gromacs.org/pipermail/gmx-users/2006-August/023545.html ?
>
> Regards
>
> Christian
>
>
>
> liu xin wrote:
> > Dear GMXers
> > I got the error message "cannot compute sizeof (int)" when I tried to
> > configure GMX 3.3.1 on IBM PPC p575 AIX 5.3.
Hi guys,
just a short question: this is how I run an 8-np MPI mdrun job on my machine:
poe /home/usr/programs/gromacs331mpi/bin/mdrun -v -s 8np.tpr --procs 8
-nodes 8 -task_per_node 1 -single_thread yes -eager_limit 65535
-shared_memory
but it is much slower than the 1-np mode. I haven't used poe-mpi before…
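A minimal sketch of the usual sanity checks for GROMACS 3.3.x, with md.mdp/conf.gro/topol.top as assumed placeholder file names: the .tpr has to be built for 8 processors with grompp -np 8, mdrun should be given the same -np, and the header of md0.log shows how many nodes the run really got.

grompp -f md.mdp -c conf.gro -p topol.top -np 8 -o 8np.tpr
poe /home/usr/programs/gromacs331mpi/bin/mdrun -np 8 -v -s 8np.tpr \
    --procs 8 -nodes 8 -task_per_node 1 -single_thread yes \
    -eager_limit 65535 -shared_memory
grep -i nnodes md0.log    # should report 8 if the parallel setup was picked up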
Dear GMXers
I got the error message "cannot compute sizeof (int)" when I tried to
configure GMX 3.3.1 on IBM PPC p575 AIX 5.3. This is how I did it:
For FFTW 2.1.5 I set:
export CC="xlc_r -q64 -qhot"
export CXX="xlC_r -q64 -qhot"
export OBJECT_MODE=64
export FFLAGS="-O3 -qarch=pwr4 -qtune=pwr4 -qmaxmem=-1"
Hi Mark
After some searching on the web, I found that on AIX mpcc is the MPI C
compiler, mpCC is the MPI C++ compiler, and mpxlf is the MPI Fortran
compiler.
But when I tried to configure it like "configure --enable-mpi
MPICC=/usr/bin/mpcc", I still got an error complaining "Cannot compile and
link MPI code".
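A hedged sketch of the export-based route suggested in the replies below (the prefix is taken from later messages in this thread): MPICC is picked up from the environment by the 3.3.x configure script.

export MPICC=mpcc            # or the full path, /usr/bin/mpcc
./configure --enable-mpi --prefix=/hpc/gromacsmpi
make mdrun && make install-mdrun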
> …important). It should be set
> to the name of your mpi-enabled C compiler.
> Cheers,
>
> Erik
>
> On Oct 18, 2007, at 4:40 AM, liu xin wrote:
Hi Erik
you mean export MPICC=mpcc? OK, I will try that.
On 10/18/07, Erik Lindahl <[EMAIL PROTECTED]> wrote:
>
> Hi,
>
> On Oct 17, 2007, at 7:13 PM, liu xin wrote:
Thanks for your quick comment David,
but when I try
./configure --enable-mpi --prefix=/hpc/gromacsmpi
it complains that it cannot find an MPI compiler, even though I've already
exported mpcc=mpicc.
On 10/18/07, David van der Spoel <[EMAIL PROTECTED]> wrote:
>
> liu xin wrote:
Hello everyone
I tried to install GROMACS 3.3.1 on our IBM PowerPC with AIX 5.3; this is
how I did it:
./configure --enable-mpi=/usr/bin --prefix=/hpc/gromacsmpi
make mdrun
make install-mdrun
Then I tried to run mdrun on my system with 6 CPUs:
grompp -f md.mdp -c -p -o 6np.tpr -np 6 -shuffle -sort
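Two hedged notes on the commands above (file names assumed): --enable-mpi is an on/off switch, so giving it "=/usr/bin" does not tell configure where the MPI compiler is and, depending on how the value is tested, may even leave MPI disabled; and the -np passed to grompp has to match both mdrun's -np and the number of MPI processes the launcher starts.

export MPICC=mpcc                 # name of the MPI compiler wrapper is an assumption
./configure --enable-mpi --prefix=/hpc/gromacsmpi
grompp -f md.mdp -c conf.gro -p topol.top -np 6 -shuffle -sort -o 6np.tpr
mpirun -np 6 /hpc/gromacsmpi/bin/mdrun -np 6 -s 6np.tpr -v    # mpirun assumed; use your MPI's launcher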
> …any more because I find LAM better. Standard procedure:
> turn up the verbosity on everything, check the outputs, re-read the relevant
> manuals.
>
> ----- Original Message
> From: liu xin <[EMAIL PROTECTED]>
> To: Discussion list for GROMACS users
> Sent: Tuesday,
----- Original Message
> From: liu xin <[EMAIL PROTECTED]>
> To: Discussion list for GROMACS users
> Sent: Monday, October 8, 2007 1:51:12 PM
> Subject: [gmx-users] only one cpu "works" in my linux cluster
Dear GMXers
this is how I've done it so far:
grompp -f -c -p -o 12np.tpr -np 12
qsub -l node=6 12np.sh  (/home/me/mpich2/bin/mpirun -np 12 mdrun -s
12np.tpr -np 12)
then it seems my mdrun works fine, but when I ssh to each node to check the
CPU load with "top", I find that there's only one CPU working.
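A hedged sketch of what 12np.sh might look like under PBS with MPICH2 (option names are those of MPICH2's mpd-based mpiexec; file names assumed). Handing the scheduler's node list to mpiexec is what actually spreads the 12 ranks over the 6 nodes; without it they can all end up on one node, which would look exactly like only one busy CPU.

#!/bin/sh
#PBS -l nodes=6:ppn=2
cd $PBS_O_WORKDIR
/home/me/mpich2/bin/mpiexec -n 12 -machinefile $PBS_NODEFILE \
    mdrun -np 12 -s 12np.tpr -v

The mpd daemons also have to be running on those nodes first (mpdboot) for this flavour of MPICH2.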
ok.
On 10/1/07, Mark Abraham <[EMAIL PROTECTED]> wrote:
>
> liu xin wrote:
> > Thanks Mark
> >
> > But there's no standard error output at all for my problem,
>
> Are you running a batch job in a queueing system and explicitly or
> implicitly asking t
e you gave to me
On 10/1/07, Florian Haberl <[EMAIL PROTECTED]> wrote:
>
> Hi,
>
> On Sunday, 30. September 2007 19:12, liu xin wrote:
> > Thanks Mark
> >
> > But there's no standard error output at all for my problem, it seems
> mdrun
> > stagn
…<[EMAIL PROTECTED]> wrote:
>
> The log file won't be helpful if the problem is outside of GROMACS, and …
Dear GMXers
My mdrun stops when I try to run it on 8 nodes, but there's no error
message. Here's the end of md0.log:
"B. Hess and H. Bekker and H. J. C. Berendsen and J. G. E. M. Fraaije
LINCS: A Linear Constraint Solver for molecular simulations
J. Comp. Chem. 18 (1997) pp. 1463-1472
--
Dear GMX-users:
I've searched the list but couldn't find anything useful for me; this is
how I have done it so far:
source /opt/intel/fce/version/bin/ifortvars.sh
export F77=ifort
export CPPFLAGS=-I/home/xin/programs/fftw312/include
export LDFLAGS=-L/home/dong/programs/fftw312/lib
configure --prefix=…
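A hedged sketch of the complete sequence, with the install prefix and the FFTW location as placeholders: the -I and -L paths should point at the same FFTW 3 installation, F77 is exported before configure runs, and --with-fft=fftw3 makes the FFT choice explicit (option name assumed for 3.3.x).

source /opt/intel/fce/version/bin/ifortvars.sh
export F77=ifort
export CPPFLAGS=-I$HOME/programs/fftw312/include
export LDFLAGS=-L$HOME/programs/fftw312/lib
./configure --prefix=$HOME/gromacs331 --with-fft=fftw3
make && make install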
Hi Mark
I checked my .mdp file: nstvout and nstfout are both zero, so this
must be the problem.
Thank you for your suggestions!
Yours
Xin
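For reference, a hedged .mdp fragment (the intervals are examples only): tpbconv rebuilds the restart point from coordinates and velocities stored in the .trr, so nstxout and nstvout must be non-zero during the original run, while forces are not needed.

; trajectory output needed for a later restart with tpbconv
nstxout = 5000    ; write coordinates to the .trr
nstvout = 5000    ; write velocities to the .trr
nstfout = 0       ; forces are not required for restarting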
On 4/18/07, Mark Abraham <[EMAIL PROTECTED]> wrote:
liu xin wrote:
Hi GMX users
My 2 ns simulation crashed accidentally. gmxcheck said my previous run
had stopped at 1400 ps, so I used tpbconv to generate a new .tpr to
restart it from the previous end point:
"tpbconv -f -e -s -o new.tpr"
but I found the output message of tpbconv said:
"100 step
Hello GMX-users
When I wanted to analyze the hydrogen-bond interactions between the protein and the DPPC headgroups, I got the following error:
Program g_hbond, VERSION 3.3.1
Source code file: gmx_hbond.c, line: 631
Fatal error:
Your computational box has shrunk too much.
g_hbond can not handle this situation, s…
…[EMAIL PROTECTED]>
To: "Discussion list for GROMACS users" <gmx-users@gromacs.org>
Sent: Saturday, September 30, 2006 2:37 AM
Subject: Re: [gmx-users] HBond frequency
> liu xin wrote:
>> Thanks, but whenever I used ACDsee to visualize the generated hbmap.xpm,
>> I've got nothi…
ham" <[EMAIL PROTECTED]>To: "Discussion list for GROMACS users" <
gmx-users@gromacs.org>Sent: Friday, September 29, 2006 1:28 AMSubject: Re: [gmx-users] HBond frequency> liu xin wrote:>> Hello GMX users:>>>> Just a quick question: how can I get the frequency of
Hello GMX users:
Just a quick question: how can I get the frequency of each hydrogen bond?
From hbond.ndx we can get a list of hbonds, e.g. the hbonds between the protein and the drug. If I want to check the frequency of each individual hbond between the two components, how can I do it?
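A hedged sketch of one way to get per-bond occupancies with g_hbond (group names, in particular "Drug", are assumptions): -hbn writes the index of donor-hydrogen-acceptor triplets and -hbm writes an existence matrix with one row per hydrogen bond and one column per frame, so the frequency of bond i is the fraction of marked cells in row i of hbmap.xpm, matched against the corresponding entry in hbond.ndx.

printf "Protein\nDrug\n" | g_hbond -f traj.xtc -s topol.tpr \
    -num hbnum.xvg -hbn hbond.ndx -hbm hbmap.xpm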
Thanks Chris, I really appreciate your help!
I've built my extended membrane system following your note, and now I'm testing it with my GPCR. I am still not very familiar with the script; I think it will take some time to learn.
Thanks again!
On 9/13/06, [EMAIL PROTECTED] <[EMAIL PROTECTED]> wrote:
If I solvate my GPCR into the DPPC128 system using genbox, there are only about 50 lipids left, which I think is too few, so I want a larger starting structure.
Following your note, at step 3 I loaded my DPPC183 system into VMD and found that the edges lined up poorly; there are gaps between t…
Hi Chris
Thank you for your note!
Following your suggestion, I did an energy minimization and then an MD simulation with "freezegrps = SOL, freezedim = N N Y"; the rest of the system was simulated without constraints. This time I used semiisotropic pressure coupling with tau_p = 5. The system will be equilibrated…
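A hedged .mdp fragment matching the settings described above (the freeze group, freezedim and tau_p come from the text; the remaining values are assumed examples):

; freeze the water only along z during the constrained run
freezegrps       = SOL
freezedim        = N N Y
; semi-isotropic pressure coupling: xy and z coupled separately
pcoupl           = berendsen
pcoupltype       = semiisotropic
tau_p            = 5.0
ref_p            = 1.0 1.0
compressibility  = 4.5e-5 4.5e-5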
Hello Chris
So you mean to do a constrained MD simulation to let the lipids "fill the gaps"
between the box edges and the lipids, after that do an unconstrained MD simulation,
and then we'll get a fine structure, am I right?
Thank you very much, I'll try that.
Xin Liu
On 9/7/06, [EMAIL PROTECTED] <[EMAIL PROTECTED]> wrote:
Hi Chris
Thank you for your suggestions!
These days I have also tried anisotropic and semiisotropic pressure coupling for my simulation, but I still get the same result, so I suspect there is something wrong with my initial structure, though it will take time to check that.
I'm trying to use the method you suggested, I…
Dear GMX-users:
I ran into a problem when doing a simulation with a membrane. Here's what I've done so far: I downloaded the dppc128.pdb and dppc.top files from Dr. Tieleman's website and equilibrated this DPPC128 system for 10 ns. I checked the final structure with VMD and everything seemed OK. Then I extend…
Dear GROMACS Users
I'm doing a simulation of a protein-membrane-water system. I'm not very familiar with .mdp files for this kind of system, so could anybody be kind enough to send me two sample .mdp files, one for position restraints and one for molecular dynamics? The membrane type I have chosen is DPPC.
Thanks in advance!