Thank you, Mark, for your valuable suggestion.
Regards,
Bharati
On Wed, May 4, 2011 at 10:18 AM, Mark Abraham wrote:
>
>
> On 04/05/11, Bharati Singh wrote:
>
> Hi Mark,
>
> Sorry for the inconvenience.
> As you said, it is some kind of (dynamic) linking problem. Is it possible
> to resolve it?
>
>
Hi lina,
As per what you have given, is it necessary to set LD_LIBRARY_PATH? I had
loaded the module for MPI and set LDFLAGS and CPPFLAGS at the time of
installation. My cluster has LSF; I use the following command to submit the job:
$ bsub -q -n mpirun -srun
/sfs3/home/bharati/gromacs/bin/mdrun -pd -v -s Ag6A12_equi
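For reference, a hedged sketch of what such a submission might look like, with
the queue name and core count filled in as placeholders and LD_LIBRARY_PATH
exported so that the sfftw library from the configure line quoted elsewhere in
this thread can be found at run time:

$ export LD_LIBRARY_PATH=/sfs1/lib/sfftw-2.1.5/lib:$LD_LIBRARY_PATH
$ bsub -q <queue_name> -n 4 \
      mpirun /sfs3/home/bharati/gromacs/bin/mdrun -pd -v -s Ag6A12_equi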
On 04/05/11, Bharati Singh wrote:
> Hi Mark,
>
> Sorry for the inconvenience.
> As you said, it is some kind of (dynamic) linking problem. Is it possible to
> resolve it?
>
Yes, but you will need someone who can troubleshoot what has and has not worked
on your system. We do not have any relevant
Hi Mark,
Sorry for the inconvenience.
As you said, it is some kind of (dynamic) linking problem. Is it possible to
resolve it?
Thanks & Regards,
Bharati
On Tue, May 3, 2011 at 6:59 PM, Mark Abraham wrote:
> On 3/05/2011 8:19 PM, Bharati Singh wrote:
>
> The mentioned method works in another user's home directory on the same
> machine, so I don't think the problem is with linking.
On 3/05/2011 8:19 PM, Bharati Singh wrote:
The mentioned method works in another user's home directory on the same
machine, so I don't think the problem is with linking.
Please be specific... I can think of three different things you might
mean by the "mentioned method", and when you've not troubl
Hi,
I don't know the specific cluster you mentioned,
but when I met such problems, I used to check the .log file, which usually
gave me very helpful information.
I also used to use a script to submit the job;
I post one as an example below. Please set the proper values according to your cluster.
more openmpi_paral
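The script itself was cut off above. What follows is only a hedged sketch of
the kind of LSF submission script lina describes, not her actual file; the
queue name, core count, and output file names are placeholders, while the
module and library paths echo those mentioned elsewhere in the thread.

#!/bin/bash
#BSUB -q <queue_name>            # placeholder queue
#BSUB -n 8                       # number of MPI ranks requested
#BSUB -o gromacs.%J.out          # capture standard output
#BSUB -e gromacs.%J.err          # capture standard error

# Load the same MPI environment that was used when GROMACS was built
module load intel_all/impi/default
# Make sure the shared sfftw library is found at run time
export LD_LIBRARY_PATH=/sfs1/lib/sfftw-2.1.5/lib:$LD_LIBRARY_PATH

mpirun /sfs3/home/bharati/gromacs/bin/mdrun -pd -v -s Ag6A12_equi

A script like this would be submitted with "bsub < scriptname".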
The mentioned method works in another user's home directory on the same
machine, so I don't think the problem is with linking.
On Tue, May 3, 2011 at 3:34 PM, Mark Abraham wrote:
> On 3/05/2011 7:37 PM, Bharati Singh wrote:
>
> Hi Team,
>
> Thanks for your reply.
>
> I had tried the following method to install gromacs-4.0.7:
On 3/05/2011 7:37 PM, Bharati Singh wrote:
Hi Team,
Thanks for your reply.
I had tried the following method to install gromacs-4.0.7:
$ module load intel_all/impi/default
$ ./configure --enable-mpi --with-fft=fftw2
LDFLAGS="-L/sfs1/lib/sfftw-2.1.5/lib/"
CPPFLAGS="-I/sfs1/lib/sfftw-2.1.5/includ
Hi Team,
Thanks for your reply.
I had tried the following method to install gromacs-4.0.7:
$ module load intel_all/impi/default
$ ./configure --enable-mpi --with-fft=fftw2
LDFLAGS="-L/sfs1/lib/sfftw-2.1.5/lib/"
CPPFLAGS="-I/sfs1/lib/sfftw-2.1.5/include" LIBS="-lsfftw" F77=mpif77
--prefix=/sfs3/hom
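After configuring, the usual remaining steps would look roughly like the sketch
below; the install prefix is written as a placeholder because the real one is
cut off above, and exporting LD_LIBRARY_PATH is only a guess at how the
dynamic-linking problem discussed in this thread might be worked around:

$ make && make install          # build and install under the chosen --prefix
$ export LD_LIBRARY_PATH=/sfs1/lib/sfftw-2.1.5/lib:$LD_LIBRARY_PATH
$ ldd <prefix>/bin/mdrun        # check that libsfftw and the MPI libraries resolve

ldd reports "not found" for any shared library the dynamic linker cannot
locate, which is one quick way to confirm the kind of linking problem Mark
mentions.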
Hi,
Have you checked the error log? Please post the error log.
Have you configured GROMACS using --enable-mpi?
Thanks,
Saikat
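As a hedged illustration of how the error output can usually be collected
under LSF (the job ID, queue name, and file names below are placeholders):

$ bjobs -a                      # list recent jobs and their exit states
$ bpeek <jobid>                 # peek at the output of a job that is still running
$ bsub -q <queue_name> -n 4 -o gmx.%J.out -e gmx.%J.err \
      mpirun /sfs3/home/bharati/gromacs/bin/mdrun -v -s Ag6A12_equi
$ less md.log                   # mdrun's own log often records why the run stopped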
On Tue, May 3, 2011 at 2:35 PM, Bharati Singh wrote:
> Hi Team,
>
> I have LSF on my cluster. I have installed gromacs-4.0.7 on Sampige. It is
> working fine as a serial run (on 1 processor)
Hi Team,
I have LSF on my cluster. I have installed gromacs-4.0.7 on Sampige. It is
working fine as a serial run (on 1 processor), but when I submit the job on
more than one processor in the queue, it gets terminated immediately. Can you
suggest something about this, please?
Thanks & Regards,
--
Bharati Singh