Thanks for your comments (and sorry for the late answer).
Actually, I was not primarily thinking about using OpenMosix to run
simulations (that would be stupid considering the size of our cluster).
But it could be valuable to have a tool like OpenMosix when you quickly
want to set up a system (i.e. run several minimizations) and don't want
to be bothered by file transfers and the queuing system. If you say that
Gromacs can run successfully with OpenMosix, that's enough for me. I
just didn't want to spend hours setting up the software only to realize
afterwards that it can't run.
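For what it's worth, here is a minimal sketch of the kind of thing I
mean, in Python (the em*.tpr input files are hypothetical names, and it
assumes mdrun is on the PATH): each minimization is started as an
ordinary local process, and openMosix is left to migrate them around
the farm.

    # A minimal sketch (hypothetical file names): run several independent
    # minimizations as plain local processes and let openMosix balance them.
    import subprocess

    jobs = []
    for i in range(1, 5):
        # One single-CPU mdrun per system; no MPI, no queuing system.
        p = subprocess.Popen(["mdrun", "-s", "em%d.tpr" % i,
                              "-deffnm", "em%d" % i])
        jobs.append(p)

    # Wait for all minimizations to finish.
    for p in jobs:
        p.wait()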
thanks
Nico
Yang Ye wrote:
OpenMosix indeed provides a second model for clustering besides the
Beowulf model, which uses MPI- or PVM-based communication. You may run
thousands of processes and get good scaling.
I tried it with an ordinary network (100M Ethernet) and two processes,
and got quite bad efficiency: a 1.1-1.3x speed-up. To get high
performance out of OpenMosix, good networking is necessary. IMHO,
OpenMosix will not help in the context of Gromacs or any MPI-based
parallel program, because it adds another level of control over
inter-process communication that is more directly managed by the MPI
library itself. Besides, without OpenMosix, processes simply stay fixed
on their assigned nodes, with no migration decisions to worry about.
Only if you don't have a dedicated cluster for Gromacs might it be
worth considering.
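To make the efficiency figure explicit: parallel efficiency is the
speed-up divided by the process count, so 1.1-1.3x on two processes is
only 55-65%. A few lines of Python to check the arithmetic:

    # Parallel efficiency = speed-up / number of processes.
    for speedup in (1.1, 1.3):
        print("%.1fx on 2 processes -> %.0f%% efficiency"
              % (speedup, 100 * speedup / 2))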
OpenMosix is a better fit for:
1) building a computing farm with minimal client installation;
2) a dynamic environment: the computing farm can withstand nodes being
added and withdrawn, and nodes can be loaded with other jobs;
3) combining coLinux and OpenMosix to run in the background on
Windows-based computers;
4) a huge number of processes, each running long enough for OpenMosix
to migrate it;
5) next-generation software to come out...
Regards,
Yang Ye
On 6/30/2007 7:17 PM, David van der Spoel wrote:
Nicolas Sapay wrote:
Actually, I used OpenMosix some time ago, in another life :)
All I remember is that NAMD performed reasonably well as long as you
didn't ask it to write coordinates every frame. It was handy for
minimizing/equilibrating a system or doing small computations like
pressure profiles. As far as I remember, I was able to compute
something like 0.8 ns/day on 10 processors for a system of 30~20000
atoms.
There is nothing stopping you from trying. For it to be efficient you
probably need many more processes than processors, and IIRC that is
how NAMD works. It will definitely not be more efficient than the
Gromacs development code with MPI, though.
Nico
Sabuj Pattanayek wrote:
My $0.02: based on one of Moshe Bar's OpenMosix papers, Mosix increases
the performance of MPI processes by load balancing. However, I would
have to say that this only works for compute-bound processes. If there
is going to be a lot of I/O between nodes, Mosix won't do so well and
may very well degrade performance.
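As a toy illustration of that distinction (hypothetical workloads,
nothing Mosix- or Gromacs-specific): the first function below is pure
computation and is the kind of process that migrates profitably; the
second writes to disk on every step, and once migrated every write has
to travel back over the network to the home node.

    import math

    def compute_bound(n):
        # Pure CPU work: a good candidate for migration to an idle node.
        total = 0.0
        for i in range(1, n):
            total += math.sqrt(i)
        return total

    def io_bound(n, path="out.dat"):
        # Writes on every step: after migration, each write is forwarded
        # to the home node over the network, which can erase any gain.
        with open(path, "w") as f:
            for i in range(n):
                f.write("%d\n" % i)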
Did you run any benchmarks of NAMD + its parallel (Charm++)
implementation vs NAMD + Charm++ + Mosix? Does it help? Last I checked,
the Mosix userland tools and some other key functionality still didn't
work with 2.6 Linux kernels.
Nicolas Sapay wrote:
Hi everybody,
I wonder if someone has experience with Gromacs and OpenMosix. Has
anyone recently tried to run Gromacs in parallel on an OpenMosix
cluster? I have read some messages about this on the mailing list, but
they were posted several years ago... Things may have changed since.
Additionally, I have already used NAMD successfully with OpenMosix
(but NAMD uses its own tools for parallel processing).
Thanks
Nico
--
[ Nicolas SAPAY, Ph.D. ]
University of Calgary, Dept. of Biological Sciences
2500 University drive NW, Calgary AB, T2N 1N4, Canada
Tel: (403) 220-6869
Fax: (403) 289-9311
_______________________________________________
gmx-users mailing list gmx-users@gromacs.org
http://www.gromacs.org/mailman/listinfo/gmx-users
Please search the archive at http://www.gromacs.org/search before posting!
Please don't post (un)subscribe requests to the list. Use the www
interface or send it to [EMAIL PROTECTED]
Can't post? Read http://www.gromacs.org/mailing_lists/users.php