Hi,

Although the question is a bit fuzzy, I might be able to give you a
useful answer.

From what I see in the whitepaper for the PowerEdge M710 blades, among
other (not so interesting :) OSes, Dell offers Red Hat or SUSE Linux
as factory-installed operating systems. If you have either of these,
you can rest assured that GROMACS will run just fine -- on a single
node.

Parallel runs are a bit of a different story and depend on the
interconnect. If you have InfiniBand, then you'll get very good
scaling over multiple nodes. This is especially true if the I/O cards
are Mellanox QDR ones.

Cheers,
--
Szilárd


On Tue, Jan 18, 2011 at 4:48 PM, Maryam Hamzehee
<maryam_h_7...@yahoo.com> wrote:
>
> Dear list,
>
> I would appreciate your expert opinion on doing parallel computation 
> (I will use the GROMACS and AMBER molecular mechanics packages, and 
> some other programs such as CYANA, ARIA and CNS, to do structure 
> calculations based on NMR experimental data) on a cluster based on 
> Dell PowerEdge M710 blades with Intel Xeon 5667 processors, where 
> apparently each blade has two quad-core CPUs. I was wondering if I 
> could get some information about Linux compatibility and parallel 
> computation on this system.
> Cheers,
> Maryam
>
> --
> gmx-users mailing list    gmx-users@gromacs.org
> http://lists.gromacs.org/mailman/listinfo/gmx-users
> Please search the archive at 
> http://www.gromacs.org/Support/Mailing_Lists/Search before posting!
> Please don't post (un)subscribe requests to the list. Use the
> www interface or send it to gmx-users-requ...@gromacs.org.
> Can't post? Read http://www.gromacs.org/Support/Mailing_Lists