Hi Martin,
I am using InfiniBand here, with a speed of more than 10 Gbps. Can you
suggest some options to scale better in this case?

With Thanks,
Vivek

2008/11/11 Martin Höfling <[EMAIL PROTECTED]>

> On Tuesday, 11 November 2008 at 12:06:06, vivek sharma wrote:
>
>
> > I have also tried scaling GROMACS across a number of nodes, but was not
> > able to optimize it beyond 20 processors, on 20 nodes, i.e. 1 processor per
>
> As mentioned before, performance strongly depends on the type of
> interconnect you're using between your processes: shared memory, Ethernet,
> InfiniBand, NUMAlink, and so on.
>
> I assume you're using Ethernet (100/1000 Mbit?); you can tune here to some
> extent, as described in:
>
> Kutzner, C.; van der Spoel, D.; Fechner, M.; Lindahl, E.; Schmitt, U. W.;
> de Groot, B. L. & Grubmüller, H. Speeding up parallel GROMACS on
> high-latency networks. Journal of Computational Chemistry, 2007.
>
> ...but be aware that the principal limitations of Ethernet remain. To get
> around these, you might consider investing in the interconnect. If you can
> get by with <16 cores, shared-memory nodes will give you the "biggest bang
> for the buck".
>
> Best
>           Martin
> _______________________________________________
> gmx-users mailing list    gmx-users@gromacs.org
> http://www.gromacs.org/mailman/listinfo/gmx-users
> Please search the archive at http://www.gromacs.org/search before posting!
> Please don't post (un)subscribe requests to the list. Use the
> www interface or send it to [EMAIL PROTECTED]
> Can't post? Read http://www.gromacs.org/mailing_lists/users.php
>
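[Editor's note: one practical knob discussed in the Kutzner et al. paper is splitting MPI ranks between short-range (PP) and long-range (PME) work, which reduces the latency-sensitive all-to-all traffic. A hedged command-line sketch, assuming a GROMACS 4.x MPI-enabled mdrun and a prepared topol.tpr; the rank counts are placeholders to be tuned for your cluster:]

```shell
# Run on 20 MPI ranks, dedicating 5 of them to PME.
# (-npme is the GROMACS 4.x mdrun option for separate PME ranks;
#  20 and 5 here are illustrative, not recommended values.)
mpirun -np 20 mdrun_mpi -s topol.tpr -npme 5

# Try a few -npme values and keep the fastest run; the end of
# mdrun's log file reports the PP/PME load balance achieved.
```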