I have tried:

mpirun --mca btl openib,self -hostfile $PBS_NODEFILE -n 16 xhpl > xhpl.out

and

mpirun -hostfile $PBS_NODEFILE -n 16 xhpl > xhpl.out
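
I was also wondering whether I should pin the HCA explicitly, along
these lines (mlx4_0:1 is just my guess from the bstatus output quoted
below):

mpirun --mca btl openib,self --mca btl_openib_if_include mlx4_0:1 -hostfile $PBS_NODEFILE -n 16 xhpl > xhpl.out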

How do I run "sanity checks, like OSU latency and bandwidth benchmarks
between the nodes"? I am not a superuser; what I had in mind is in the
P.S. below. Thanks,

Ron
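
P.S. This is roughly what I had in mind for the OSU benchmarks; the
version number and install paths below are guesses on my part, and
nothing needs root since it all goes under $HOME:

wget http://mvapich.cse.ohio-state.edu/download/mvapich/osu-micro-benchmarks-5.3.tar.gz
tar xzf osu-micro-benchmarks-5.3.tar.gz
cd osu-micro-benchmarks-5.3
./configure CC=mpicc CXX=mpicxx --prefix=$HOME/osu
make && make install

# one rank on each of two nodes
mpirun -npernode 1 -n 2 -hostfile $PBS_NODEFILE \
    $HOME/osu/libexec/osu-micro-benchmarks/mpi/pt2pt/osu_latency
mpirun -npernode 1 -n 2 -hostfile $PBS_NODEFILE \
    $HOME/osu/libexec/osu-micro-benchmarks/mpi/pt2pt/osu_bw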

---
Ron Cohen
recoh...@gmail.com
skypename: ronaldcohen
twitter: @recohen3


On Wed, Mar 23, 2016 at 9:28 AM, Joshua Ladd <jladd.m...@gmail.com> wrote:
> Hi, Ron
>
> Please include the command line you used in your tests. Have you run any
> sanity checks, like OSU latency and bandwidth benchmarks between the nodes?
>
> Josh
>
> On Wed, Mar 23, 2016 at 8:47 AM, Ronald Cohen <recoh...@gmail.com> wrote:
>>
>> Thank you! Here are the answers:
>>
>> I did not try a previous release of gcc.
>> I built from a tarball.
>> What should I do about the issue you mentioned (the gcc optimization
>> that breaks the memory wrapper)? How should I check whether I am
>> affected? Are there any flags I should be using for InfiniBand? Is
>> this a latency problem?
>>
>> Ron
>>
>>
>> ---
>> Ron Cohen
>> recoh...@gmail.com
>> skypename: ronaldcohen
>> twitter: @recohen3
>>
>>
>> On Wed, Mar 23, 2016 at 8:13 AM, Gilles Gouaillardet
>> <gilles.gouaillar...@gmail.com> wrote:
>> > Ronald,
>> >
>> > did you try to build openmpi with a previous gcc release ?
>> > if yes, what about the performance ?
>> >
>> > did you build openmpi from a tarball or from git ?
>> > if from git and without VPATH, then you need to
>> > configure with --disable-debug
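>> > for instance, something like (the prefix is just an example):
>> >
>> >   ./configure --disable-debug --prefix=$HOME/local/openmpi-1.10.2
>> >   make -j 8 && make install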
>> >
>> > iirc, one issue was identified previously (a gcc optimization that
>> > prevents the memory wrapper from behaving as expected), and I am
>> > not sure the fix has landed in the v1.10 branch or in master ...
>> >
>> > thanks for the info about gcc 6.0.0. now that this is supported by
>> > a free compiler (Cray and Intel already support it, but those are
>> > commercial compilers), I will resume my work on supporting it.
>> >
>> > Cheers,
>> >
>> > Gilles
>> >
>> > On Wednesday, March 23, 2016, Ronald Cohen <recoh...@gmail.com> wrote:
>> >>
>> >> I get 100 GFLOPS for 16 cores on one node, but 1 GFLOPS running 8
>> >> cores on two nodes. It seems that 4X FDR InfiniBand should do
>> >> better than this. I built openmpi-1.10.2g with gcc version 6.0.0
>> >> 20160317. Any ideas of what to do to get usable performance?
>> >> Thank you!
>> >>
>> >> bstatus
>> >> Infiniband device 'mlx4_0' port 1 status:
>> >>         default gid:     fe80:0000:0000:0000:0002:c903:00ec:9301
>> >>         base lid:        0x1
>> >>         sm lid:          0x1
>> >>         state:           4: ACTIVE
>> >>         phys state:      5: LinkUp
>> >>         rate:            56 Gb/sec (4X FDR)
>> >>         link_layer:      InfiniBand
>> >>
>> >> Ron
>> >> --
>> >>
>> >> Professor Dr. Ronald Cohen
>> >> Ludwig Maximilians Universität
>> >> Theresienstrasse 41 Room 207
>> >> Department für Geo- und Umweltwissenschaften
>> >> München
>> >> 80333
>> >> Deutschland
>> >>
>> >>
>> >> ronald.co...@min.uni-muenchen.de
>> >> skype: ronaldcohen
>> >> +49 (0) 89 74567980
>> >> ---
>> >> Ronald Cohen
>> >> Geophysical Laboratory
>> >> Carnegie Institution
>> >> 5251 Broad Branch Rd., N.W.
>> >> Washington, D.C. 20015
>> >> rco...@carnegiescience.edu
>> >> office: 202-478-8937
>> >> skype: ronaldcohen
>> >> https://twitter.com/recohen3
>> >> https://www.linkedin.com/profile/view?id=163327727
>> >>
>> >
>> >
>
