Attached is the output of ompi_info --all.

Note that this message:

        Fort use mpi_f08: yes
 Fort mpi_f08 compliance: The mpi_f08 module is available, but due to
limitations in the gfortran compiler, does not support the following:
array subsections, direct passthru (where possible) to underlying Open
MPI's C functionality

is no longer correct: gfortran 6.0.0 now supports array subsections. I am
not sure about direct passthru.
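
For anyone who wants to verify this, here is a minimal sketch of the kind
of call the array-subsection support enables. The program is hypothetical
(my own, not from the attached output) and assumes an mpi_f08 build run
with at least two ranks:

program subsection_demo
  ! Sketch: pass a non-contiguous array subsection directly to MPI_Send /
  ! MPI_Recv. With older gfortran, mpi_f08 required copying such data into
  ! a contiguous buffer first.
  use mpi_f08
  implicit none
  integer :: rank, i
  real    :: a(10)
  type(MPI_Status) :: status

  call MPI_Init()
  call MPI_Comm_rank(MPI_COMM_WORLD, rank)
  if (rank == 0) then
     a = [(real(i), i = 1, 10)]
     ! strided subsection: every other element
     call MPI_Send(a(1:10:2), 5, MPI_REAL, 1, 0, MPI_COMM_WORLD)
  else if (rank == 1) then
     a = 0.0
     call MPI_Recv(a(1:10:2), 5, MPI_REAL, 0, 0, MPI_COMM_WORLD, status)
     print *, 'rank 1 received', a
  end if
  call MPI_Finalize()
end program subsection_demo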

Ron
---
Ron Cohen
recoh...@gmail.com
skypename: ronaldcohen
twitter: @recohen3


On Wed, Mar 23, 2016 at 7:54 AM, Ronald Cohen <recoh...@gmail.com> wrote:
> I get 100 GFLOPS for 16 cores on one node, but only 1 GFLOPS running 8
> cores on two nodes. It seems that 4X FDR InfiniBand should do better than
> this. I built openmpi-1.10.2g with gcc version 6.0.0 20160317. Any
> ideas of what to do to get usable performance? Thank you!
>
> bstatus
> Infiniband device 'mlx4_0' port 1 status:
>         default gid:     fe80:0000:0000:0000:0002:c903:00ec:9301
>         base lid:        0x1
>         sm lid:          0x1
>         state:           4: ACTIVE
>         phys state:      5: LinkUp
>         rate:            56 Gb/sec (4X FDR)
>         link_layer:      InfiniBand
>
> Ron
> --
>
> Professor Dr. Ronald Cohen
> Ludwig Maximilians Universität
> Theresienstrasse 41 Room 207
> Department für Geo- und Umweltwissenschaften
> München
> 80333
> Deutschland
>
>
> ronald.co...@min.uni-muenchen.de
> skype: ronaldcohen
> +49 (0) 89 74567980
> ---
> Ronald Cohen
> Geophysical Laboratory
> Carnegie Institution
> 5251 Broad Branch Rd., N.W.
> Washington, D.C. 20015
> rco...@carnegiescience.edu
> office: 202-478-8937
> skype: ronaldcohen
> https://twitter.com/recohen3
> https://www.linkedin.com/profile/view?id=163327727
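
Regarding the two-node performance question quoted above: it may help to
separate the interconnect from the application first. Below is a minimal
ping-pong bandwidth sketch of my own (not from the attached output); it
assumes an mpi_f08 build and exactly two ranks:

program pingpong
  ! Sketch: bounce a ~4 MB buffer between two ranks and report the
  ! effective point-to-point bandwidth.
  use mpi_f08
  implicit none
  integer, parameter :: n = 1000000     ! number of reals (~4 MB)
  integer, parameter :: iters = 100
  integer :: rank, i
  real :: buf(n)
  double precision :: t0, t1
  type(MPI_Status) :: status

  call MPI_Init()
  call MPI_Comm_rank(MPI_COMM_WORLD, rank)
  buf = 0.0
  call MPI_Barrier(MPI_COMM_WORLD)
  t0 = MPI_Wtime()
  do i = 1, iters
     if (rank == 0) then
        call MPI_Send(buf, n, MPI_REAL, 1, 0, MPI_COMM_WORLD)
        call MPI_Recv(buf, n, MPI_REAL, 1, 0, MPI_COMM_WORLD, status)
     else if (rank == 1) then
        call MPI_Recv(buf, n, MPI_REAL, 0, 0, MPI_COMM_WORLD, status)
        call MPI_Send(buf, n, MPI_REAL, 0, 0, MPI_COMM_WORLD)
     end if
  end do
  t1 = MPI_Wtime()
  ! 2 messages per iteration, 4 bytes per real
  if (rank == 0) print *, 'bandwidth (MB/s):', &
       2.0d0 * iters * n * 4.0d0 / (t1 - t0) / 1.0d6
  call MPI_Finalize()
end program pingpong

Run it with one rank on each node, e.g. mpirun -np 2 --map-by node
./pingpong inside the two-node allocation; on 4X FDR it should report
several GB/s. If it does not, forcing the InfiniBand BTL with
mpirun --mca btl openib,self,sm ... makes Open MPI complain rather than
silently fall back to TCP.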

Attachment: ompi_info.out
