Thanks, Dave.
I have verified the memory locality and IB card locality; both are fine.
Quite by accident, I have found that there is a huge penalty if I mmap
the shm with PROT_READ only. Using PROT_READ | PROT_WRITE yields good
results, although I must look at this further. I'll report when I am done.
Dear Jeff, dear all,
the code is very long, so here is an excerpt. I hope it helps.
What do you think?
SUBROUTINE MATOPQN
USE VARS_COMMON,ONLY:COMM_CART,send_messageR,recv_messageL,nMsg
USE MPI
INTEGER :: send_request(nMsg), recv_request(nMsg)
INTEGER :: send_status_list(MPI_STATUS_SIZE,nMsg)
Dear Jeff, dear all,
I have noticed that if I initialize the variables, I do not get the error
anymore:
!
ALLOCATE(SEND_REQUEST(nMsg),RECV_REQUEST(nMsg))
SEND_REQUEST=0
RECV_REQUEST=0
!
Could you please explain why?
Thanks
Diego
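[A hedged aside: in Open MPI's Fortran bindings MPI_REQUEST_NULL happens to be 0, which is why assigning 0 works there; other MPI implementations use different handle values, so a portable form of this initialization would be:

! Portable initialization: use the named constant, not its Open MPI value
ALLOCATE(SEND_REQUEST(nMsg),RECV_REQUEST(nMsg))
SEND_REQUEST = MPI_REQUEST_NULL
RECV_REQUEST = MPI_REQUEST_NULL
]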
On 29 September 2015 at 16:08, Diego Avesani wrote:
This code does not appear to compile -- there's no main program, for example.
Can you make a small, self-contained example program that shows the problem?
> On Sep 29, 2015, at 10:08 AM, Diego Avesani wrote:
>
> Dear Jeff, dear all,
> the code is very long, so here is an excerpt. I hope it helps.
OK, let me try.
Diego
On 29 September 2015 at 16:23, Jeff Squyres (jsquyres) wrote:
> This code does not appear to compile -- there's no main program, for
> example.
>
> Can you make a small, self-contained example program that shows the
> problem?
>
>
> > On Sep 29, 2015, at 10:08 AM, Diego Avesani wrote:
Diego,
If you invoke MPI_Waitall on three requests and some of them have not been
initialized (manually, or via MPI_Isend or MPI_Irecv), then the behavior of
your program is undefined.
If you want to use an array of requests (because it makes the program
simpler) but you know that not all of them are active, you can initialize
them to MPI_REQUEST_NULL; MPI_Waitall simply ignores null requests.
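[A minimal, self-contained sketch of that pattern (a hypothetical example, not Diego's actual code): every slot starts as MPI_REQUEST_NULL, only some slots receive real requests, and MPI_Waitall is still well defined over the whole array.]

PROGRAM request_null_example
  USE MPI
  IMPLICIT NONE
  INTEGER, PARAMETER :: nMsg = 3
  INTEGER :: requests(nMsg), statuses(MPI_STATUS_SIZE,nMsg)
  INTEGER :: rank, nprocs, buf, ierr

  CALL MPI_INIT(ierr)
  CALL MPI_COMM_RANK(MPI_COMM_WORLD, rank, ierr)
  CALL MPI_COMM_SIZE(MPI_COMM_WORLD, nprocs, ierr)

  ! Every slot is a valid (null) request, even if nothing is posted for it.
  requests = MPI_REQUEST_NULL

  ! Post one send/recv pair between ranks 0 and 1; other slots stay null.
  IF (rank == 0 .AND. nprocs > 1) THEN
     buf = 42
     CALL MPI_ISEND(buf, 1, MPI_INTEGER, 1, 0, MPI_COMM_WORLD, requests(1), ierr)
  ELSE IF (rank == 1) THEN
     CALL MPI_IRECV(buf, 1, MPI_INTEGER, 0, 0, MPI_COMM_WORLD, requests(1), ierr)
  END IF

  ! Well defined: MPI_Waitall skips the MPI_REQUEST_NULL entries.
  CALL MPI_WAITALL(nMsg, requests, statuses, ierr)

  CALL MPI_FINALIZE(ierr)
END PROGRAM request_null_example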
We register the memory with the NIC for both read and write access. This
may be the source of the slowdown. We recently added internal support to
allow the point-to-point layer to specify the access flags, but the
openib btl does not yet make use of the new support. I plan to make the
necessary changes.
I've just compared IB p2p latency between versions 1.6.5 and 1.8.8. I'm
surprised to find that 1.8 is rather worse, as below. Assuming that's
not expected, are there any suggestions for debugging it?
This is with FDR Mellanox, between two Sandybridge nodes on the same
blade chassis switch.
[Meanwhile, much later, as I thought I'd sent this...]
Ralph Castain writes:
> Hi Zhang
>
> We have seen little interest in binary level CR over the years, which
> is the primary reason the support has lapsed.
That might be a bit chicken and egg!
> The approach just doesn’t scale very well.
Unfortunately, there is no one-size-fits-all here.
mxm provides the best performance for IB.
Different applications may require different OMPI, mxm, and OS tunables,
and some performance engineering.
On Mon, Sep 28, 2015 at 9:49 PM, Grigory Shamov wrote:
> Hi Nathan,
> Hi Mike,
>
> Thanks for the replies.
What is your command line and setup (OFED version, distro)?
This is what was just measured with FDR on Haswell with v1.8.8 and mxm over UD:
+ mpirun -np 2 -bind-to core -display-map -mca rmaps_base_mapping_policy
dist:span -x MXM_RDMA_PORTS=mlx5_3:1 -mca rmaps_dist_device mlx5_3:1 -x
MXM_TLS=self,shm,ud
I've now run a few more tests, and I can say with reasonable confidence
that the read-only mmap is a problem. Let me know if you have a
possible fix; I will gladly test it.
Marcin
On 09/29/2015 04:59 PM, Nathan Hjelm wrote:
We register the memory with the NIC for both read and write access.
I have a branch with the changes available at:
https://github.com/hjelmn/ompi.git
in the mpool_update branch. If you prefer, you can apply this patch to
either a 2.x or a master tarball.
https://github.com/hjelmn/ompi/commit/8839dbfae85ba8f443b2857f9bbefdc36c4ebc1a.patch
Let me know if this resolves the issue.
There was a bug in that patch that affected IB systems. Updated patch:
https://github.com/hjelmn/ompi/commit/c53df23c0bcf8d1c531e04d22b96c8c19f9b3fd1.patch
-Nathan
On Tue, Sep 29, 2015 at 03:35:21PM -0600, Nathan Hjelm wrote:
>
> I have a branch with the changes available at:
>
> https://github.com/hjelmn/ompi.git