I went and built Intel MPI Benchmarks 2018 as well as OMB (OSU Micro-Benchmarks)
5.4.1 with the Intel 18 compiler suite and Open MPI 2.1.3; I built the same set
purely with the MPI that comes with Intel 18.
What I found is that both benchmark suites hang at a scale of at least 640
ranks, and in particular with t
On 13.04.2018 at 15:41, Nathan Hjelm wrote:
> Err. MPI_Comm_remote_size.
Ah, thanks! I thought that MPI_Comm_size returns the number of remote ranks.
MPI_Comm_remote_size returns 1, so now at least the error message is
consistent.
Best,
Florian
Err. MPI_Comm_remote_size.
> On Apr 13, 2018, at 7:41 AM, Nathan Hjelm wrote:
>
> Try using MPI_Comm_remotr_size. As this is an intercommunicator that will
> give the number of ranks for send/recv.
Try using MPI_Comm_remotr_size. As this is an intercommunicator that will give
the number of ranks for send/recv.
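For reference, here is a minimal, self-contained sketch (not taken from this
thread) of how the two calls differ on an intercommunicator built with
MPI_Intercomm_create; the split into two halves and the tag value 42 are just
choices for this example. Run it with at least two ranks.

#include <mpi.h>
#include <cstdio>

int main(int argc, char **argv)
{
  MPI_Init(&argc, &argv);

  int worldRank = 0, worldSize = 0;
  MPI_Comm_rank(MPI_COMM_WORLD, &worldRank);
  MPI_Comm_size(MPI_COMM_WORLD, &worldSize);

  // Split MPI_COMM_WORLD into two halves and connect them with an
  // intercommunicator (the local leader is rank 0 of each half).
  int color = worldRank < worldSize / 2 ? 0 : 1;
  MPI_Comm localComm, interComm;
  MPI_Comm_split(MPI_COMM_WORLD, color, worldRank, &localComm);
  int remoteLeader = (color == 0) ? worldSize / 2 : 0;
  MPI_Intercomm_create(localComm, 0, MPI_COMM_WORLD, remoteLeader, 42,
                       &interComm);

  // On an intercommunicator, MPI_Comm_size reports the *local* group,
  // while MPI_Comm_remote_size reports the group you send to / recv from.
  int localSize = 0, remoteSize = 0;
  MPI_Comm_size(interComm, &localSize);
  MPI_Comm_remote_size(interComm, &remoteSize);
  std::printf("world rank %d: local group %d, remote group %d\n",
              worldRank, localSize, remoteSize);

  MPI_Comm_free(&interComm);
  MPI_Comm_free(&localComm);
  MPI_Finalize();
  return 0;
}

With four ranks, each side prints a local group of 2 and a remote group of 2;
a receiver rank should be checked against the remote group, not the local one.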
Hello,
I have this piece of code
PtrRequest MPICommunication::aSend(double *itemsToSend, int size,
                                   int rankReceiver)
{
  rankReceiver = rankReceiver - _rankOffset;
  int comsize = -1;
  MPI_Comm_size(communicator(rankReceiver), &comsize);
  TRACE(size, rank(rankReceiver), comsize);
  MPI_Reque
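The snippet above is cut off in the digest. Purely as an illustration of the
suggestion to use MPI_Comm_remote_size, here is a hedged, self-contained
sketch of a non-blocking send over an intercommunicator that validates the
receiver rank against the remote group. The helper name, the tag value, and
the plain MPI_Request return type are assumptions for this sketch, not the
thread's actual aSend().

#include <mpi.h>
#include <cassert>

// Hypothetical helper (not the aSend() from the thread): posts a
// non-blocking send of `size` doubles to rank `rankReceiver` in the
// remote group of the intercommunicator `comm`.
MPI_Request asyncSendDoubles(double *itemsToSend, int size,
                             int rankReceiver, MPI_Comm comm)
{
  // MPI_Comm_size(comm, ...) would only give the local group size here;
  // the receiver rank has to be checked against the remote group.
  int remoteSize = -1;
  MPI_Comm_remote_size(comm, &remoteSize);
  assert(rankReceiver >= 0 && rankReceiver < remoteSize);

  MPI_Request request;
  MPI_Isend(itemsToSend, size, MPI_DOUBLE, rankReceiver, /*tag=*/0,
            comm, &request);
  return request;
}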
Hi Gilles,
Thank you for your reply. I am using Open MPI 3.0.0.
I tried the command you recommended, and it works now. Perhaps this bug is
still there?
On Thu, Apr 12, 2018 at 10:06 PM, Gilles Gouaillardet <
gilles.gouaillar...@gmail.com> wrote:
> Which version of Open MPI are you running?
>