Hi all,
I'm trying to run a code on 2 machines that each have at least 2 network
interfaces.
They are set up as described below:
         compute01              compute02
ens3     192.168.100.104/24     10.0.0.227/24
ens8     10.0.0.228/24          172.21.1.128/24
ens9     172.21.1.155/24
---
The issue is: when I execute `mpirun -n 2 -
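A sketch of the kind of invocation that usually fits a layout like this, assuming both nodes can reach each other over the 10.0.0.0/24 network and that the binary is called ./my_app (both are assumptions, not taken from the original command). Selecting the network by CIDR address instead of by interface name sidesteps the fact that the interface names differ between the two hosts:

# Hypothetical example: keep the MPI traffic (TCP BTL) and the runtime's
# out-of-band traffic on the subnet both nodes share, selected by address
# rather than by interface name.
mpirun -n 2 --host compute01,compute02 \
    --mca btl_tcp_if_include 10.0.0.0/24 \
    --mca oob_tcp_if_include 10.0.0.0/24 \
    ./my_app

If the two hosts have no subnet in common, routing between the different networks has to exist before Open MPI can connect the processes at all.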
Hi!
Thank you all for your replies, Jeff, Gilles and rhc.
Thank you Jeff and rhc for clarifying some of Open MPI's internals for me.
>> FWIW: we never send interface names to other hosts - just dot addresses
>
> Should have clarified - when you specify an interface name for the MCA
> param, then it is the interface name that is transferred as that is the
> value of the MCA param. However, once we determine our address, we only
> transfer dot addresses between ourselves.
Just realized my email wasn't sent to the archive.
On Sat, Jun 23, 2018 at 5:34 PM, carlos aguni wrote:
> Hi!
>
> Thank you all for your reply Jeff, Gilles and rhc.
>
> Thank you Jeff and rhc for clarifying to me some of the openmpi's
> internals.
>
>
> once we can analyze the logs, we should be able to figure out what is
> going wrong.
>
>
> Cheers,
>
> Gilles
>
> On 6/29/2018 4:10 AM, carlos aguni wrote:
>
>> Just realized my email wasn't sent to the archive.
>>
>> On Sat, Jun 23, 2018 at 5:34 PM, carlos aguni wrote:
Hi all.
I have an MPI application in which, at one point, one rank receives a slice of
an array from the other nodes.
The thing is that my application hangs there.
One thing I could get from printing out logs is:
(Rank 0) Starts MPI_Recv from source 4
But then it receives:
MPI_Send from 0
MPI_Send from
Not "MPI_Send from 0"..
MPI_Send from 1 to 0
MPI_Send from 7 to 0
And so on..
On Wed, Mar 27, 2019, 8:43 AM carlos aguni wrote:
> Hi all.
>
> I've an MPI application in which at one moment one rank receives a slice
> of an array from the other nodes.
> Thing is t
> Carlos,
>
> can you post a trimmed version of your code that evidences the issue ?
>
> Keep in mind that if you want to write MPI code that is correct with
> respect to the standard, you should assume MPI_Send() might block until a
> matching receive is posted.
>
>
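To make Gilles's point concrete, here is a minimal sketch, not the original program, of the pattern described above; the slice length and tag are made up. If rank 0 insists on receiving from one specific source first while the other ranks are all blocked in MPI_Send, nobody makes progress once the sends stop completing eagerly. Receiving with MPI_ANY_SOURCE (or posting all the receives with MPI_Irecv up front) removes that assumption about arrival order:

/* gather_slices.c - hypothetical reduction of the hang described above.
 * Rank 0 collects one slice from every other rank; taking the slices in
 * arrival order avoids depending on which sender happens to finish first. */
#include <mpi.h>
#include <stdio.h>
#include <stdlib.h>

#define SLICE 1024                       /* made-up slice length */

int main(int argc, char **argv)
{
    int rank, size;
    MPI_Init(&argc, &argv);
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);
    MPI_Comm_size(MPI_COMM_WORLD, &size);

    double *slice = malloc(SLICE * sizeof(double));
    for (int i = 0; i < SLICE; i++)
        slice[i] = rank;                 /* dummy data */

    if (rank == 0) {
        for (int i = 1; i < size; i++) {
            MPI_Status st;
            /* accept slices in whatever order they arrive */
            MPI_Recv(slice, SLICE, MPI_DOUBLE, MPI_ANY_SOURCE, 0,
                     MPI_COMM_WORLD, &st);
            printf("rank 0 got a slice from rank %d\n", st.MPI_SOURCE);
        }
    } else {
        /* MPI_Send may block until rank 0 posts a matching receive */
        MPI_Send(slice, SLICE, MPI_DOUBLE, 0, 0, MPI_COMM_WORLD);
    }

    free(slice);
    MPI_Finalize();
    return 0;
}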
Hi all,
I've got a code where I call MPI_Isend and later check the flag returned by
MPI_Test to see whether it has completed or not.
So the code is like:
MPI_Isend()
... some stuff..
flag = 0;
MPI_Test(req, &flag, &status);
if (flag){
free(buffer);
}
After the free() I'm getting errors
sing around 2GB which I'm guessing it isn't
freeing it.
Is there anything I could try?
Regards,
C.
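For reference, a minimal sketch of that pattern with the completion handled explicitly; the message size, peer and tag are made up and this is not the original code. free(buffer) is only safe once MPI_Test has reported flag == 1 (or MPI_Wait has returned); if the flag is still 0 the request has to be tested again later, because until it completes the library may still be using the buffer:

/* isend_free.c - hypothetical sketch: free the send buffer only after the
 * nonblocking send has completed.  Run with at least 2 ranks. */
#include <mpi.h>
#include <stdlib.h>

int main(int argc, char **argv)
{
    MPI_Init(&argc, &argv);
    int rank;
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);

    int *buffer = calloc(1024, sizeof(int));    /* made-up payload */

    if (rank == 0) {
        MPI_Request req;
        MPI_Status status;
        int flag = 0;

        MPI_Isend(buffer, 1024, MPI_INT, 1, 0, MPI_COMM_WORLD, &req);

        /* ... some stuff ... */

        MPI_Test(&req, &flag, &status);
        if (!flag) {
            /* not finished yet: either keep calling MPI_Test later on, or
             * block here - the buffer must not be freed before completion */
            MPI_Wait(&req, &status);
        }
        /* the request has completed, so freeing the buffer below is safe */
    } else if (rank == 1) {
        MPI_Recv(buffer, 1024, MPI_INT, 0, 0, MPI_COMM_WORLD,
                 MPI_STATUS_IGNORE);
    }

    free(buffer);
    MPI_Finalize();
    return 0;
}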
On Mon, Jul 22, 2019 at 10:59 AM Jeff Squyres (jsquyres)
wrote:
> > On Jul 21, 2019, at 11:31 AM, carlos aguni via users <
> users@lists.open-mpi.org> wrote:
> >
>
Hi all,
Sorry for not replying sooner.
I just figured out the solution.
The problem was that I had a function that would MPI_Isend a message on
every call to it. Then I'd store its request pointer in a list.
My MPI_Isend snippet:
MPI_Request req;
MPI_Isend(blabla, &req)
task_push(&req);
From time to time at
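For what it's worth, a minimal sketch of one way to keep that bookkeeping valid; the struct and the function names are made up, not the original code. MPI_Request is an opaque handle, so it can be stored by value in the list entry instead of keeping a pointer to a stack variable that dies when the function returns, and the buffer is freed only once MPI_Test reports completion:

/* pending.c - hypothetical bookkeeping for outstanding MPI_Isend requests */
#include <mpi.h>
#include <stdlib.h>

struct pending {                 /* one entry per outstanding Isend */
    MPI_Request req;             /* stored by value, not by address */
    void *buffer;                /* freed only when the send is done */
    struct pending *next;
};

static struct pending *pending_list = NULL;

/* issue a send and remember the request together with its buffer */
void send_async(void *buf, int count, int dest)
{
    struct pending *p = malloc(sizeof(*p));
    p->buffer = buf;
    MPI_Isend(buf, count, MPI_BYTE, dest, 0, MPI_COMM_WORLD, &p->req);
    p->next = pending_list;
    pending_list = p;
}

/* call this from time to time: reap whatever has completed */
void progress_pending(void)
{
    struct pending **pp = &pending_list;
    while (*pp) {
        int flag = 0;
        MPI_Test(&(*pp)->req, &flag, MPI_STATUS_IGNORE);
        if (flag) {
            struct pending *done = *pp;
            *pp = done->next;
            free(done->buffer);
            free(done);
        } else {
            pp = &(*pp)->next;
        }
    }
}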
Hi all,
I'm trying to spawn processes with MPI_Comm_spawn, with no success.
I'm facing the following error:
--------------------------------------------------------------------------
All nodes which are allocated for this job are already filled.
--------------------------------------------------------------------------
I'm setting the hostname as follows:
MPI_Info_set(minfo, "host", hostname);
I'm already running with `--o
ORTE_ERROR_LOG: Unreachable in file dpm/dpm.c at line 433
Other than this, the error is always:
--------------------------------------------------------------------------
All nodes which are allocated for this job are already filled.
--------------------------------------------------------------------------
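For context, a minimal sketch of the spawn pattern being described; the worker binary name, the target host and the process count are made up, not the original code. MPI_Comm_spawn with a "host" info key can only place children on a host that still has unused slots, so the initial mpirun has to leave room for them (for example by listing the host with extra slots, or by allowing oversubscription); when every allocated slot is already occupied, the runtime answers with the "already filled" message shown above:

/* spawn_sketch.c - hypothetical parent side of the MPI_Comm_spawn call.
 * The worker binary must itself call MPI_Init and MPI_Finalize. */
#include <mpi.h>

int main(int argc, char **argv)
{
    MPI_Init(&argc, &argv);

    MPI_Info minfo;
    MPI_Info_create(&minfo);
    /* target host for the children; "compute02" is just a placeholder */
    MPI_Info_set(minfo, "host", "compute02");

    MPI_Comm intercomm;
    int errcodes[2];
    /* spawn 2 copies of a made-up worker binary on that host */
    MPI_Comm_spawn("./worker", MPI_ARGV_NULL, 2, minfo,
                   0, MPI_COMM_SELF, &intercomm, errcodes);

    MPI_Info_free(&minfo);
    MPI_Comm_disconnect(&intercomm);
    MPI_Finalize();
    return 0;
}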
9, 2020 at 9:17 AM Martín via users
wrote:
> Hi Carlos, could you try ompi 4.0.1?
> Regards.
>
> Martín
>
> On 29 Apr 2020 02:20, carlos aguni via users wrote:
>
> Hi all,
>
> I'm trying to MPI_Spawn processes with no success.
> I'm facing th