Hi Marcin
Looking again at this: could you get a similar reservation and rerun
mpirun with “-display-allocation” added to the command line? I’d like to see
whether we are correctly parsing the number of slots assigned in the allocation.
Ralph
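
For reference, a hedged example of the requested invocation; the process count
and executable name below are placeholders, not from the original thread:

  mpirun -np 16 -display-allocation ./my_app

The flag just makes mpirun print the allocation (nodes and slots) it parsed
from the resource manager before launching; the rest of the command line is
unchanged.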
> On Oct 6, 2015, at 11:52 AM, marcin.krotkiewski
If you get a chance, you might test this patch:
https://github.com/open-mpi/ompi-release/pull/656
I think it will resolve the problem you mentioned, and is small enough to go
into 1.10.1
Ralph
> On Oct 8, 2015, at 12:36 PM, marcin.krotkiewski
> wrote:
>
> Sorry, I think I confused one thing:
Sorry, I think I confused one thing:
On 10/08/2015 09:15 PM, marcin.krotkiewski wrote:
For version 1.10.1rc1 and up the situation is a bit different: it
seems that in many cases all cores are present in the cpuset, just
that the binding often does not take place. Instead,
process
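
A hedged way to check this on a Linux node (the process count here is a
placeholder): have each launched process print the CPU set it actually
inherited,

  mpirun -np 8 grep Cpus_allowed_list /proc/self/status

and compare that with what mpirun reports when --report-bindings is added. If
the allowed list covers all cores but --report-bindings shows no binding, the
cpuset is fine and only the binding step is being skipped.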
I agree that makes sense. I’ve been somewhat limited in my ability to work on
this lately, and I think Gilles has been in a similar situation. I’ll try to
create a 1.10 patch later today. Depending on how minimal I can make it, we may
still be able to put it into 1.10.1, though the window on that i
Dear Ralph, Gilles, and Jeff
Thanks a lot for your effort. Understanding this problem has been a
very interesting exercise for me and has let me understand Open MPI much
better (I think :).
I have given it all a little more thought, and done some more tests on
our production system, and I think
I don't want to pester everyone with all of our release candidates for 1.10.1,
so this will likely be the last general announcement on the users and
announcement lists before we release 1.10.1 (final).
That being said, 1.10.1 rc2 is now available:
http://www.open-mpi.org/software/ompi/v1.10
IIRC, MPI_Comm_spawn should be used to spawn MPI apps only.
And depending on your interconnect, fork might not be supported from an MPI
app.
That being said, I am not sure MPI is the best way to go.
You might want to use the batch manager API to execute tasks on remote
nodes, or third-party tools s
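
To make the first point concrete, here is a minimal sketch of spawning an MPI
child job with MPI_Comm_spawn rather than fork()ing a shell command. The
"./worker" executable and the count of 4 are hypothetical, not from the thread:

#include <mpi.h>

int main(int argc, char *argv[])
{
    MPI_Comm intercomm;
    int errcodes[4];

    MPI_Init(&argc, &argv);

    /* Spawn 4 instances of a (hypothetical) MPI executable; they join the
     * parent through an intercommunicator instead of running as independent
     * fork()ed shell processes. */
    MPI_Comm_spawn("./worker", MPI_ARGV_NULL, 4, MPI_INFO_NULL,
                   0, MPI_COMM_SELF, &intercomm, errcodes);

    /* ... exchange data with the children over intercomm ... */

    MPI_Comm_disconnect(&intercomm);
    MPI_Finalize();
    return 0;
}

The spawned processes are full MPI ranks connected through the
intercommunicator, which is what MPI_Comm_spawn is designed for; relying on
fork() underneath an MPI process is where the interconnect caveat above comes
in.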
2015-10-08 12:09 GMT+02:00 simona bellavista :
>
>
> 2015-10-07 14:59 GMT+02:00 Lisandro Dalcin :
>
>> On 7 October 2015 at 14:54, simona bellavista wrote:
>> > I have written a small code in python 2.7 for launching 4 independent
>> > processes on the shell via subprocess, using the library mpi4p
2015-10-07 14:59 GMT+02:00 Lisandro Dalcin :
> On 7 October 2015 at 14:54, simona bellavista wrote:
> > I have written a small code in python 2.7 for launching 4 independent
> > processes on the shell via subprocess, using the library mpi4py. I am
> > getting ORTE_ERROR_LOG and I would like to u
Hi!
The attached code shows a problem when using an mmap'ed buffer with
MPI_Send and the vader BTL.
With OMPI_MCA_btl='^vader' it works in all cases I have tested.
Intel MPI also has problems with this, failing to receive the complete
data, getting a NULL at position 6116 when the receiver is on
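
The attachment itself is not reproduced in this digest; below is only a minimal
sketch of the pattern being described (a send buffer backed by mmap()), with an
anonymous mapping and a 1 MiB size chosen purely for illustration, and error
checking omitted:

#include <mpi.h>
#include <string.h>
#include <sys/mman.h>

int main(int argc, char *argv[])
{
    const size_t len = 1 << 20;   /* arbitrary 1 MiB buffer */
    int rank;

    MPI_Init(&argc, &argv);
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);

    /* Send/receive buffer backed by mmap() rather than malloc(). */
    char *buf = mmap(NULL, len, PROT_READ | PROT_WRITE,
                     MAP_PRIVATE | MAP_ANONYMOUS, -1, 0);

    if (rank == 0) {
        memset(buf, 'x', len);
        MPI_Send(buf, (int)len, MPI_BYTE, 1, 0, MPI_COMM_WORLD);
    } else if (rank == 1) {
        MPI_Recv(buf, (int)len, MPI_BYTE, 0, 0, MPI_COMM_WORLD,
                 MPI_STATUS_IGNORE);
    }

    munmap(buf, len);
    MPI_Finalize();
    return 0;
}

The workaround mentioned above can also be given on the command line as
mpirun --mca btl '^vader' ..., which is equivalent to setting
OMPI_MCA_btl='^vader' in the environment.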
Hi, Gilles,
I have briefly tested your patch with master. So far everything works. I
must say that what I really like about this version is that with
--report-bindings it actually shows what the heterogeneous architecture
looks like, i.e., the varying number of cores/sockets per compute node. This
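
For anyone following along, the flag is simply added to the mpirun command
line; the process count, mapping, and binding policy below are placeholders:

  mpirun -np 32 --map-by socket --bind-to core --report-bindings ./my_app

Each node's daemon then prints the socket/core mask assigned to every local
rank, which is what makes differences between nodes visible.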