Erik,

These warnings are not important and you can simply ignore them.
FWIW, this is a race condition exposed by the recently added asynchronous
code paths.

I will push a fix tomorrow.

In the meantime, you can run
mpirun --mca oob ^tcp ...
(if you run on one node only)
or
mpirun --mca oob ^usock ...
(if you have an OS X cluster ...)
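The same parameter can also be set through the environment instead of on
the command line (Open MPI picks up OMPI_MCA_<param> variables), which may
be easier in a batch script; for example:

export OMPI_MCA_oob=^tcp    # one node only
# or
export OMPI_MCA_oob=^usock  # OS X cluster
mpirun ...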

Cheers,

Gilles

On Sunday, July 26, 2015, Erik Schnetter <schnet...@gmail.com> wrote:

> Mark,
>
> No, it doesn't need to be 1.8.7.
>
> I just tried v2.x-dev-96-g918650a. This leads to run-time warnings on OS
> X; I see messages such as
>
> [warn] select: Bad file descriptor
>
> Are these important? If not, how can I suppress them?
>
> -erik
>
>
> On Sat, Jul 25, 2015 at 7:49 AM, Mark Santcroos <mark.santcr...@rutgers.edu> wrote:
>
>> Hi Erik,
>>
>> Do you really need 1.8.7? Otherwise you might want to give the latest
>> master a try. Others, including myself, have had more luck with it on
>> Crays, including Edison.
>>
>> Mark
>>
>> > On 25 Jul 2015, at 1:35, Erik Schnetter <schnet...@gmail.com> wrote:
>> >
>> > I want to build Open MPI 1.8.7 on a Cray XC30 (Edison at NERSC). I've
>> tried various configuration options, but I always encounter either
>> Open MPI build errors, application build errors, or run-time errors.
>> >
>> > I'm currently looking at <
>> http://www.open-mpi.org/community/lists/users/2015/06/27230.php>, which
>> seems to describe my case. I'm now configuring Open MPI without any options,
>> except setting compilers to clang/gfortran and pointing it to a self-built
>> hwloc. For completeness, here are my configure options as recorded by
>> config.status:
>> >
>> >
>> '/project/projectdirs/m152/schnette/edison/software/src/openmpi-1.8.7/src/openmpi-1.8.7/configure'
>> '--prefix=/project/projectdirs/m152/schnette/edison/software/openmpi-1.8.7'
>> '--with-hwloc=/project/projectdirs/m152/schnette/edison/software/hwloc-1.11.0'
>> '--disable-vt'
>> 'CC=/project/projectdirs/m152/schnette/edison/software/llvm-3.6.2/bin/wrap-clang'
>> 'CXX=/project/projectdirs/m152/schnette/edison/software/llvm-3.6.2/bin/wrap-clang++'
>> 'FC=/project/projectdirs/m152/schnette/edison/software/gcc-5.2.0/bin/wrap-gfortran'
>> 'CFLAGS=-I/opt/ofed/include
>> -I/project/projectdirs/m152/schnette/edison/software/hwloc-1.11.0/include'
>> 'CXXFLAGS=-I/opt/ofed/include
>> -I/project/projectdirs/m152/schnette/edison/software/hwloc-1.11.0/include'
>> 'LDFLAGS=-L/opt/ofed/lib64
>> -L/project/projectdirs/m152/schnette/edison/software/hwloc-1.11.0/lib
>> -Wl,-rpath,/project/projectdirs/m152/schnette/edison/software/hwloc-1.11.0/lib'
>> 'LIBS=-lhwloc -lpthread -lpthread'
>> '--with-wrapper-ldflags=-L/project/projectdirs/m152/schnette/edison/software/hwloc-1.11.0/lib
>> -Wl,-rpath,/project/projectdirs/m152/schnette/edison/software/hwloc-1.11.0/lib'
>> '--with-wrapper-libs=-lhwloc -lpthread'
>> >
>> > This builds and installs fine, and works when running on a single node.
>> However, multi-node runs are stalling: The queue starts the job, but mpirun
>> produces no output. The "-v" option to mpirun doesn't help.
>> >
>> > When I use aprun instead of mpirun to start my application, all
>> > processes think they are rank 0.
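>> > For reference, a minimal rank check along these lines shows the problem
>> > (a hypothetical sketch, not my actual application; standard MPI calls
>> > only):
>> >
>> > /* rankcheck.c: each process prints its rank; under aprun every
>> >    process here reports rank 0 instead of a distinct rank */
>> > #include <mpi.h>
>> > #include <stdio.h>
>> >
>> > int main(int argc, char **argv) {
>> >     MPI_Init(&argc, &argv);
>> >     int rank, size;
>> >     MPI_Comm_rank(MPI_COMM_WORLD, &rank);
>> >     MPI_Comm_size(MPI_COMM_WORLD, &size);
>> >     printf("rank %d of %d\n", rank, size);
>> >     MPI_Finalize();
>> >     return 0;
>> > }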
>> >
>> > Do you have any pointers on how to debug this?
>> >
>> > -erik
>> >
>> > --
>> > Erik Schnetter <schnet...@gmail.com>
>> > http://www.perimeterinstitute.ca/personal/eschnetter/
>>
>>
>
>
>
> --
> Erik Schnetter <schnet...@gmail.com>
> http://www.perimeterinstitute.ca/personal/eschnetter/
>
