Kewl! Let us know if it breaks again.
> On Apr 26, 2015, at 4:29 PM, Andy Riebs <andy.ri...@hp.com> wrote:
>
> Yes, it just worked -- I took the old command line, just to ensure that I was
> testing the correct problem, and it worked. Then I remembered that I had set
> OMPI_MCA_plm_rsh_pass_path and OMPI_MCA_plm_rsh_pass_libpath in my test
> setup, so I removed those from my environment, ran again, and it still worked!
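>
> (For the record, a sketch of what that test setup looked like -- any MCA
> param can be set through the environment by prefixing its name with
> OMPI_MCA_; the values below are placeholders, not the exact ones I used:)
>
> $ export OMPI_MCA_plm_rsh_pass_path=/home/ariebs/mic/mpi-nightly/bin
> $ export OMPI_MCA_plm_rsh_pass_libpath=$MIC_LD_LIBRARY_PATH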
>
> Whatever it is that you're doing Ralph, keep it up :-)
>
> Regardless of the cause or result, thanks a million for poking at this!
>
> Andy
>
> On 04/26/2015 10:35 AM, Ralph Castain wrote:
>> Not intentionally - I did add that new MCA param as we discussed, but don’t
>> recall making any other changes in this area.
>>
>> There have been some other build system changes made as a result of more
>> extensive testing of the 1.8 release candidate - it is possible that
>> something in that area had an impact here.
>>
>> Are you saying it just works, even without passing the new param?
>>
>>
>>> On Apr 26, 2015, at 6:39 AM, Andy Riebs <andy.ri...@hp.com> wrote:
>>>
>>> Hi Ralph,
>>>
>>> Did you solve this problem in a more general way? I finally sat down this
>>> morning to try this with the openmpi-dev-1567-g11e8c20.tar.bz2 nightly kit
>>> from last week, and can't reproduce the problem at all.
>>>
>>> Andy
>>>
>>> On 04/16/2015 12:15 PM, Ralph Castain wrote:
>>>> Sorry - I had to revert the commit due to a reported MTT problem. I'll
>>>> reinsert it after I get home and can debug the problem this weekend.
>>>>
>>>> On Thu, Apr 16, 2015 at 9:41 AM, Andy Riebs <andy.ri...@hp.com> wrote:
>>>> Hi Ralph,
>>>>
>>>> If I did this right (NEVER a good bet :-) ), it didn't work...
>>>>
>>>> Using last night's master nightly, openmpi-dev-1515-gc869490.tar.bz2, I
>>>> built with the same script as yesterday, but with the LDFLAGS="-Wl,..."
>>>> settings removed:
>>>>
>>>> $ ./configure --prefix=/home/ariebs/mic/mpi-nightly CC="icc -mmic" CXX="icpc -mmic" \
>>>>     --build=x86_64-unknown-linux-gnu --host=x86_64-k1om-linux \
>>>>     AR=x86_64-k1om-linux-ar RANLIB=x86_64-k1om-linux-ranlib LD=x86_64-k1om-linux-ld \
>>>>     --enable-mpirun-prefix-by-default --disable-io-romio --disable-mpi-fortran \
>>>>     --enable-debug --enable-mca-no-build=btl-usnic,btl-openib,common-verbs,oob-ud
>>>> $ make
>>>> $ make install
>>>> ...
>>>> make[1]: Leaving directory
>>>> `/home/ariebs/mic/openmpi-dev-1515-gc869490/test'
>>>> make[1]: Entering directory `/home/ariebs/mic/openmpi-dev-1515-gc869490'
>>>> make[2]: Entering directory `/home/ariebs/mic/openmpi-dev-1515-gc869490'
>>>> make install-exec-hook
>>>> make[3]: Entering directory `/home/ariebs/mic/openmpi-dev-1515-gc869490'
>>>> make[3]: ./config/find_common_syms: Command not found
>>>> make[3]: [install-exec-hook] Error 127 (ignored)
>>>> make[3]: Leaving directory `/home/ariebs/mic/openmpi-dev-1515-gc869490'
>>>> make[2]: Nothing to be done for `install-data-am'.
>>>> make[2]: Leaving directory `/home/ariebs/mic/openmpi-dev-1515-gc869490'
>>>> make[1]: Leaving directory `/home/ariebs/mic/openmpi-dev-1515-gc869490'
>>>> $
>>>>
>>>> But it seems to finish the install.
>>>>
>>>> I then tried to run, adding the new MCA arguments:
>>>>
>>>> $ shmemrun -x SHMEM_SYMMETRIC_HEAP_SIZE=1M -mca plm_rsh_pass_path $PATH \
>>>>     -mca plm_rsh_pass_libpath $MIC_LD_LIBRARY_PATH -H mic0,mic1 -n 2 ./mic.out
>>>> /home/ariebs/mic/mpi-nightly/bin/orted: error while loading shared
>>>> libraries: libimf.so: cannot open shared object file: No such file or
>>>> directory
>>>> ...
>>>> $ echo $MIC_LD_LIBRARY_PATH
>>>> /opt/intel/15.0/composer_xe_2015.2.164/compiler/lib/mic:/opt/intel/15.0/composer_xe_2015.2.164/mpirt/lib/mic:/opt/intel/mic/coi/device-linux-release/lib:/opt/intel/mic/myo/lib:/opt/intel/15.0/composer_xe_2015.2.164/ipp/lib/lib/mic:/opt/intel/mic/coi/device-linux-release/lib:/opt/intel/mic/myo/lib:/opt/intel/15.0/composer_xe_2015.2.164/compiler/lib/mic:/opt/intel/15.0/composer_xe_2015.2.164/mkl/lib/mic:/opt/intel/15.0/composer_xe_2015.2.164/tbb/lib/mic
>>>> $ ls /opt/intel/15.0/composer_xe_2015.2.164/compiler/lib/mic/libimf.*
>>>> /opt/intel/15.0/composer_xe_2015.2.164/compiler/lib/mic/libimf.a
>>>> /opt/intel/15.0/composer_xe_2015.2.164/compiler/lib/mic/libimf.so
>>>> $
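>>>>
>>>> (A sanity check worth trying here -- hypothetical, not something I've run:
>>>> use ldd over ssh to see whether orted on the card can resolve libimf.so:)
>>>>
>>>> $ ssh mic0 ldd /home/ariebs/mic/mpi-nightly/bin/orted | grep libimf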
>>>>
>>>>
>>>>
>>>> On 04/16/2015 07:22 AM, Ralph Castain wrote:
>>>>> FWIW: I just added (last night) a pair of new MCA params for this purpose:
>>>>>
>>>>> plm_rsh_pass_path <foo>    prepends the designated path to the remote
>>>>> shell's PATH prior to executing orted
>>>>> plm_rsh_pass_libpath <foo> does the same for the remote LD_LIBRARY_PATH
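>>>>>
>>>>> A usage sketch (host names and paths are placeholders, untested):
>>>>>
>>>>> $ mpirun -mca plm_rsh_pass_path /remote/install/bin \
>>>>>     -mca plm_rsh_pass_libpath /remote/install/lib \
>>>>>     -H node1,node2 -n 2 ./a.out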
>>>>>
>>>>> I believe that will resolve the problem for Andy regardless of the
>>>>> compiler used. It's in the master now, waiting for someone to verify it
>>>>> before it is added to 1.8.5. Sadly, I am away from any cluster for the
>>>>> rest of this week, so I'd welcome anyone who has a chance to test it.
>>>>>
>>>>>
>>>>> On Thu, Apr 16, 2015 at 2:57 AM, Thomas Jahns <ja...@dkrz.de> wrote:
>>>>> Hello,
>>>>>
>>>>> On Apr 15, 2015, at 02:11, Gilles Gouaillardet wrote:
>>>>>> what about reconfiguring Open MPI with
>>>>>> LDFLAGS="-Wl,-rpath,/opt/intel/15.0/composer_xe_2015.2.164/compiler/lib/mic"
>>>>>> ?
>>>>>>
>>>>>> IIRC, another option is: LDFLAGS="-static-intel"
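>>>>>>
>>>>>> For example, a sketch of the rpath variant (reusing the library path
>>>>>> reported earlier in this thread; the other configure arguments are elided):
>>>>>>
>>>>>> $ ./configure \
>>>>>>     LDFLAGS="-Wl,-rpath,/opt/intel/15.0/composer_xe_2015.2.164/compiler/lib/mic" \
>>>>>>     ...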
>>>>>
>>>>>
>>>>> let me first state that I have no experience developing for MIC. But
>>>>> regarding the Intel runtime libraries, the only sane option in my opinion
>>>>> is to use the icc.cfg/ifort.cfg/icpc.cfg files that sit in the same
>>>>> directory as the corresponding compiler binaries, and to add a line like
>>>>>
>>>>> -Wl,-rpath,/path/to/composerxe/lib/intel??
>>>>>
>>>>> to that file.
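>>>>>
>>>>> (Concretely, for the installation discussed in this thread, that would
>>>>> mean appending something like the following line to the icc.cfg that sits
>>>>> next to the icc binary -- a sketch, path as reported earlier:)
>>>>>
>>>>> -Wl,-rpath,/opt/intel/15.0/composer_xe_2015.2.164/compiler/lib/mic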
>>>>>
>>>>> Regards, Thomas
>>>>> --
>>>>> Thomas Jahns
>>>>> DKRZ GmbH, Department: Application software
>>>>>
>>>>> Deutsches Klimarechenzentrum
>>>>> Bundesstraße 45a
>>>>> D-20146 Hamburg
>>>>>
>>>>> Phone: +49-40-460094-151
>>>>> Fax: +49-40-460094-270
>>>>> Email: Thomas Jahns <ja...@dkrz.de>