My Open MPI 3.0.x run (the nscf run) was reading data from a standard Quantum 
Espresso input file edited by hand. The preliminary run (the scf run) was also 
done with Open MPI 3.0.x, on a similar hand-edited input file. 

Vahid



> On Jan 18, 2018, at 6:39 PM, Jeff Squyres (jsquyres) <jsquy...@cisco.com> 
> wrote:
> 
> FWIW: If your Open MPI 3.0.x runs are reading data that was written by MPI IO 
> via Open MPI 1.10.x or 1.8.x runs, the data formats may not be compatible 
> (and could lead to errors like you're seeing -- premature end of file, etc.).
> 
> 
>> On Jan 18, 2018, at 5:34 PM, Vahid Askarpour <vh261...@dal.ca> wrote:
>> 
>> Hi Jeff,
>> 
>> I compiled Quantum Espresso/EPW with Open MPI 3.0.x. Open MPI itself was 
>> compiled with the Intel 14 compilers.
>> 
>> A preliminary run for EPW using Quantum Espresso crashed with the following 
>> message:
>> 
>> end of file while reading crystal k points
>> 
>> There are 1728 k points in the input file, and Quantum Espresso, by default, 
>> can read up to 40000 k points.
>> 
>> This error did not occur with openmpi-1.8.1.
>> 
>> So I will just continue to use openmpi-1.8.1 as it does not crash.
>> 
>> Thanks,
>> 
>> Vahid
>> 
>>> On Jan 11, 2018, at 12:50 PM, Jeff Squyres (jsquyres) <jsquy...@cisco.com> 
>>> wrote:
>>> 
>>> You are correct: 3.0.1 has not been released yet.
>>> 
>>> However, our nightly snapshots of the 3.0.x branch are available for 
>>> download.  These are not official releases, but they are great for getting 
>>> users to test what will eventually become an official release (i.e., 3.0.1) 
>>> to see if particular bugs have been fixed.  This is one of the benefits of 
>>> open source.  :-)
>>> 
>>> Here's where the 3.0.1 nightly snapshots are available for download:
>>> 
>>>  https://www.open-mpi.org/nightly/v3.0.x/
>>> 
>>> They are organized by date.
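>>> 
>>> If it helps, a test build of a snapshot typically looks something like this 
>>> (just a sketch; the tarball name is a placeholder for whichever dated 
>>> snapshot you download from that page, and the install prefix is arbitrary):
>>> 
>>>   tar xf openmpi-v3.0.x-<snapshot>.tar.gz
>>>   cd openmpi-v3.0.x-<snapshot>
>>>   ./configure --prefix=$HOME/opt/ompi-3.0.x-nightly
>>>   make -j 4 && make install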
>>> 
>>> 
>>>> On Jan 11, 2018, at 11:34 AM, Vahid Askarpour <vh261...@dal.ca> wrote:
>>>> 
>>>> Hi Jeff,
>>>> 
>>>> I looked for the 3.0.1 version but only found 3.0.0 available for 
>>>> download, so I thought it might take a while for 3.0.1 to become 
>>>> available. Or did I miss something?
>>>> 
>>>> Thanks,
>>>> 
>>>> Vahid
>>>> 
>>>>> On Jan 11, 2018, at 12:04 PM, Jeff Squyres (jsquyres) 
>>>>> <jsquy...@cisco.com> wrote:
>>>>> 
>>>>> Vahid --
>>>>> 
>>>>> Were you able to give it a whirl?
>>>>> 
>>>>> Thanks.
>>>>> 
>>>>> 
>>>>>> On Jan 5, 2018, at 7:58 PM, Vahid Askarpour <vh261...@dal.ca> wrote:
>>>>>> 
>>>>>> Gilles,
>>>>>> 
>>>>>> I will try the 3.0.1rc1 version to see how it goes.
>>>>>> 
>>>>>> Thanks,
>>>>>> 
>>>>>> Vahid
>>>>>> 
>>>>>>> On Jan 5, 2018, at 8:40 PM, Gilles Gouaillardet 
>>>>>>> <gilles.gouaillar...@gmail.com> wrote:
>>>>>>> 
>>>>>>> Vahid,
>>>>>>> 
>>>>>>> This looks like the description of the issue reported at 
>>>>>>> https://github.com/open-mpi/ompi/issues/4336
>>>>>>> The fix is currently available in 3.0.1rc1, and I will backport the 
>>>>>>> fix to the v2.x branch.
>>>>>>> A workaround is to use ROMIO instead of ompio; you can achieve this 
>>>>>>> with
>>>>>>> mpirun --mca io ^ompio ...
>>>>>>> (FWIW, the 1.10 series uses ROMIO by default, so there is no leak out 
>>>>>>> of the box.)
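>>>>>>> 
>>>>>>> For example, a minimal sketch (the executable name, process count, and 
>>>>>>> input/output file names below are placeholders for your actual QE/EPW 
>>>>>>> run):
>>>>>>> 
>>>>>>>   # exclude the ompio component so MPI-IO goes through ROMIO instead
>>>>>>>   mpirun --mca io ^ompio -np 16 pw.x -in nscf.in > nscf.out
>>>>>>> 
>>>>>>>   # equivalently, set the MCA parameter once via the environment
>>>>>>>   export OMPI_MCA_io=^ompio
>>>>>>>   mpirun -np 16 pw.x -in nscf.in > nscf.out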
>>>>>>> 
>>>>>>> IIRC, a possible (and ugly) workaround for the compilation issue is to
>>>>>>> configure --with-ucx=/usr ...
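>>>>>>> 
>>>>>>> As a rough illustration only (the install prefix and compiler variables 
>>>>>>> below are placeholders, and this assumes the system UCX headers and 
>>>>>>> libraries live under /usr):
>>>>>>> 
>>>>>>>   ./configure --prefix=$HOME/opt/openmpi-1.10.7 --with-ucx=/usr \
>>>>>>>               CC=gcc CXX=g++ FC=gfortran
>>>>>>>   make -j 4 && make install
>>>>>>> 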
>>>>>>> That being said, you should really upgrade to a supported version of 
>>>>>>> Open MPI, as previously suggested.
>>>>>>> 
>>>>>>> Cheers,
>>>>>>> 
>>>>>>> Gilles
>>>>>>> 
>>>>>>> On Saturday, January 6, 2018, Jeff Squyres (jsquyres) 
>>>>>>> <jsquy...@cisco.com> wrote:
>>>>>>> You can still give Open MPI 2.1.1 a try.  It should be source 
>>>>>>> compatible with EPW.  Hopefully the behavior is close enough that it 
>>>>>>> should work.
>>>>>>> 
>>>>>>> If not, please encourage the EPW developers to upgrade.  v3.0.x is the 
>>>>>>> current stable series; v1.10.x is ancient.
>>>>>>> 
>>>>>>> 
>>>>>>> 
>>>>>>>> On Jan 5, 2018, at 5:22 PM, Vahid Askarpour <vh261...@dal.ca> wrote:
>>>>>>>> 
>>>>>>>> Thank you, Jeff, for your suggestion to use the v2.1 series.
>>>>>>>> 
>>>>>>>> I am attempting to use openmpi with EPW. On the EPW website 
>>>>>>>> (http://epw.org.uk/Main/DownloadAndInstall), it is stated that:
>>>>>>>> 
>>>>>>>>> Compatibility of EPW
>>>>>>>>> 
>>>>>>>>> EPW is tested and should work on the following compilers and 
>>>>>>>>> libraries:
>>>>>>>>> 
>>>>>>>>> • gcc640 serial
>>>>>>>>> • gcc640 + openmpi-1.10.7
>>>>>>>>> • intel 12 + openmpi-1.10.7
>>>>>>>>> • intel 17 + impi
>>>>>>>>> • PGI 17 + mvapich2.3
>>>>>>>>> EPW is known to have the following incompatibilities with:
>>>>>>>>> 
>>>>>>>>> • openmpi 2.0.2 (but likely all 2.x.x versions): works, but with a 
>>>>>>>>> memory leak. If you open and close a file many times with openmpi 
>>>>>>>>> 2.0.2, the memory increases linearly with the number of times the 
>>>>>>>>> file is opened.
>>>>>>>> 
>>>>>>>> So I am hoping to avoid the 2.x.x series and use the 1.10.7 version 
>>>>>>>> suggested by the EPW developers. However, it appears that this is not 
>>>>>>>> possible.
>>>>>>>> 
>>>>>>>> Vahid
>>>>>>>> 
>>>>>>>>> On Jan 5, 2018, at 5:06 PM, Jeff Squyres (jsquyres) 
>>>>>>>>> <jsquy...@cisco.com> wrote:
>>>>>>>>> 
>>>>>>>>> I forget what the underlying issue was, but this issue just came up 
>>>>>>>>> and was recently fixed:
>>>>>>>>> 
>>>>>>>>> https://github.com/open-mpi/ompi/issues/4345
>>>>>>>>> 
>>>>>>>>> However, the v1.10 series is fairly ancient -- the fix was not 
>>>>>>>>> applied to that series.  The fix was applied to the v2.1.x series, 
>>>>>>>>> and a snapshot tarball containing the fix is available here 
>>>>>>>>> (generally just take the latest tarball):
>>>>>>>>> 
>>>>>>>>> https://www.open-mpi.org/nightly/v2.x/
>>>>>>>>> 
>>>>>>>>> The fix is still pending for the v3.0.x and v3.1.x series (i.e., 
>>>>>>>>> there are pending pull requests that haven't been merged yet, so the 
>>>>>>>>> nightly snapshots for the v3.0.x and v3.1.x branches do not yet 
>>>>>>>>> contain this fix).
>>>>>>>>> 
>>>>>>>>> 
>>>>>>>>> 
>>>>>>>>>> On Jan 5, 2018, at 1:34 PM, Vahid Askarpour <vh261...@dal.ca> wrote:
>>>>>>>>>> 
>>>>>>>>>> I am attempting to install openmpi-1.10.7 on CentOS Linux (7.4.1708) 
>>>>>>>>>> using GCC-6.4.0.
>>>>>>>>>> 
>>>>>>>>>> When compiling, I get the following error:
>>>>>>>>>> 
>>>>>>>>>> make[2]: Leaving directory 
>>>>>>>>>> '/home/vaskarpo/bin/openmpi-1.10.7/ompi/mca/pml/ob1'
>>>>>>>>>> Making all in mca/pml/ucx
>>>>>>>>>> make[2]: Entering directory 
>>>>>>>>>> '/home/vaskarpo/bin/openmpi-1.10.7/ompi/mca/pml/ucx'
>>>>>>>>>> CC       pml_ucx.lo
>>>>>>>>>> CC       pml_ucx_request.lo
>>>>>>>>>> CC       pml_ucx_datatype.lo
>>>>>>>>>> CC       pml_ucx_component.lo
>>>>>>>>>> CCLD     mca_pml_ucx.la
>>>>>>>>>> libtool:   error: require no space between '-L' and '-lrt'
>>>>>>>>>> make[2]: *** [Makefile:1725: mca_pml_ucx.la] Error 1
>>>>>>>>>> make[2]: Leaving directory 
>>>>>>>>>> '/home/vaskarpo/bin/openmpi-1.10.7/ompi/mca/pml/ucx'
>>>>>>>>>> make[1]: *** [Makefile:3261: all-recursive] Error 1
>>>>>>>>>> make[1]: Leaving directory '/home/vaskarpo/bin/openmpi-1.10.7/ompi'
>>>>>>>>>> make: *** [Makefile:1777: all-recursive] Error 1
>>>>>>>>>> 
>>>>>>>>>> Thank you,
>>>>>>>>>> 
>>>>>>>>>> Vahid
>>>>>>>>> 
>>>>>>>>> 
>>>>>>>>> --
>>>>>>>>> Jeff Squyres
>>>>>>>>> jsquy...@cisco.com
>>>>>>>>> 
>>>>>>>>> 
>>>>>>>>> 
>>>>>>>> 
>>>>>>> 
>>>>>>> 
>>>>>>> --
>>>>>>> Jeff Squyres
>>>>>>> jsquy...@cisco.com
>>>>>>> 
>>>>>>> 
>>>>>>> 
>>>>>> 
>>>>> 
>>>>> 
>>>>> -- 
>>>>> Jeff Squyres
>>>>> jsquy...@cisco.com
>>>>> 
>>>>> 
>>>>> 
>>>> 
>>> 
>>> 
>>> -- 
>>> Jeff Squyres
>>> jsquy...@cisco.com
>>> 
>>> 
>>> 
>> 
> 
> 
> -- 
> Jeff Squyres
> jsquy...@cisco.com
> 
> 
> 

