Ah, my mistake -- I read the warning message too quickly:
> mca_base_component_repository_open: unable to open mca_io_romio314:
> libgpfs.so: cannot open shared object file: No such file or directory
> (ignored)
This simply means that the mca_io_romio314 plugin failed to load because it was
unable to find libgpfs.so at run time; the component was simply skipped, which
is why the message ends in "(ignored)".
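One way to confirm exactly which shared library the component is missing is to inspect the plugin directly; a minimal sketch, assuming an install prefix of ~/bin/openmpi-v3.0 (adjust to your own layout):

  ldd ~/bin/openmpi-v3.0/lib/openmpi/mca_io_romio314.so | grep "not found"   # should show libgpfs.so if that is the missing dependency
  ompi_info | grep "MCA io"                                                  # lists the io components that could actually be loaded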
No, I installed this version of openmpi only once, and with intel14.
Vahid
> On Jan 30, 2018, at 4:41 PM, Jeff Squyres (jsquyres)
> wrote:
>
> Did you install one version of Open MPI over another version?
>
>https://www.open-mpi.org/faq/?category=building#install-overwrite
>
>
>> On Jan 30, 2018, at 2:09 PM, Vahid Askarpour wrote:
Did you install one version of Open MPI over another version?
https://www.open-mpi.org/faq/?category=building#install-overwrite
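For anyone hitting that FAQ item, the usual pattern is to give each build its own prefix, or to remove the old tree before re-installing; a sketch with hypothetical paths:

  rm -rf $HOME/bin/openmpi-v3.0            # wipe the previous install tree first
  ./configure --prefix=$HOME/bin/openmpi-v3.0
  make -j 4 all install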
> On Jan 30, 2018, at 2:09 PM, Vahid Askarpour wrote:
>
> This is just an update on how things turned out with openmpi-3.0.x.
>
> I compiled both EPW and openmpi with intel14.
This is just an update on how things turned out with openmpi-3.0.x.
I compiled both EPW and openmpi with intel14. In the past, EPW crashed for both
intel16 and 17. However, with intel14 and openmpi/1.8.8 , I have been getting
results consistently.
The nscf.in worked with the -i argument. Howeve
Jeff,
I guess I was referring to
https://mail-archive.com/users@lists.open-mpi.org/msg29818.html
that was reported and fixed in August 2016, so the current issue is
likely unrelated.
Since you opened a GitHub issue for that, let's all follow up at
https://github.com/open-mpi/ompi/issues/
On Jan 23, 2018, at 8:33 AM, Gilles Gouaillardet
wrote:
>
> There used to be a bug in the IOF part, but I am pretty sure this has already
> been fixed.
Gilles: can you cite what you're talking about?
Edgar was testing on master, so if there was some kind of IOF fix, I would
assume that it would already be included there.
I ran all my tests with gcc 6.4
Thanks
Edgar
On 1/23/2018 7:40 AM, Vahid Askarpour wrote:
Gilles,
I have not tried compiling the latest openmpi with GCC. I am waiting
to see how the intel version turns out before attempting GCC.
Cheers,
Vahid
On Jan 23, 2018, at 9:33 AM, Gilles Gouaillardet wrote:
Fair enough,
To be on the safe side, I encourage you to use the latest Intel compilers
Cheers,
Gilles
Vahid Askarpour wrote:
>Gilles,
>
>
>I have not tried compiling the latest openmpi with GCC. I am waiting to see
>how the intel version turns out before attempting GCC.
>
>
>Cheers,
>
>
>Vahid
Gilles,
I have not tried compiling the latest openmpi with GCC. I am waiting to see how
the intel version turns out before attempting GCC.
Cheers,
Vahid
On Jan 23, 2018, at 9:33 AM, Gilles Gouaillardet
<gilles.gouaillar...@gmail.com> wrote:
Vahid,
There used to be a bug in the IOF part, but I am pretty sure this has already
been fixed.
Vahid,
There used to be a bug in the IOF part, but I am pretty sure this has already
been fixed.
Does the issue also occur with GNU compilers?
There used to be an issue with the Intel Fortran runtime (short reads/writes
were silently ignored), and that was also fixed some time ago.
Cheers,
Gilles
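As a sketch of what "trying the GNU compilers" amounts to, the compiler set is chosen when Open MPI is configured; the prefixes below are hypothetical:

  ./configure CC=gcc CXX=g++ FC=gfortran --prefix=$HOME/openmpi-3.0-gcc   && make -j 4 all install
  ./configure CC=icc CXX=icpc FC=ifort   --prefix=$HOME/openmpi-3.0-intel && make -j 4 all install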
This would work for Quantum Espresso input. I am waiting to see what happens to
EPW. I don’t think EPW accepts the -i argument. I will report back once the EPW
job is done.
Cheers,
Vahid
On Jan 22, 2018, at 6:05 PM, Edgar Gabriel <egabr...@central.uh.edu> wrote:
well, my final comment on this topic, as somebody suggested earlier in this
email chain, if you provide the input with the -i argument instead of piping
from standard input, things seem to work ...
well, my final comment on this topic, as somebody suggested earlier in
this email chain, if you provide the input with the -i argument instead
of piping from standard input, things seem to work as far as I can see
(disclaimer: I do not know what the final outcome should be, I just see
that the run no longer aborts with the end-of-file error).
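For reference, the two invocation styles being compared look like this (commands taken from the EPW instructions quoted earlier in the thread, with paths shortened; treat it as a sketch):

  # piping: mpiexec has to forward stdin to rank 0, which then writes input_tmp.in
  mpiexec -np 64 pw.x -npool 64 < nscf.in > nscf.out
  # explicit -i: pw.x opens the input file itself, no stdin forwarding involved
  mpiexec -np 64 pw.x -npool 64 -i nscf.in > nscf.out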
after some further investigation, I am fairly confident that this is not
an MPI I/O problem.
The input file input_tmp.in is generated in this sequence of
instructions (which is in Modules/open_close_input_file.f90)
---
IF ( TRIM(input_file_) /= ' ' ) THEN
!
! copy file to be opened ...
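Since input_tmp.in is just a copy of the piped input, a quick sanity check after a failed run is to compare it against the original file; a sketch, assuming the input is nscf.in and the run directory is the current one:

  wc -l nscf.in input_tmp.in    # line counts should match if the copy is complete
  diff nscf.in input_tmp.in     # a truncated copy shows up as missing trailing lines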
this is most likely a different issue. The bug in the original case also
appears on a local file system/disk; it doesn't have to be NFS.
That being said, I would urge you to submit a new issue (or a new email
thread). I would be more than happy to look into your problem as well,
since we sub
Not sure if this is related, and I have not had time to investigate it much
or reduce it, but I am also having issues with 3.0.x. There are a couple of
layers of CGNS and HDF5, but I am seeing:
mpirun --mca io romio314 --mca btl self,vader,openib...
-- works perfectly
mpirun --mca btl self,vader,openib...
-- does not (this is where the problem shows up)
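To narrow down which io component a given run actually selects, the MCA verbosity knobs and ompi_info can help; a sketch, assuming the standard <framework>_base_verbose parameter names (./my_app is a placeholder):

  mpirun --mca io_base_verbose 100 --mca btl self,vader,openib -np 4 ./my_app
  ompi_info --param io all --level 9    # shows the available io components and their selection parameters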
Concerning the following error
from pw_readschemafile : error # 1
xml data file not found
The nscf run uses files generated by the scf.in run. So I first run scf.in and
when it finishes, I run nscf.in. If you have done this and still get the above
error, then this could be another issue.
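In other words, the expected order of operations is roughly the following (paths and pool counts are illustrative, borrowed from earlier in the thread):

  mpiexec -np 64 pw.x -npool 64 -i scf.in  > scf.out    # writes the .save directory / xml data files
  mpiexec -np 64 pw.x -npool 64 -i nscf.in > nscf.out   # reads the data files produced by the scf step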
ok, here is what I found out so far; I will have to stop here for today,
however:
1. I can in fact reproduce your bug on my systems.
2. I can confirm that the problem occurs both with romio314 and ompio.
I *think* the issue is that the input_tmp.in file is incomplete. In both
cases (ompio and romio314) ...
Hi Edgar,
Just to let you know that the nscf run with --mca io ompio crashed like the
other two runs.
Thank you,
Vahid
On Jan 19, 2018, at 12:46 PM, Edgar Gabriel <egabr...@central.uh.edu> wrote:
ok, thank you for the information. Two short questions and requests. I have
qe-6.2.1 compiled and running on my system ...
ok, thank you for the information. Two short questions and requests. I
have qe-6.2.1 compiled and running on my system (although it is with
gcc-6.4 instead of the intel compiler), and I am currently running the
parallel test suite. So far, all the tests passed, although it is still
running.
M
To run EPW, the command for running the preliminary nscf run is
(http://epw.org.uk/Documentation/B-dopedDiamond):
~/bin/openmpi-v3.0/bin/mpiexec -np 64
/home/vaskarpo/bin/qe-6.0_intel14_soc/bin/pw.x -npool 64 < nscf.in > nscf.out
So I submitted it with the following command:
~/bin/openmpi-v3.0
thanks, that is interesting. Since /scratch is a Lustre file system,
Open MPI should actually utilize romio314 for that anyway, not ompio.
What I have however seen happen on at least one occasion is that ompio was
still used, since (I suspect) romio314 didn't correctly pick up the
configuration ...
From: Vahid Askarpour
Sent: Friday, January 19, 2018 8:15 AM
To: Open MPI Users
Subject: Re: [OMPI users] Installation of openmpi-1.10.7 fails
Gilles,
I have submitted that job with --mca io romio314. If it finishes, I will let
you know. It is sitting in Conte’s queue at Purdue.
As to Edgar’s question about the file system, here is the output of df -Th:
Gilles,
I have submitted that job with --mca io romio314. If it finishes, I will let
you know. It is sitting in Conte’s queue at Purdue.
As to Edgar’s question about the file system, here is the output of df -Th:
vaskarpo@conte-fe00:~ $ df -Th
Filesystem  Type  Size  Used  Avail  Use%
Vahid,
In the v1.10 series, the default MPI-IO component was ROMIO based, and
in the v3 series, it is now ompio.
You can force the latest Open MPI to use the ROMIO based component with
mpirun --mca io romio314 ...
That being said, your description (e.g. a hand-edited file) suggests
that I/O is not the culprit here.
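For completeness, a quick way to see which MPI-IO components an installation ships and to force one of them (output format varies between versions; ./my_app is a placeholder):

  ompi_info | grep "MCA io"                 # e.g. ompio and/or romio314
  mpirun --mca io romio314 -np 4 ./my_app   # force the ROMIO-based component, as above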
I will try to reproduce this problem with 3.0.x, but it might take me a
couple of days to get to it.
Since it seemed to have worked with 2.0.x (except for the running out of
file handles problem), there is the suspicion that one of the fixes that we
introduced since then is the problem.
What file system are you running on?
On Jan 18, 2018, at 5:53 PM, Vahid Askarpour wrote:
>
> My openmpi3.0.x run (called nscf run) was reading data from a routine Quantum
> Espresso input file edited by hand. The preliminary run (called scf run) was
> done with openmpi3.0.x on a similar input file also edited by hand.
Gotcha.
W
My openmpi3.0.x run (called nscf run) was reading data from a routine Quantum
Espresso input file edited by hand. The preliminary run (called scf run) was
done with openmpi3.0.x on a similar input file also edited by hand.
Vahid
> On Jan 18, 2018, at 6:39 PM, Jeff Squyres (jsquyres)
> wrote:
FWIW: If your Open MPI 3.0.x runs are reading data that was written by MPI IO
via Open MPI 1.10.x or 1.8.x runs, the data formats may not be compatible (and
could lead to errors like you're seeing -- premature end of file, etc.).
> On Jan 18, 2018, at 5:34 PM, Vahid Askarpour wrote:
>
> Hi Jeff,
Hi Jeff,
I compiled Quantum Espresso/EPW with openmpi-3.0.x. The openmpi was compiled
with intel14.
A preliminary run for EPW using Quantum Espresso crashed with the following
message:
end of file while reading crystal k points
There are 1728 k points in the input file and Quantum Espresso, b
Great. I will try the 3.0.x version to see how it goes.
On a side note, I did manage to run EPW without getting memory leaks using
openmpi-1.8.8 and gcc-4.8.5. These are the tools that apparently worked when
the code was developed as seen on their Test Farm
(http://epw.org.uk/Main/TestFarm).
T
You are correct: 3.0.1 has not been released yet.
However, our nightly snapshots of the 3.0.x branch are available for download.
These are not official releases, but they are great for getting users to test
what will eventually become an official release (i.e., 3.0.1) to see if
particular bugs have been fixed.
Hi Jeff,
I looked for the 3.0.1 version but I only found the 3.0.0 version available for
download. So I thought it may take a while for the 3.0.1 to become available.
Or did I miss something?
Thanks,
Vahid
> On Jan 11, 2018, at 12:04 PM, Jeff Squyres (jsquyres)
> wrote:
>
> Vahid --
>
> Were you able to give it a whirl?
Vahid --
Were you able to give it a whirl?
Thanks.
> On Jan 5, 2018, at 7:58 PM, Vahid Askarpour wrote:
>
> Gilles,
>
> I will try the 3.0.1rc1 version to see how it goes.
>
> Thanks,
>
> Vahid
>
>> On Jan 5, 2018, at 8:40 PM, Gilles Gouaillardet
>> wrote:
>>
>> Vahid,
>>
>> This looks like the description of the issue reported at
>> https://github.com/open-mpi/ompi/issues/4336
Gilles,
I will try the 3.0.1rc1 version to see how it goes.
Thanks,
Vahid
On Jan 5, 2018, at 8:40 PM, Gilles Gouaillardet
<gilles.gouaillar...@gmail.com> wrote:
Vahid,
This looks like the description of the issue reported at
https://github.com/open-mpi/ompi/issues/4336
The fix is currently available in 3.0.1rc1, and I will back port the fix for
the v2.x branch.
Vahid,
This looks like the description of the issue reported at
https://github.com/open-mpi/ompi/issues/4336
The fix is currently available in 3.0.1rc1, and I will back port the fix for
the v2.x branch.
A workaround is to use ROMIO instead of ompio; you can achieve this with
mpirun --mca io ^ompio
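The same selection can also be made through the environment rather than the command line, which is sometimes more convenient in batch scripts; a sketch:

  export OMPI_MCA_io=^ompio     # exclude ompio so the ROMIO-based component is used
  mpirun -np 64 pw.x -npool 64 -i nscf.in > nscf.out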
You can still give Open MPI 2.1.1 a try. It should be source compatible with
EPW. Hopefully the behavior is close enough that it should work.
If not, please encourage the EPW developers to upgrade. v3.0.x is the current
stable series; v1.10.x is ancient.
> On Jan 5, 2018, at 5:22 PM, Vahid
Thank you, Jeff, for your suggestion to use the v2.1 series.
I am attempting to use openmpi with EPW. On the EPW website
(http://epw.org.uk/Main/DownloadAndInstall), it is stated that:
Compatibility of EPW
EPW is tested and should work on the following compilers and libraries:
* gcc640 ser
I forget what the underlying issue was, but this issue just came up and was
recently fixed:
https://github.com/open-mpi/ompi/issues/4345
However, the v1.10 series is fairly ancient -- the fix was not applied to that
series. The fix was applied to the v2.1.x series, and a snapshot tarball is
available for download.
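For anyone grabbing such a snapshot, it builds the same way as a release tarball; a sketch with a hypothetical snapshot name and prefix:

  tar xf openmpi-v2.x-201801XXXXXX.tar.bz2    # hypothetical nightly snapshot file name
  cd openmpi-v2.x-201801XXXXXX
  ./configure --prefix=$HOME/bin/openmpi-v2.x
  make -j 4 all install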
I am attempting to install openmpi-1.10.7 on CentOS Linux (7.4.1708) using
GCC-6.4.0.
When compiling, I get the following error:
make[2]: Leaving directory '/home/vaskarpo/bin/openmpi-1.10.7/ompi/mca/pml/ob1'
Making all in mca/pml/ucx
make[2]: Entering directory '/home/vaskarpo/bin/openmpi-1.10