this is most likely a different issue. The bug in the original case also
appears on a local file system/disk; it doesn't have to be NFS.
That being said, I would urge you to submit a new issue (or a new email
thread). I would be more than happy to look into your problem as well,
since we sub
Not sure if this is related, and I have not had time to investigate it
much or reduce it, but I am also having issues with 3.0.x. There are a
couple of layers of cgns and hdf5 involved, but I am seeing:
mpirun --mca io romio314 --mca btl self,vader,openib...
-- works perfectly
mpirun --mca btl self,vader,openib.
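Presumably the second command, without romio314 forced, is the failing case.
A quick way to see which io components a given Open MPI build offers, and to
confirm which one is actually selected at runtime, is something like the
following (assuming a standard 3.0.x install; the verbose parameter name is my
best guess and may differ):
ompi_info | grep "MCA io"
mpirun --mca io_base_verbose 100 --mca io romio314 -np 4 ./my_app
Here ./my_app and -np 4 are only placeholders; the verbose output should say
whether ompio or romio314 is used when a file is opened.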
Concerning the following error:
from pw_readschemafile : error # 1
xml data file not found
The nscf run uses files generated by the scf.in run. So I first run scf.in and
when it finishes, I run nscf.in. If you have done this and still get the above
error, then this could be ano
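For reference, a minimal version of that sequence, borrowing the pw.x
invocation quoted elsewhere in this thread (adjust -np and -npool to your own
setup), would be:
mpiexec -np 64 pw.x -npool 64 < scf.in > scf.out
mpiexec -np 64 pw.x -npool 64 < nscf.in > nscf.out
The nscf step reads the charge density and xml data file written by the scf
step, so both inputs must point to the same outdir and prefix.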
ok, here is what I found out so far; I will have to stop here for today,
however:
1. I can in fact reproduce your bug on my systems.
2. I can confirm that the problem occurs both with romio314 and ompio.
I *think* the issue is that the input_tmp.in file is incomplete. In both
cases (ompio and
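A simple way to check that hypothesis, assuming pw.x copies its stdin into
input_tmp.in in the working directory, is to compare the temporary file
against the original input:
wc -c nscf.in input_tmp.in
diff nscf.in input_tmp.in
If input_tmp.in comes out shorter than nscf.in, the input really is being
truncated on the way in.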
Hi Edgar,
Just to let you know that the nscf run with --mca io ompio crashed like the
other two runs.
Thank you,
Vahid
On Jan 19, 2018, at 12:46 PM, Edgar Gabriel
<egabr...@central.uh.edu> wrote:
ok, thank you for the information. Two short questions and requests. I
have qe-6.2.1 compiled and running on my system (although it is with
gcc-6.4 instead of the intel compiler), and I am currently running the
parallel test suite. So far all the tests have passed, although the suite
is still running.
To run EPW, the command for the preliminary nscf run is
(http://epw.org.uk/Documentation/B-dopedDiamond):
~/bin/openmpi-v3.0/bin/mpiexec -np 64
/home/vaskarpo/bin/qe-6.0_intel14_soc/bin/pw.x -npool 64 < nscf.in > nscf.out
So I submitted it with the following command:
~/bin/openmpi-v3.0
thanks, that is interesting. Since /scratch is a Lustre file system,
Open MPI should actually utilize romio314 for that anyway, not ompio.
What I have seen happen on at least one occasion, however, is that ompio
was still used because (I suspect) romio314 didn't correctly pick up the
configuration
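To double-check what file system the run directory actually sits on (standard
Linux tools, plus the Lustre client utilities if they are installed):
df -Th /scratch
lfs getstripe -d /scratch    # Lustre-only; fails on non-Lustre mounts
Forcing the component explicitly with --mca io romio314, as already suggested
in this thread, takes that selection logic out of the picture entirely.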
Hi Vahid,
This may be a red herring, but are you using a redirect or -i for the QE input?
If you are running "pw.x < input" try running with "pw.x -i input".
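Applied to the nscf command quoted earlier in this thread, that would look
roughly like this (same paths and options, only the redirect replaced by -i):
~/bin/openmpi-v3.0/bin/mpiexec -np 64 \
  /home/vaskarpo/bin/qe-6.0_intel14_soc/bin/pw.x -npool 64 -i nscf.in > nscf.out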
John
-----Original Message-----
From: users [mailto:users-boun...@lists.open-mpi.org] On Behalf Of Vahid Askarpour
Sent: Friday, January
Gilles,
I have submitted that job with --mca io romio314. If it finishes, I will let
you know. It is sitting in Conte’s queue at Purdue.
As to Edgar’s question about the file system, here is the output of df -Th:
vaskarpo@conte-fe00:~ $ df -Th
Filesystem     Type  Size  Used  Avail  Use%