Hello,
I am attempting to modify a relatively large code (Quantum Espresso/EPW) and
here I will try to summarize the problem in general terms.
I am using an Open MPI-compiled Fortran 90 code in which, midway through the
code, say 10 points x(3,10) are broadcast across say 4 nodes. The index 3
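(For reference, a minimal Fortran sketch of a broadcast like the one described above; the array name and shape follow the description, while everything else -- the root rank, the values, the program scaffolding -- is an assumption for illustration, not code from EPW.)

  program bcast_sketch
    use mpi
    implicit none
    integer :: ierr, rank
    double precision :: x(3,10)   ! 10 points with 3 components each, as described

    call MPI_Init(ierr)
    call MPI_Comm_rank(MPI_COMM_WORLD, rank, ierr)

    if (rank == 0) x = 1.0d0      ! root fills the array (placeholder values)

    ! count is the total number of elements (3*10); after this call every
    ! rank holds the same copy of x
    call MPI_Bcast(x, size(x), MPI_DOUBLE_PRECISION, 0, MPI_COMM_WORLD, ierr)

    call MPI_Finalize(ierr)
  end program bcast_sketch

As the follow-up below notes, any other variable that the non-root ranks use after this point has to be broadcast in the same way; otherwise those ranks compute with uninitialized values.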
Hi George and Jeff,
Thank you for taking the time to respond to my query. You pointed me in the
right direction. Some of the variables involved in the calculation of B were
not broadcast. In addition,
a double do-loop combined with an IF statement was overwriting the correct
B values. I
I am attempting to install openmpi-1.10.7 on CentOS Linux (7.4.1708) using
GCC-6.4.0.
When compiling, I get the following error:
make[2]: Leaving directory '/home/vaskarpo/bin/openmpi-1.10.7/ompi/mca/pml/ob1'
Making all in mca/pml/ucx
make[2]: Entering directory '/home/vaskarpo/bin/openmpi-1.10
yet, so the nightly snapshots
for the v3.0.x and v3.1.x branches do not yet contain this fix).
On Jan 5, 2018, at 1:34 PM, Vahid Askarpour <vh261...@dal.ca> wrote:
I am attempting to install openmpi-1.10.7 on CentOS Linux (7.4.1708) using
GCC-6.4.0.
When compiling, I get the following error:
Give Open MPI 2.1.1 a try. It should be source compatible with
EPW. Hopefully the behavior is close enough that it should work.
If not, please encourage the EPW developers to upgrade. v3.0.x is the current
stable series; v1.10.x is ancient.
> On Jan 5, 2018, at 5:22 PM, Vahid Askarpour
> Vahid --
>
> Were you able to give it a whirl?
>
> Thanks.
>
>
>> On Jan 5, 2018, at 7:58 PM, Vahid Askarpour wrote:
>>
>> Gilles,
>>
>> I will try the 3.0.1rc1 version to see how it goes.
>>
>> Thanks,
>>
>> Vahid
https://www.open-mpi.org/nightly/v3.0.x/
They are organized by date.
On Jan 11, 2018, at 11:34 AM, Vahid Askarpour <vh261...@dal.ca> wrote:
Hi Jeff,
I looked for the 3.0.1 version but I only found the 3.0.0 version available for
download. So I thought it may take a while for the 3.0.1 to become available.
Or did
fixed. This is one of the benefits of
> open source. :-)
>
> Here's where the 3.0.1 nightly snapshots are available for download:
>
> https://www.open-mpi.org/nightly/v3.0.x/
>
> They are organized by date.
>
>
>> On Jan 11, 2018, at 11:34 AM, Vahid Askarp
res)
> wrote:
>
> FWIW: If your Open MPI 3.0.x runs are reading data that was written by MPI IO
> via Open MPI 1.10.x or 1.8.x runs, the data formats may not be compatible
> (and could lead to errors like you're seeing -- premature end of file, etc.).
>
>
>> On Jan 18
>> Thanks
>>
>> Edgar
>>
>>
>> On 1/18/2018 5:17 PM, Jeff Squyres (jsquyres) wrote:
>>>
>>> On Jan 18, 2018, at 5:53 PM, Vahid Askarpour wrote:
>>>>
>>>> My Open MPI 3.0.x run (called the nscf run) was reading data from a
and I will still try to reproduce your problem on my system.
Thanks
Edgar
On 1/19/2018 7:15 AM, Vahid Askarpour wrote:
Gilles,
I have submitted that job with --mca io romio314. If it finishes, I will let
you know. It is sitting in Conte’s queue at Purdue.
As to Edgar’s question about the fil
e same data set that you are using? You could upload it to a webpage or similar
and just send me the link.
The second question/request: could you rerun your tests one more time, this
time forcing the use of ompio? e.g. --mca io ompio
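(For example, the full command might look something like the line below; the process count, executable, and file names are placeholders for illustration, not taken from this thread.)

  mpirun -np 64 --mca io ompio pw.x -input nscf.in > nscf.out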
Thanks
Edgar
On 1/19/2018 10:32 AM, Vahid Askarpour wrote:
To run EPW
e (run-tests-cp-parallel, run-tests-pw-parallel,
run-tests-ph-parallel, run-tests-epw-parallel), and they all passed with ompio
(and with romio314, although I only ran a subset of the tests with romio314).
Thanks
Edgar
On 01/19/2018 11:44 AM, Vahid Askarpour wrote:
Hi Edgar,
Just to let you know
probably need some help here from somebody who knows the
runtime better than me on what could go wrong at this point.
Thanks
Edgar
On 1/19/2018 1:22 PM, Vahid Askarpour wrote:
Concerning the following error:
from pw_readschemafile : error # 1
xml data file not found
The n
the IOF part, but I am pretty sure this has already
been fixed.
Does the issue also occur with GNU compilers?
There used to be an issue with the Intel Fortran runtime (short reads/writes were
silently ignored), and that was also fixed some time ago.
Cheers,
Gilles
Vahid Askarpour <vh261...@dal.ca> wrote:
Intel compilers
Cheers,
Gilles
Vahid Askarpour <vh261...@dal.ca> wrote:
Gilles,
I have not tried compiling the latest Open MPI with GCC. I am waiting to see how
the Intel version turns out before attempting GCC.
Cheers,
Vahid
On Jan 23, 2018, at 9:33 AM, Gilles
>
>
>> On Jan 30, 2018, at 2:09 PM, Vahid Askarpour wrote:
>>
>> This is just an update on how things turned out with openmpi-3.0.x.
>>
>> I compiled both EPW and Open MPI with Intel 14. In the past, EPW crashed with
>> both Intel 16 and 17. However, with
Hi,
I am running a Fortran code (Perturbo) compiled for hybrid OpenMP/Open MPI. The
code runs on 2 nodes (128 processors) with 32 MPI processes and 4 threads per MPI
process. I am attempting to verify that a variable involved in the calculations
has the same value in all the MPI processes and threads
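(A minimal sketch of one way to check this across MPI ranks, assuming the quantity is a double precision scalar; the name val, the tolerance, and the scaffolding are placeholders, not code from Perturbo. Thread-private copies could be compared the same way inside an OpenMP parallel region.)

  program check_val
    use mpi
    implicit none
    integer :: ierr, rank, provided
    double precision :: val, vmin, vmax

    call MPI_Init_thread(MPI_THREAD_FUNNELED, provided, ierr)
    call MPI_Comm_rank(MPI_COMM_WORLD, rank, ierr)

    val = 42.0d0   ! placeholder for the quantity computed by the real code

    ! compare the global minimum and maximum of val over all MPI ranks;
    ! if they differ by more than the tolerance, at least one rank disagrees
    call MPI_Allreduce(val, vmin, 1, MPI_DOUBLE_PRECISION, MPI_MIN, MPI_COMM_WORLD, ierr)
    call MPI_Allreduce(val, vmax, 1, MPI_DOUBLE_PRECISION, MPI_MAX, MPI_COMM_WORLD, ierr)

    if (rank == 0 .and. vmax - vmin > 1.0d-12) then
       print *, 'val differs across ranks: min/max =', vmin, vmax
    end if

    call MPI_Finalize(ierr)
  end program check_val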
> On 2/1/23 00:50, Vahid Askarpour via users wrote:
>> Hi,
>>
>> I am running a Fortran code (Perturbo) compiled for hybrid OpenMP/Open MPI.
>> The code runs on 2 nodes (128 processors) with 32 MPI processes and 4
>> threads per MPI process. I am attempting to