On 05/22/2013 12:37 PM, Ralph Castain wrote:
Well, ROMIO was written by Argonne/MPICH (unfair to point the finger solely at
Rob) and picked up by pretty much everyone. The issue isn't a bug in MPIIO, but rather
Ok, sorry about that!
Thanks for the historical and technical information!
Eric
I was afraid that was the case. Too bad, because applications (and the
files they use), are getting much too big for the 32 bit limit.
T. Rosmond
On Wed, 2013-05-22 at 09:37 -0700, Ralph Castain wrote:
> On May 22, 2013, at 9:23 AM, Eric Chamberland wrote:
>
> > On 05/22/2013 11:33 AM, Tom
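A minimal sketch of the kind of collective MPI-IO write that runs into the 32-bit limit mentioned above: once per-rank offsets pass 2 GiB they only fit in a 64-bit MPI_Offset. The file name, block size, and data layout here are illustrative assumptions, not code from the thread.

#include <mpi.h>
#include <stdlib.h>

int main(int argc, char **argv)
{
    MPI_Init(&argc, &argv);

    int rank;
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);

    /* 1 GiB per rank; with three or more ranks the file exceeds 2 GiB. */
    const MPI_Offset block = (MPI_Offset)1 << 30;
    char *buf = calloc((size_t)block, 1);

    MPI_File fh;
    MPI_File_open(MPI_COMM_WORLD, "bigfile.dat",
                  MPI_MODE_CREATE | MPI_MODE_WRONLY, MPI_INFO_NULL, &fh);

    /* The offset must be computed in 64-bit MPI_Offset arithmetic;
     * anywhere the I/O layer narrows it to a 32-bit int, writes past
     * 2 GiB break. */
    MPI_Offset offset = (MPI_Offset)rank * block;
    MPI_File_write_at_all(fh, offset, buf, (int)block, MPI_CHAR,
                          MPI_STATUS_IGNORE);

    MPI_File_close(&fh);
    free(buf);
    MPI_Finalize();
    return 0;
}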
On May 22, 2013, at 9:23 AM, Eric Chamberland wrote:
> On 05/22/2013 11:33 AM, Tom Rosmond wrote:
>> Thanks for the confirmation of the MPIIO problem. Interestingly, we
>> have the same problem when using MPIIO in INTEL MPI. So something
>> fundamental seems to be wrong.
>>
>
> I think but
We’re seeing some abnormal performance behavior when running an OpenMPI 1.4.4
application on RH6.4 using Mellanox OFED 1.5.3. Under certain circumstances,
system CPU starts dominating and performance tails off severely. This behavior
does not happen with the same job run with TCP. Is there
On 5/22/2013 11:34 AM, Paul Kapinos wrote:
On 05/22/13 17:08, Blosch, Edwin L wrote:
Apologies for not exploring the FAQ first.
No comments =)
If I want to use Intel or PGI compilers but link against the OpenMPI
that ships with RedHat Enterprise Linux 6 (compiled with g++ I
presume), are
On 05/22/2013 11:33 AM, Tom Rosmond wrote:
Thanks for the confirmation of the MPIIO problem. Interestingly, we
have the same problem when using MPIIO in INTEL MPI. So something
fundamental seems to be wrong.
I think but I am not sure that it is because the MPI I/O (ROMIO) code is
the same f
On 05/22/13 17:08, Blosch, Edwin L wrote:
Apologies for not exploring the FAQ first.
No comments =)
If I want to use Intel or PGI compilers but link against the OpenMPI that ships
with RedHat Enterprise Linux 6 (compiled with g++ I presume), are there any
issues to watch out for, during l
If you are only using the C API there will be no issues. There are no
guarantees with C++ or Fortran.
-Nathan Hjelm
HPC-3, LANL
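To illustrate Nathan's point about the C API: a pure-C MPI program like the sketch below uses only the C ABI, so it can be compiled with Intel or PGI and still link against the gcc-built Open MPI shipped with RHEL 6, for example by pointing the wrapper compiler at another compiler via OMPI_CC. The build line and file name are assumptions for illustration.

#include <mpi.h>
#include <stdio.h>

/* hello_c.c: touches only the C MPI API, so the compiler used to build
 * it does not have to match the one that built libmpi.
 * Assumed build line:  OMPI_CC=icc mpicc hello_c.c -o hello_c */
int main(int argc, char **argv)
{
    int rank, size;
    MPI_Init(&argc, &argv);
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);
    MPI_Comm_size(MPI_COMM_WORLD, &size);
    printf("rank %d of %d\n", rank, size);
    MPI_Finalize();
    return 0;
}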
On Wed, May 22, 2013 at 03:08:31PM +0000, Blosch, Edwin L wrote:
> Apologies for not exploring the FAQ first.
>
>
>
> If I want to use Intel or PGI compilers but lin
I have experienced the same problem... and worse, I have discovered a bug
in MPI I/O...
look here:
http://trac.mpich.org/projects/mpich/ticket/1742
and here:
http://www.open-mpi.org/community/lists/users/2012/10/20511.php
Eric
On 05/21/2013 03:18 PM, Tom Rosmond wrote:
Hello:
A colleague an
Apologies for not exploring the FAQ first.
If I want to use Intel or PGI compilers but link against the OpenMPI that ships
with RedHat Enterprise Linux 6 (compiled with g++ I presume), are there any
issues to watch out for, during linking?
Thanks,
Ed