Hi Siegmar,
A similar issue was reported in MPICH with xlf compilers:
http://trac.mpich.org/projects/mpich/ticket/2144
They concluded this is a compiler issue (e.g. the compiler does not
implement TS 29113 subclause 8.1)
Jeff,
I made PR 315 https://github.com/open-mpi/ompi/pull/315 for the f08 bindings.
Hi Gilles,
> a similar issue was reported in mpich with xlf compilers :
> http://trac.mpich.org/projects/mpich/ticket/2144
>
> They concluded this is a compiler issue (e.g. the compiler does not
> implement TS 29113 subclause 8.1)
Thank you very much. I'll report the problem to Oracle and perhap
Hi Brice,
- Original message -
> From: "Brice Goglin"
> CC: "Open MPI Users"
> Sent: Thursday, December 11, 2014 19:46:44
> Subject: Re: [OMPI users] OpenMPI 1.8.4 and hwloc in Fedora 14 using a beta
> gcc 5.0 compiler.
>
> This problem was fixed in hwloc upstream recently.
>
> ht
On 15/12/2014 10:35, Jorge D'Elia wrote:
> Hi Brice,
>
> - Original message -
>> From: "Brice Goglin"
>> CC: "Open MPI Users"
>> Sent: Thursday, December 11, 2014 19:46:44
>> Subject: Re: [OMPI users] OpenMPI 1.8.4 and hwloc in Fedora 14 using a beta
>> gcc 5.0 compiler.
>>
>> This problem was fixed in hwloc upstream recently.
FWIW, if it would be easier, we can just pull a new hwloc tarball -- that's how
we've done it in the past (vs. trying to pull individual patches). It's also
easier to pull a release tarball, because then we can say "hwloc vX.Y.Z is in
OMPI vA.B.C", rather than have to try to examine/explain wha
Hi Gilles,
here is a simple setup to make valgrind complain now:
export too_long=./this/is/a_very/long/path/that/contains/a/not/so/long/filename/but/trying/to/collectively/mpi_file_open/it/you/will/have/a/memory/corruption/resulting/of/invalide/writing/or/reading/past/the/end/of/one/or/some/h
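For reference, here is a minimal C sketch of this kind of reproducer. It assumes the too_long variable exported above; it is not Eric's actual test program, just an illustration of a collective open on an over-long path.

/* Illustrative reproducer sketch: collectively open a file whose path is
 * longer than ROMIO's internal buffer, then close it. */
#include <mpi.h>
#include <stdio.h>
#include <stdlib.h>

int main(int argc, char **argv)
{
    MPI_File fh;
    char *path = getenv("too_long");   /* the long path exported above */
    int rc;

    MPI_Init(&argc, &argv);

    if (path == NULL) {
        fprintf(stderr, "please export the too_long variable first\n");
        MPI_Abort(MPI_COMM_WORLD, 1);
    }

    /* Collective open: with a path longer than ROMIO's internal buffer,
     * valgrind reports invalid reads/writes past the end of a heap block. */
    rc = MPI_File_open(MPI_COMM_WORLD, path,
                       MPI_MODE_CREATE | MPI_MODE_RDWR, MPI_INFO_NULL, &fh);
    if (rc == MPI_SUCCESS) {
        MPI_File_close(&fh);
    }

    MPI_Finalize();
    return 0;
}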
Sorry, I should have been clearer - that was indeed what I was expecting to
see. I guess it raises the question - should we just update to something like
1.9 so Brice doesn't have to worry about backporting future fixes this far
back?
On Mon, Dec 15, 2014 at 7:22 AM, Jeff Squyres (jsquyres) wrote:
It's your call, v1.8 RM. :-)
On the one hand, we've tried to stick with a consistent version of hwloc
through an entire version series.
But on the other hand, hwloc is wholly internal and shouldn't be visible to
apps. So it *might* be harmless to upgrade it.
The only real question is: will upgrading hwloc break anything else inside
the v1.8 tree? E.g., did new hwloc abstractions/APIs come in after v1.7 that
we've adapted to on the trunk, but didn't adapt to on the v1.8 branch?
On 15/12/2014 16:39, Jeff Squyres (jsquyres) wrote:
> The only real question is: will upgrading hwloc break anything else inside
> the v1.8 tree? E.g., did new hwloc abstractions/APIs come in after v1.7 that
> we've adapted to on the trunk, but didn't adapt to on the v1.8 branch?
I wouldn't
Yeah, I recall it was quite clean when I did the upgrade on the trunk. I
may take a pass at it and see if anything breaks since it is so easy now to
do. :-)
On Mon, Dec 15, 2014 at 8:17 AM, Brice Goglin wrote:
>
> On 15/12/2014 16:39, Jeff Squyres (jsquyres) wrote:
> > The only real question is: will upgrading hwloc break anything else inside
> > the v1.8 tree?
George,
Thanks for the tip. In fact, calling mpi_comm_spawn right away with
MPI_COMM_SELF has worked for me just as well -- no subgroups needed at all.
I am testing this openmpi app named "siesta" in parallel. The source code
is available,
so making it "spawn ready" by adding the pair mpi_comm_g
You should be able to just include that in your argv that you pass to the
Comm_spawn API.
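For illustration only, here is a minimal C sketch of what such a spawn call can look like. The child name "siesta", the argument string, and the process count are placeholders, not the actual code from this thread; note that argv entries are delivered to the spawned processes as plain command-line arguments, they are not run through a shell.

#include <mpi.h>

int main(int argc, char **argv)
{
    MPI_Comm child;
    char *child_argv[] = { "infile", NULL };   /* NULL-terminated argument list */

    MPI_Init(&argc, &argv);

    /* Spawn 2 copies of the (placeholder) program "siesta" from this single
     * process; the argv entries above arrive as ordinary arguments. */
    MPI_Comm_spawn("siesta", child_argv, 2, MPI_INFO_NULL,
                   0, MPI_COMM_SELF, &child, MPI_ERRCODES_IGNORE);

    /* ... exchange data with the children over the "child" intercommunicator ... */

    MPI_Comm_disconnect(&child);
    MPI_Finalize();
    return 0;
}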
On Mon, Dec 15, 2014 at 9:27 AM, Alex A. Schmidt wrote:
>
> George,
>
> Thanks for the tip. In fact, calling mpi_comm_spawn right away with
> MPI_COMM_SELF has worked for me just as well -- no subgroups needed at all.
Ralph,
I guess you mean "call mpi_comm_spawn('siesta', '< infile', 2, ...)"
to execute 'mpirun -n 2 siesta < infile' on the spawnee side. That was
my first choice. Well, siesta behaves as if no stdin file was present...
Alex
2014-12-15 17:07 GMT-02:00 Ralph Castain :
>
> You should be able to just include that in your argv that you pass to the
> Comm_spawn API.
Eric,
thanks for the simple test program.
I think I see what is going wrong and I will make some changes to avoid
the memory overflow.
That being said, there is a hard-coded limit of 256 characters, and your
path is bigger than 300 characters.
Bottom line, and even if there is no more memory overflow
Eric and all,
That is clearly a limitation in ROMIO, and this is being tracked at
https://trac.mpich.org/projects/mpich/ticket/2212
In the meantime, what we can do in Open MPI is update
mca_io_romio_file_open() to fail with a user-friendly error message
if strlen(filename) is larger than 225.
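For illustration, here is a simplified sketch of the kind of guard being described. The constant and helper name are made up; this is not the real mca_io_romio_file_open() code, which would go through Open MPI's own error-reporting machinery.

#include <mpi.h>
#include <string.h>

#define ROMIO_FILENAME_LIMIT 225   /* limit mentioned above; constant name is illustrative */

/* Hypothetical helper, not the real Open MPI code: reject over-long file
 * names up front instead of letting them overflow an internal buffer. */
static int check_filename_length(const char *filename)
{
    if (filename == NULL || strlen(filename) > ROMIO_FILENAME_LIMIT) {
        /* The real component would print a user-friendly message here
         * before returning the error to the caller. */
        return MPI_ERR_BAD_FILE;
    }
    return MPI_SUCCESS;
}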