Peter
- Original Message -
>
> On Thu, Feb 11, 2016 at 02:17:40PM +, Peter Wind wrote:
> > I would add that the present situation is bound to give problems for some users.
> > It is natural to divide an array into segments, each process...
fa2004
> query: me=3, them=3, size=4, disp=1, base=0x100fa2008
> query: me=3, them=PROC_NULL, size=4, disp=1, base=0x100fa2000
> On Thu, Feb 11, 2016 at 8:55 AM, Jeff Hammond < jeff.scie...@gmail.com >
> wrote:
> > On Thu, Feb 11, 2016 at 8:46 AM, Nathan Hjelm < hje
overkill", or even "right")
Cheers,
Gilles
On Thursday, February 11, 2016, Jeff Hammond < jeff.scie...@gmail.com > wrote:
On Wed, Feb 10, 2016 at 8:44 AM, Peter Wind < peter.w...@met.no > wrote:
I agree that in practice the best practice would be to use Win
clearly not good at reading/interpreting the standard, so using
> MPI_Win_shared_query is my recommended way to get it to work.
> (feel free to call it "bulletproof", "overkill", or even "right")
> Cheers,
> Gilles
> On Thursday, February 11, 2016, Jeff H
is to use MPI_Win_shared_query after
> > MPI_Win_allocate_shared.
>
> > I do not know if the current behavior is a bug or a feature...
>
> > Cheers,
>
> > Gilles
>
> > On Wednesday, February 10, 2016, Peter Wind < peter.w...@met.no > wrote:
>
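For concreteness, here is a minimal Fortran sketch of the workaround recommended
in the messages above: call MPI_Win_shared_query after MPI_Win_allocate_shared to
obtain a usable base pointer even on ranks that contributed zero bytes. This is
not code from the thread; the program name, the element count, the 4-byte integer
assumption, and the choice of rank 0 as the owner are illustrative, and it assumes
an MPI-3 mpi module that provides the TYPE(C_PTR) interfaces.

program shared_query_sketch
  use mpi
  use, intrinsic :: iso_c_binding
  implicit none
  integer, parameter :: n = 100            ! element count, illustrative
  integer :: ierr, rank, win, disp_unit
  integer(kind=MPI_ADDRESS_KIND) :: winsize
  type(c_ptr) :: baseptr
  integer, pointer :: a(:)

  call MPI_Init(ierr)
  call MPI_Comm_rank(MPI_COMM_WORLD, rank, ierr)

  ! Only rank 0 contributes memory; all other ranks pass a size of zero.
  winsize = 0
  if (rank == 0) winsize = n * 4
  call MPI_Win_allocate_shared(winsize, 4, MPI_INFO_NULL, MPI_COMM_WORLD, &
                               baseptr, win, ierr)

  ! The pointer returned above may be null on the zero-size ranks, so ask
  ! explicitly for the start of rank 0's segment.
  call MPI_Win_shared_query(win, 0, winsize, disp_unit, baseptr, ierr)
  call c_f_pointer(baseptr, a, [n])

  if (rank == 0) a(1) = 42
  call MPI_Win_fence(0, win, ierr)
  print *, 'rank', rank, 'sees a(1) =', a(1)

  call MPI_Win_free(win, ierr)
  call MPI_Finalize(ierr)
end program shared_query_sketch

With this pattern it does not matter whether the pointer handed back by
MPI_Win_allocate_shared is valid on the zero-size ranks, which is exactly the
point being debated in the thread.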
Sorry, this was the wrong thread! Please disregard the last answer (1.8.5 and
1.10.2...)
Peter
- Original Message -
> I have tested 1.8.5 and 1.10.2; both fail (with both the Intel and GNU compilers).
> Peter
> - Original Message -
> > which version of Open MPI is this?
>
> > Thanks
I have tested 1.8.5 and 1.10.2; both fail (with both the Intel and GNU compilers).
Peter
- Original Message -
> which version of Open MPI is this?
> Thanks
> Edgar
> On 2/10/2016 4:13 AM, Delphine Ramalingom wrote:
> > Hello,
>
> > I am trying to compile a parallel version of HDF5.
>
> > I have
Sorry for that, here is the attachment!
Peter
- Original Message -
> Peter --
>
> Somewhere along the way, your attachment got lost. Could you re-send?
>
> Thanks.
>
>
> > On Feb 10, 2016, at 5:56 AM, Peter Wind wrote:
> >
> > Hi,
> >
- Original Message -
> Peter --
>
> Somewhere along the way, your attachment got lost. Could you re-send?
>
> Thanks.
>
>
> > On Feb 10, 2016, at 5:56 AM, Peter Wind wrote:
> >
> > Hi,
> >
> > Under Fortran, MPI_Win_allocate_shared...
Hi,
Under Fortran, MPI_Win_allocate_shared is called with a window size of zero for
some processes.
The output pointer is then not valid for these processes (null pointer).
Did I misunderstand this? Shouldn't the pointers be contiguous, so that even for
a zero-sized window the pointer points to a valid address within the shared
block?
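As a hedged illustration of the situation described above (this is not the
attached reproducer; the names and sizes are assumptions), the sketch below has
every rank contribute one 4-byte integer except the last, which passes size
zero, and then prints what MPI_Win_shared_query reports for every rank, in the
spirit of the "query: me=..., them=..." output quoted earlier.

program zero_size_sketch
  use mpi
  use, intrinsic :: iso_c_binding
  implicit none
  integer :: ierr, me, nproc, win, disp, them
  integer(kind=MPI_ADDRESS_KIND) :: wsize, qsize
  type(c_ptr) :: baseptr, qptr

  call MPI_Init(ierr)
  call MPI_Comm_rank(MPI_COMM_WORLD, me, ierr)
  call MPI_Comm_size(MPI_COMM_WORLD, nproc, ierr)

  ! Every rank contributes one 4-byte integer except the last, which passes 0.
  wsize = 4
  if (me == nproc - 1) wsize = 0
  call MPI_Win_allocate_shared(wsize, 4, MPI_INFO_NULL, MPI_COMM_WORLD, &
                               baseptr, win, ierr)

  ! On the zero-size rank the returned pointer may be null.
  if (.not. c_associated(baseptr)) &
     print *, 'rank', me, ': MPI_Win_allocate_shared returned a null pointer'

  ! Query every rank's segment, in the spirit of the "query:" lines above.
  do them = 0, nproc - 1
     call MPI_Win_shared_query(win, them, qsize, disp, qptr, ierr)
     print *, 'query: me=', me, ' them=', them, ' size=', qsize, ' disp=', disp
  end do

  call MPI_Win_free(win, ierr)
  call MPI_Finalize(ierr)
end program zero_size_sketch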
I will then try to reproduce the issue and investigate it.
> Cheers,
> Gilles
> On Tuesday, February 2, 2016, Peter Wind < peter.w...@met.no > wrote:
> > Thanks Gilles,
>
> > I get the following output (I
onent.
> Cheers,
> Gilles
> On Tuesday, February 2, 2016, Peter Wind < peter.w...@met.no > wrote:
> > Enclosed is a short (< 100 lines) Fortran code example that uses shared memory.
>
> > It seems to me it behaves wrongly when Open MPI is used.
>
> >
Enclosed is a short (< 100 lines) Fortran code example that uses shared memory.
It seems to me it behaves wrongly when Open MPI is used.
Compiled with SGI MPT, it gives the right result.
To trigger the failure, the code must be run on a single node.
It creates two groups of 2 processes each. Within each group, memory is shared
between the two processes...
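Since the attachment itself is not reproduced on this page, here is a rough
sketch of the setup described above; all names and sizes are assumptions, not
the actual test code. Four ranks are split into two groups of two with
MPI_Comm_split, and each group allocates its own shared window in which only
the first rank of the group contributes memory.

program two_group_sketch
  use mpi
  use, intrinsic :: iso_c_binding
  implicit none
  integer, parameter :: n = 10             ! elements per group, illustrative
  integer :: ierr, wrank, color, gcomm, grank, win, disp
  integer(kind=MPI_ADDRESS_KIND) :: wsize
  type(c_ptr) :: baseptr
  integer, pointer :: buf(:)

  call MPI_Init(ierr)
  call MPI_Comm_rank(MPI_COMM_WORLD, wrank, ierr)

  ! Ranks 0-1 form group 0 and ranks 2-3 form group 1 (run with 4 processes).
  color = wrank / 2
  call MPI_Comm_split(MPI_COMM_WORLD, color, wrank, gcomm, ierr)
  call MPI_Comm_rank(gcomm, grank, ierr)

  ! Within each group only the first rank contributes memory.
  wsize = 0
  if (grank == 0) wsize = n * 4
  call MPI_Win_allocate_shared(wsize, 4, MPI_INFO_NULL, gcomm, &
                               baseptr, win, ierr)
  call MPI_Win_shared_query(win, 0, wsize, disp, baseptr, ierr)
  call c_f_pointer(baseptr, buf, [n])

  if (grank == 0) buf(:) = color            ! each group tags its own window
  call MPI_Win_fence(0, win, ierr)
  print *, 'world rank', wrank, ' group', color, ' sees buf(1) =', buf(1)

  call MPI_Win_free(win, ierr)
  call MPI_Comm_free(gcomm, ierr)
  call MPI_Finalize(ierr)
end program two_group_sketch

Run on a single node with 4 MPI processes; each pair should then read, through
the shared window, the value written by the first rank of its own group.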