> On Tue, Apr 14, 2015 at 02:41:27PM -0400, Andy Riebs wrote:
> >Nick,
> >
> >You may have more luck looking into the OSHMEM layer of Open MPI;
> >SHMEM is designed for one-sided communications.
> >
> >BR,
> >Andy
> >
>
Sorry, never mind.
It seems it has been generalized (I found it on the wiki).
Thanks for the help.
2015-04-14 20:50 GMT+02:00 Nick Papior Andersen :
> Thanks Andy! I will discontinue my hunt in Open MPI then ;)
>
> Isn't SHMEM related only to shared memory nodes?
> Or am I wrong?
>
> You may have more luck looking into the OSHMEM layer of Open MPI;
> SHMEM is designed for one-sided communications.
>
> BR,
> Andy
>
>
> On 04/14/2015 02:36 PM, Nick Papior Andersen wrote:
>
> Dear all,
>
> I am trying to implement some features using a one-sided communication
> scheme.
>
> The problem is that I understand the different one-sided communication ...
Dear all,
I am trying to implement some features using a one-sided communication
scheme.
The problem is that I understand the different one-sided communication
schemes as follows (in basic words):
MPI_Get)
fetches remote window memory into a local memory space
MPI_Get_accumulate)
1. fetches remote window memory into a local result buffer, and
2. accumulates the local (origin) buffer into the remote window using the
given reduction operation
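For illustration, a minimal fence-synchronized sketch of the two calls (the
window contents, counts, target rank and the MPI_SUM operation are only
examples, not taken from the original mail; run with at least two processes):

  program onesided_sketch
    use mpi
    implicit none
    integer, parameter :: n = 10
    integer :: ierr, rank, win
    integer(kind=MPI_ADDRESS_KIND) :: winsize, disp
    integer :: base(n), fetched(n), acc_result(n), ones(n)

    call MPI_Init(ierr)
    call MPI_Comm_rank(MPI_COMM_WORLD, rank, ierr)

    base = rank          ! memory exposed through the window
    winsize = n * 4      ! window size in bytes (default 4-byte integers)
    call MPI_Win_create(base, winsize, 4, MPI_INFO_NULL, MPI_COMM_WORLD, &
                        win, ierr)
    disp = 0

    call MPI_Win_fence(0, win, ierr)
    if ( rank == 0 ) then
      ! MPI_Get: fetch rank 1's window into a local buffer
      call MPI_Get(fetched, n, MPI_INTEGER, 1, disp, n, MPI_INTEGER, win, ierr)
    end if
    call MPI_Win_fence(0, win, ierr)

    if ( rank == 0 ) then
      ones = 1
      ! MPI_Get_accumulate: fetch rank 1's window into acc_result and, in
      ! the same call, add "ones" into the remote window
      call MPI_Get_accumulate(ones, n, MPI_INTEGER, acc_result, n, MPI_INTEGER, &
                              1, disp, n, MPI_INTEGER, MPI_SUM, win, ierr)
    end if
    call MPI_Win_fence(0, win, ierr)

    call MPI_Win_free(win, ierr)
    call MPI_Finalize(ierr)
  end program onesided_sketch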
As it says, check the config.log for any error messages. I have not had any
problems using an external hwloc on my Debian boxes.
2015-03-18 1:30 GMT+00:00 Peter Gottesman :
> Hey all,
> I am trying to compile Open MPI on a 32-bit laptop running Debian wheezy
> 7.8.0. When I
>
>> ../ompi-master/configure ...
> ... these required components is not available, then
> the user must build the needed component before proceeding with the
> ScaLAPACK installation."
>
> Thank you,
>
> On Fri, Mar 6, 2015 at 9:36 AM, Nick Papior Andersen wrote:
>
>> Do you plan to use BLACS for anything other than ScaLAPACK?
Do you plan to use BLACS for anything other than ScaLAPACK?
Otherwise I would highly recommend that you just compile ScaLAPACK 2.0.2,
which has BLACS shipped with it :)
2015-03-06 15:31 GMT+01:00 Irena Johnson :
> Hello,
>
> I am trying to build BLACS for openmpi-1.8.4 and intel/2015.u1
>
> The compilation ...
> To: us...@open-mpi.org
> Subject: Re: [OMPI users] configuring a code with MPI/OPENMPI
>
> I also concur with Jeff about asking software-specific questions at the
> software's site; abinit already has a pretty active forum:
> http://forum.abinit.org/
> So any questions can also be directed there.
I also concur with Jeff about asking software-specific questions at the
software's site; abinit already has a pretty active forum:
http://forum.abinit.org/
So any questions can also be directed there.
2015-02-03 19:20 GMT+00:00 Nick Papior Andersen :
>
>
> 2015-02-03 19:12 GMT+00:00 Elio Physics :
2015-02-03 19:12 GMT+00:00 Elio Physics :
> Hello,
>
> thanks for your help. I have tried:
>
> ./configure --with-mpi-prefix=/usr FC=ifort CC=icc
>
> But I still get the same error. Mind you, if I compile it serially, that
> is, ./configure FC=ifort CC=icc,
>
> It works perfectly fine.
>
> We do
First, try to correct your compilation by using the Intel C compiler AND
the Intel Fortran compiler; you should not mix compilers:
CC=icc FC=ifort
Otherwise the config.log is going to be necessary to debug it further.
PS: You could also try to convince your cluster administrator to provide a
more recent compiler.
Because the compiler does not know that you want to send the entire
sub-matrix, passing non-contiguous arrays to a function is, at best,
dangerous; do not do that unless you know the function can handle it.
Do AA(1,1,2) and then it works (in principle you then pass the starting
memory location of that contiguous sub-block).
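A minimal sketch of the AA(1,1,2) idiom (the array shape and counts are made
up for the example; run with two processes). Since AA(:,:,2) is one complete,
contiguous slab, passing its first element hands MPI the correct starting
address and no temporary copy is created:

  program send_slab
    use mpi
    implicit none
    integer, parameter :: n1 = 4, n2 = 4, n3 = 3
    double precision :: AA(n1,n2,n3)
    integer :: ierr, rank

    call MPI_Init(ierr)
    call MPI_Comm_rank(MPI_COMM_WORLD, rank, ierr)
    AA = dble(rank)

    if ( rank == 0 ) then
      ! Send the slab AA(:,:,2) by passing its first element and the count
      call MPI_Send(AA(1,1,2), n1*n2, MPI_DOUBLE_PRECISION, 1, 0, &
                    MPI_COMM_WORLD, ierr)
    else if ( rank == 1 ) then
      call MPI_Recv(AA(1,1,2), n1*n2, MPI_DOUBLE_PRECISION, 0, 0, &
                    MPI_COMM_WORLD, MPI_STATUS_IGNORE, ierr)
    end if

    call MPI_Finalize(ierr)
  end program send_slab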
I have been following this with great interest, so I will create a PR for my
branch then.
To be clear, I already did the OMPI change before this discussion came up,
so this will be the one; however, the change to other naming schemes is easy.
2014-12-19 7:48 GMT+00:00 George Bosilca :
>
> On Thu, D
Just to drop in:
I can and will provide whatever interface you want (if you want my
contribution).
However, just to help my ignorance:
1. Adam Moody's method still requires a way to create a distinguished
string per processor, i.e. the split is entirely done via the string/color,
which then needs ...
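The message is cut off above. Purely as an illustration of "the split is
done entirely via the string/color" (this is not Adam Moody's proposed
interface), one can reduce the processor name to an integer color and feed
it to MPI_Comm_split; a naive hash can collide, which is exactly why a true
string-based split is attractive:

  program split_by_node_name
    use mpi
    implicit none
    integer :: ierr, rank, namelen, i, color, node_comm, node_rank
    character(len=MPI_MAX_PROCESSOR_NAME) :: name

    call MPI_Init(ierr)
    call MPI_Comm_rank(MPI_COMM_WORLD, rank, ierr)
    call MPI_Get_processor_name(name, namelen, ierr)

    ! Reduce the node name to an integer color (a simple hash); ranks on
    ! the same node get the same color (collisions are possible).
    color = 0
    do i = 1, namelen
      color = mod(color * 31 + ichar(name(i:i)), 2**20)
    end do

    call MPI_Comm_split(MPI_COMM_WORLD, color, rank, node_comm, ierr)
    call MPI_Comm_rank(node_comm, node_rank, ierr)

    call MPI_Comm_free(node_comm, ierr)
    call MPI_Finalize(ierr)
  end program split_by_node_name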
If you need any more information about my setup to debug this, please let
me know.
Or am I completely missing something?
I tried looking into opal/mca/hwloc/hwloc.h, but I have no idea whether
they are correct or not.
If you think it is useful, I can make a pull request at its current stage?
2014-11-27 13:22 GMT
> On Nov 27, 2014, at 8:12 AM, Nick Papior Andersen
> wrote:
>
> > Sure, I will make the changes and commit to make them OMPI specific.
> >
> > I will forward my problems to the devel list.
> >
> > I will keep you posted. :)
> >
> > 2014-11-27 13:58
Sure, I will make the changes and commit to make them OMPI specific.
I will forward my problems to the devel list.
I will keep you posted. :)
2014-11-27 13:58 GMT+01:00 Jeff Squyres (jsquyres) :
> On Nov 26, 2014, at 2:08 PM, Nick Papior Andersen
> wrote:
>
> > Here i
Dear Ralph (all ;))
In regard to these posts, and since you added it to your todo list:
I wanted to do something similar and implemented a "quick fix".
I wanted to create a communicator per node, and then create a window to
allocate an array in shared memory; however, I came up short in the c...
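The message is cut off here. For context, a minimal MPI-3 sketch of the
approach described (per-node communicator plus a shared-memory window); the
array length and double-precision data are made up, and the C_PTR-based
binding of MPI_Win_allocate_shared/MPI_Win_shared_query from the mpi module
is assumed to be available:

  program node_shared_window
    use mpi
    use, intrinsic :: iso_c_binding, only : c_ptr, c_f_pointer
    implicit none
    integer, parameter :: n = 1000
    integer :: ierr, node_comm, node_rank, win, disp_unit
    integer(kind=MPI_ADDRESS_KIND) :: winsize
    type(c_ptr) :: baseptr
    double precision, pointer :: shared(:)

    call MPI_Init(ierr)

    ! One communicator per node (all ranks that can share memory)
    call MPI_Comm_split_type(MPI_COMM_WORLD, MPI_COMM_TYPE_SHARED, 0, &
                             MPI_INFO_NULL, node_comm, ierr)
    call MPI_Comm_rank(node_comm, node_rank, ierr)

    ! Rank 0 on each node allocates the array; the others allocate 0 bytes
    winsize = 0
    if ( node_rank == 0 ) winsize = n * 8
    call MPI_Win_allocate_shared(winsize, 8, MPI_INFO_NULL, node_comm, &
                                 baseptr, win, ierr)

    ! Every rank queries rank 0's segment and maps it to a Fortran pointer
    call MPI_Win_shared_query(win, 0, winsize, disp_unit, baseptr, ierr)
    call c_f_pointer(baseptr, shared, [n])

    call MPI_Win_fence(0, win, ierr)
    if ( node_rank == 0 ) shared = 1.0d0
    call MPI_Win_fence(0, win, ierr)

    call MPI_Win_free(win, ierr)
    call MPI_Finalize(ierr)
  end program node_shared_window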
You should redo it in terms of George's suggestion; that way you also
circumvent the "manual" alignment of data. George's method is the best
generic way of doing it.
As for the -r8 thing, just do not use it :)
And check the interface of the routines used to see why MPIstatus is used.
It is attached in the previous mail.
2014-10-03 16:47 GMT+00:00 Diego Avesani :
> Dear N.,
> thanks for the explanation.
>
> Really, really sorry, but I am not able to see your example. Where is it?
>
> thanks again
>
>
> Diego
>
>
> On 3 October 2014
right?
> What do you suggest as next step?
>
??? The example I sent you worked perfectly.
Good luck!
> I could create a type variable and try to send it from a processor to
> another with MPI_SEND and MPI_RECV?
>
> Again, thanks
>
> Diego
>
>
> On 3 October 2014 18:04,
Dear Diego,
Instead of instantly going for Cartesian communicators, you should try to
create a small test case, something like this:
I have successfully run this small snippet on my machine.
As I state in the source, the culprit was the integer address size; it is
inherently of type long.
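The snippet itself is not included in this excerpt. A minimal sketch of such
a test case (the component names are made up; the block lengths 2, 2, 4
follow the later discussion), using MPI_Get_address so the displacements get
the proper kind, INTEGER(KIND=MPI_ADDRESS_KIND); run with two processes:

  program send_particle_type
    use mpi
    implicit none
    type :: particle
      sequence
      integer :: ik(2)
      double precision :: rp(2), qq(4)
    end type particle

    type(particle) :: dummy
    integer :: ierr, rank, mpi_particle_type
    integer :: types(3), nblocks(3)
    integer(kind=MPI_ADDRESS_KIND) :: disp(3), base

    call MPI_Init(ierr)
    call MPI_Comm_rank(MPI_COMM_WORLD, rank, ierr)

    types   = [MPI_INTEGER, MPI_DOUBLE_PRECISION, MPI_DOUBLE_PRECISION]
    nblocks = [2, 2, 4]

    ! Let MPI compute the byte offsets instead of summing sizeof() by hand
    call MPI_Get_address(dummy%ik, disp(1), ierr)
    call MPI_Get_address(dummy%rp, disp(2), ierr)
    call MPI_Get_address(dummy%qq, disp(3), ierr)
    base = disp(1)
    disp = disp - base

    call MPI_Type_create_struct(3, nblocks, disp, types, mpi_particle_type, ierr)
    call MPI_Type_commit(mpi_particle_type, ierr)

    if ( rank == 0 ) then
      dummy%ik = 1 ; dummy%rp = 2.d0 ; dummy%qq = 3.d0
      call MPI_Send(dummy, 1, mpi_particle_type, 1, 0, MPI_COMM_WORLD, ierr)
    else if ( rank == 1 ) then
      call MPI_Recv(dummy, 1, mpi_particle_type, 0, 0, MPI_COMM_WORLD, &
                    MPI_STATUS_IGNORE, ierr)
    end if

    call MPI_Type_free(mpi_particle_type, ierr)
    call MPI_Finalize(ierr)
  end program send_particle_type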
last thing. Not second to
last, really _the_ last thing. :)
I hope I made my point clear; if not, I am at a loss... :)
>
>
> On 3 October 2014 17:03, Nick Papior Andersen
> wrote:
>
>> selected_real_kind
>
>
>
>
> Diego
>
>
Might I chip in and ask "why in the name of Fortran are you using -r8"?
It seems like you do not really need it; it is more a convenience flag for
you (so that you have to type less?).
Again, as I stated in my previous mail, I would never do that (and would
discourage the use of it for almost any code).
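A minimal sketch of the alternative (standard Fortran only): declare the
kind once via selected_real_kind and use it explicitly, so the precision is
controlled by the source code rather than by a compiler flag:

  module precision_mod
    implicit none
    ! Double precision: at least 15 digits, exponent range 307
    integer, parameter :: dp = selected_real_kind(15, 307)
  end module precision_mod

  program use_precision
    use precision_mod, only : dp
    implicit none
    real(dp) :: x
    x = 1.0_dp / 3.0_dp
    print *, x
  end program use_precision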
> *** and potentially your MPI job)
>
> Do you know something about this errors?
>
> Thanks again
>
> Diego
>
>
> On 3 October 2014 15:29, Nick Papior Andersen
> wrote:
>
>> Yes, I guess this is correct. Testing is easy! Try testing!
>> As I stated, I do
> TYPES(1)=MPI_INTEGER
> TYPES(2)=MPI_DOUBLE_PRECISION
> TYPES(3)=MPI_DOUBLE_PRECISION
> nBLOCKS(1)=2
> nBLOCKS(2)=2
> nBLOCKS(3)=4
>
> Am I wrong? Have I understood it correctly?
>
> Really, really thanks
>
>
> Diego
>
>
> On 3 October 2014 15:10, Nick Papior Andersen wrote:
> ...ke)+sizeof(dummy%RP(1))+sizeof(dummy%RP(2))
>
> CALL
> MPI_TYPE_CREATE_STRUCT(3,nBLOCKS,DISPLACEMENTS,TYPES,MPI_PARTICLE_TYPE,MPI%ierr)
>
> This is how I compile
>
> mpif90 -r8 *.f90
>
No, that was not what you said!
You said you compiled it using:
mpif90 -r8 -i8 *.f90
> MPI_ERR_OTHER: known error not in list
> [diedroLap:12267] *** MPI_ERRORS_ARE_FATAL (processes in this communicator
> will now abort,
> [diedroLap:12267] ***and potentially your MPI job)
>
>
>
> What I can do?
> Thanks a lot
>
>
>
> On 3 October 2014 08:15, Nick Papior Andersen wrote:
If misalignment is the case, then adding "sequence" to the data type might
help.
So:
type :: my_type   ! (placeholder name)
   sequence
   integer :: ...
   real :: ...
end type my_type
Note that you cannot use this alignment trick on types with allocatable or
pointer components, for obvious reasons.
2014-10-03 0:39 GMT+00:00 Kawashima, Takahiro :
> Hi Diego,
d of 445) and the
>>> process exits abnormally. Does anyone have a similar experience?
>>>
>>> On Thu, Sep 18, 2014 at 10:07 PM, XingFENG wrote:
>>>
>>> Thank you for your reply! I am still working on my code
On Thu, Sep 18, 2014 at 10:07 PM, XingFENG
> wrote:
>
>> Thank you for your reply! I am still working on my code. I will update
>> the post when I fix the bugs.
>>
>> On Thu, Sep 18, 2014 at 9:48 PM, Nick Papior Andersen <
>> nickpap...@gmail.com> wrote:
>>
>
2014-09-18 13:39 GMT+02:00 Tobias Kloeffel :
> OK, I have to wait until tomorrow; they have some problems with the
> network...
>
>
>
>
> On 09/18/2014 01:27 PM, Nick Papior Andersen wrote:
>
> I am not sure whether the test will cover this... You should check it...
>
>
>
test to see if it works...
2014-09-18 13:20 GMT+02:00 XingFENG :
> Thanks very much for your reply!
>
> To Sir Jeff Squyres:
>
> I think it fails due to truncation errors. I am now logging information of
> each send and receive to find out the reason.
>
>
>
>
> To
In complement to Jeff, I would add that using asynchronous messages
REQUIRES that you wait (MPI_Wait) for all messages at some point. Even
though this might not seem obvious, it is due to memory allocations "behind
the scenes" which are only de-allocated upon completion through a wait
statement.
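A minimal sketch of the pattern (the buffer size and the ring exchange are
made up): every MPI_Isend/MPI_Irecv returns a request that must eventually
be completed with MPI_Wait/MPI_Waitall, which is also where the library may
release its internal resources:

  program isend_wait
    use mpi
    implicit none
    integer, parameter :: n = 100
    integer :: ierr, rank, nprocs, right, left
    integer :: requests(2)
    double precision :: sendbuf(n), recvbuf(n)

    call MPI_Init(ierr)
    call MPI_Comm_rank(MPI_COMM_WORLD, rank, ierr)
    call MPI_Comm_size(MPI_COMM_WORLD, nprocs, ierr)

    right = mod(rank + 1, nprocs)
    left  = mod(rank - 1 + nprocs, nprocs)
    sendbuf = dble(rank)

    ! Post the nonblocking operations...
    call MPI_Irecv(recvbuf, n, MPI_DOUBLE_PRECISION, left,  0, &
                   MPI_COMM_WORLD, requests(1), ierr)
    call MPI_Isend(sendbuf, n, MPI_DOUBLE_PRECISION, right, 0, &
                   MPI_COMM_WORLD, requests(2), ierr)

    ! ...do other work here...

    ! ...and ALWAYS complete them; only at the wait are the buffers safe to
    ! reuse and the internal request resources released.
    call MPI_Waitall(2, requests, MPI_STATUSES_IGNORE, ierr)

    call MPI_Finalize(ierr)
  end program isend_wait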
> ... (data source: default, level: 9 dev/all, type: string)
> Comma-separated list of ranges specifying logical
> cpus allocated to this job [default: none]
> MCA hwloc: parameter "hwloc_base_use_hwthreads ...
Dear all,
The maffinity and paffinity parameters have been removed since 1.7.
For the uninitiated: is this because they have been absorbed into the code,
so that the code now automatically decides on these values?
For instance, I have always been using paffinity_alone=1 for single-node
jobs with entire occupation ...