Re: [OMPI users] Calling a variable from another processor
Hello,

Find attached a minimal example - hopefully doing what you intended.

Regards
Christoph

--
Christoph Niethammer
High Performance Computing Center Stuttgart (HLRS)
Nobelstrasse 19
70569 Stuttgart

Tel: ++49(0)711-685-87203
email: nietham...@hlrs.de
http://www.hlrs.de/people/niethammer

----- Original Mail -----
From: "Pradeep Jha"
To: "Open MPI Users"
Sent: Friday, 10 January 2014 10:23:40
Subject: Re: [OMPI users] Calling a variable from another processor

Thanks for your responses. I am still not able to figure it out. I will further simplify my problem statement. Can someone please help me with a Fortran 90 code for that?

1) I have N processors, each with an array A of size S.
2) On any random processor (say rank X), I calculate two integer values, Y and Z (0 <= Y < N, 0 <= Z < S).
3) I want to get the value of A(Z) on processor Y to processor X.

One-sided is quite simple to understand. It is like file I/O: you read/write (get/put) to a memory object. If you want to make it hard to screw up, use passive target and wrap your calls in lock/unlock so every operation is globally visible where it's called. I've never deadlocked RMA, while p2p is easy to hang for nontrivial patterns unless you only do nonblocking plus waitall.

If one finds MPI too hard to learn, there are both GA/ARMCI and OpenSHMEM implementations over MPI-3 already (I wrote both...). The bigger issue is that Open MPI doesn't support MPI-3 RMA, just the MPI-2 RMA stuff, and even then, datatypes are broken with RMA. Both ARMCI-MPI3 and OSHMPI (OpenSHMEM over MPI-3) require a late-model MPICH derivative to work, but these are readily available on every platform normal people use (BGQ is the only system missing, and that will be resolved soon). I've run MPI-3 on my Mac (MPICH), clusters (MVAPICH), Cray (CrayMPI), and SGI (MPICH).
Best,
Jeff

Sent from my iPhone

> On Jan 9, 2014, at 5:39 AM, "Jeff Squyres (jsquyres)" <jsquy...@cisco.com> wrote:
>
> MPI one-sided stuff is actually pretty complicated; I wouldn't suggest it for a beginner (I don't even recommend it for many MPI experts ;-) ).
>
> Why not look at the MPI_SOURCE in the status that you got back from the MPI_RECV? In Fortran, it would look something like (typed off the top of my head; forgive typos):
>
> -
> integer, dimension(MPI_STATUS_SIZE) :: status
> ...
> call MPI_Recv(buffer, ..., status, ierr)
> -
>
> The rank of the sender will be in status(MPI_SOURCE).
>
>> On Jan 9, 2014, at 6:29 AM, Christoph Niethammer <nietham...@hlrs.de> wrote:
>>
>> Hello,
>>
>> I suggest you have a look at the MPI one-sided functionality (Section 11 of the MPI 3.0 spec).
>> Create a window to allow the other processes to access the arrays A directly via MPI_Get/MPI_Put.
>> Be aware of synchronization, which you have to implement via MPI_Win_fence or manual locking.
>>
>> Regards
>> Christoph
>>
>> --
>> Christoph Niethammer
>> High Performance Computing Center Stuttgart (HLRS)
>> Nobelstrasse 19
>> 70569 Stuttgart
>>
>> Tel: ++49(0)711-685-87203
>> email: nietham...@hlrs.de
>> http://www.hlrs.de/people/niethammer
>>
>> ----- Original Mail -----
>> From: "Pradeep Jha" <prad...@ccs.engg.nagoya-u.ac.jp>
>> To: "Open MPI Users" <us...@open-mpi.org>
>> Sent: Thursday, 9 January 2014 12:10:51
>> Subject: [OMPI users] Calling a variable from another processor
>>
>> I am writing a parallel program in Fortran 77. I have the following problem:
>> 1) I have N number of processors.
>> 2) Each processor contains an array A of size S.
>> 3) Using some function, on every processor (say rank X), I calculate the value of two integers Y and Z, where Z<S (Y and Z can be different on every processor).
>> 4) I want to get the value of A(Z) on processor Y to processor X.
>>
>> I thought of first sending the numerical value X to processor Y from processor X, and then sending A(Z) from processor Y to processor X. But it is not possible, as processor Y does not know the numerical value X, and so it won't know from which processor to receive it.
>>
>> I tried but I haven't been able to come up with any code which can implement this action. So I am not posting any codes.
>>
>> Any suggestions?
>>
>> ___
>> users mailing list
>> us...@open-mpi.org
>> http://www.open-mpi.org/mailman/listinfo.cgi/users
>
> --
> Jeff Squyres
> jsquy...@cisco.com
> For corporate legal information go to:
> http://www.cisco.com/web/about/doing_business/legal/cri/
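For reference, here is a minimal Fortran 90 sketch of the one-sided approach suggested in this thread (passive-target lock/get/unlock). The way Y and Z are computed below is a placeholder, and the code is untested; check it against your MPI library's `mpi` module before relying on it.

```fortran
program fetch_remote
  use mpi
  implicit none
  integer, parameter :: S = 100
  integer :: ierr, rank, nprocs, win, Y, Z, dp_size
  integer(kind=MPI_ADDRESS_KIND) :: winsize, disp
  double precision :: A(S), remote_val

  call MPI_Init(ierr)
  call MPI_Comm_rank(MPI_COMM_WORLD, rank, ierr)
  call MPI_Comm_size(MPI_COMM_WORLD, nprocs, ierr)

  A = rank   ! fill A with something recognizable

  ! Expose A on every process through an RMA window.
  call MPI_Type_size(MPI_DOUBLE_PRECISION, dp_size, ierr)
  winsize = S * dp_size
  call MPI_Win_create(A, winsize, dp_size, MPI_INFO_NULL, &
                      MPI_COMM_WORLD, win, ierr)

  ! Stand-ins for the values each rank computes: Y is the target
  ! rank, Z the (1-based) index into its array A.
  Y = mod(rank + 1, nprocs)
  Z = 5

  ! Passive-target access: after MPI_Win_unlock returns, the
  ! fetched value is available locally. No action is needed on
  ! rank Y, which is the whole point.
  disp = Z - 1   ! displacement in units of the window's disp_unit
  call MPI_Win_lock(MPI_LOCK_SHARED, Y, 0, win, ierr)
  call MPI_Get(remote_val, 1, MPI_DOUBLE_PRECISION, Y, disp, 1, &
               MPI_DOUBLE_PRECISION, win, ierr)
  call MPI_Win_unlock(Y, win, ierr)

  print *, 'rank', rank, 'got A(', Z, ') =', remote_val, 'from rank', Y

  call MPI_Win_free(win, ierr)
  call MPI_Finalize(ierr)
end program fetch_remote
```

The point-to-point alternative Jeff describes also works: every rank posts a receive with MPI_ANY_SOURCE for incoming index requests, reads the requester's rank out of status(MPI_SOURCE), and sends A(Z) back to that rank, which sidesteps the "Y does not know X" problem.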
Re: [OMPI users] CXX=no in config.status, breaks mpic++ wrapper
Jed --

Yes, you're right. This is something Brian bugged me about a few months ago, and I'm sorry to say that it hasn't bubbled up high enough in my priority list to look into yet. :-\

The issue is that we decided to stop building the MPI C++ bindings by default on the trunk (this does not, and will not, affect the v1.7 series -- the C++ bindings are still built by default over there). We need to decouple the decision to build the C++ bindings from setting up the C++ wrapper compiler, and that just hasn't been done yet.

A workaround for now is to configure with --enable-mpi-cxx, which will set up the MPI C++ bindings and the mpicxx wrapper compiler properly.

On Jan 14, 2014, at 7:33 PM, Jed Brown wrote:

> With ompi-git from Monday (7e023a4ebf1aeaa530f79027d00c1bdc16b215fd), configure is putting "compiler=no" in ompi/tools/wrappers/mpic++-wrapper-data.txt:
>
> # There can be multiple blocks of configuration data, chosen by
> # compiler flags (using the compiler_args key to chose which block
> # should be activated. This can be useful for multilib builds. See the
> # multilib page at:
> # https://svn.open-mpi.org/trac/ompi/wiki/compilerwrapper3264
> # for more information.
>
> project=Open MPI
> project_short=OMPI
> version=1.9a1
> language=C++
> compiler_env=CXX
> compiler_flags_env=CXXFLAGS
> compiler=no
> preprocessor_flags=
> compiler_flags_prefix=
> compiler_flags=-pthread
> linker_flags= -Wl,-rpath -Wl,@{libdir} -Wl,--enable-new-dtags
> # Note that per https://svn.open-mpi.org/trac/ompi/ticket/3422, we
> # intentionally only link in the MPI libraries (ORTE, OPAL, etc. are
> # pulled in implicitly) because we intend MPI applications to only use
> # the MPI API.
> libs= -lmpi
> libs_static= -lmpi -lopen-rte -lopen-pal -lm -lnuma -lpciaccess -ldl
> dyn_lib_file=libmpi.so
> static_lib_file=libmpi.a
> required_file=
> includedir=${includedir}
> libdir=${libdir}
>
> This breaks the wrapper:
>
> $ /path/to/mpic++
> --
> The Open MPI wrapper compiler was unable to find the specified compiler
> no in your PATH.
>
> Note that this compiler was either specified at configure time or in
> one of several possible environment variables.
> --
>
> Attaching logs because it's not obvious to me what is going wrong.
> Automake-1.14.1 and autoconf-2.69.

--
Jeff Squyres
jsquy...@cisco.com
For corporate legal information go to:
http://www.cisco.com/web/about/doing_business/legal/cri/
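Concretely, the workaround looks something like the following (paths and parallelism are illustrative, not from the thread):

```shell
# Reconfigure the trunk build with the C++ bindings enabled, so the
# wrapper data gets a real compiler instead of "compiler=no".
./configure --enable-mpi-cxx --prefix=$HOME/ompi-trunk
make -j4 install

# Sanity-check the wrapper: --showme prints the underlying compiler
# command line that mpic++ would invoke.
$HOME/ompi-trunk/bin/mpic++ --showme
```

If the first token printed by --showme is a real C++ compiler rather than "no", the wrapper is usable again.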
[OMPI users] How to use non-primitive types with Java binding
Hi, Is it possible to use non-primitive types with MPI operations in OpenMPI's Java binding? At the moment in the trunk I only see Datatypes for primitive kinds. Thank you, Saliya -- Saliya Ekanayake esal...@gmail.com
Re: [OMPI users] How to use non-primitive types with Java binding
Hi,

If you are talking about types such as ArrayList<Double>, it is not possible, because the Double (uppercase D) is an object which encapsulates a double, and the elements of an ArrayList are references (pointers) to Java objects.

You can use complex types, but you must create them with the Datatype methods (createVector, createStruct, ...). And the buffers that hold the data must be arrays of a primitive type or direct buffers.

Regards,
Oscar

Quoting Saliya Ekanayake:

> Hi,
>
> Is it possible to use non-primitive types with MPI operations in OpenMPI's Java binding? At the moment in the trunk I only see Datatypes for primitive kinds.
>
> Thank you,
> Saliya
>
> --
> Saliya Ekanayake esal...@gmail.com

This message was sent using IMP, the Internet Messaging Program.
[OMPI users] hostfile issue of openmpi-1.7.4rc2
Hi Ralph,

I encountered the hostfile issue again, where slots are counted by listing the node multiple times. This should have been fixed by r29765:

Fix hostfile parsing for the case where RMs count slots

The difference is using an RM or not. At that time, I executed mpirun through the Torque manager. This time I executed it directly from the command line as shown at the bottom, where node05 and node06 have 8 cores each.

Then I checked the source files around it and found that lines 151-161 in plm_base_launch_support.c caused this issue. As node->slots is already counted in hostfile.c @ r29765 even when node->slots_given is false, I think this part of plm_base_launch_support.c would be unnecessary.

orte/mca/plm/base/plm_base_launch_support.c @ 30189:

151 } else {
152     /* set any non-specified slot counts to 1 */
153     for (i=0; i < orte_node_pool->size; i++) {
154         if (NULL == (node = (orte_node_t*)opal_pointer_array_get_item(orte_node_pool, i))) {
155             continue;
156         }
157         if (!node->slots_given) {
158             node->slots = 1;
159         }
160     }
161 }

Removing this part, it works very well, and the function of orte_set_default_slots is still alive. I think this would be better for compatible extension of openmpi-1.7.3.

Regards,
Tetsuya Mishima

[mishima@manage work]$ cat pbs_hosts
node05
node05
node05
node05
node05
node05
node05
node05
node06
node06
node06
node06
node06
node06
node06
node06
[mishima@manage work]$ mpirun -np 4 -hostfile pbs_hosts -cpus-per-proc 4 -report-bindings myprog
[node05.cluster:22287] MCW rank 2 bound to socket 1[core 4[hwt 0]], socket 1[core 5[hwt 0]], socket 1[core 6[hwt 0]], socket 1[core 7[hwt 0]]: [./././.][B/B/B/B]
[node05.cluster:22287] MCW rank 3 is not bound (or bound to all available processors)
[node05.cluster:22287] MCW rank 0 bound to socket 0[core 0[hwt 0]], socket 0[core 1[hwt 0]], socket 0[core 2[hwt 0]], socket 0[core 3[hwt 0]]: [B/B/B/B][./././.]
[node05.cluster:22287] MCW rank 1 is not bound (or bound to all available processors)
Hello world from process 0 of 4
Hello world from process 1 of 4
Hello world from process 3 of 4
Hello world from process 2 of 4
Re: [OMPI users] How to use non-primitive types with Java binding
Thank you, Oscar. I was using an earlier nightly tarball, and in it there was an MPI.OBJECT datatype, which I could use with any serializable complex object. It seems this is no longer supported as per your answer, or did I get it wrong?

Thank you,
Saliya

On Thu, Jan 16, 2014 at 5:22 PM, Oscar Vega-Gisbert wrote:

> Hi,
>
> If you are talking about types such as ArrayList<Double>, it is not possible, because the Double (uppercase D) is an object which encapsulates a double, and the elements of an ArrayList are references (pointers) to Java objects.
>
> You can use complex types, but you must create them with the Datatype methods (createVector, createStruct, ...). And the buffers that hold the data must be arrays of a primitive type or direct buffers.
>
> Regards,
> Oscar

--
Saliya Ekanayake
esal...@gmail.com
Cell 812-391-4914 Home 812-961-6383
http://saliya.org
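To illustrate Oscar's point, here is a rough Java sketch of a derived datatype backed by a primitive array. The class and method names (Datatype.createVector, commit, getRank) are written as I understand the trunk bindings and should be checked against the mpi package javadoc; the matrix contents are arbitrary:

```java
import mpi.*;

public class VectorDemo {
    public static void main(String[] args) throws MPIException {
        MPI.Init(args);
        int rank = MPI.COMM_WORLD.getRank();

        // A 4x4 matrix stored row-major in a primitive double[] --
        // the Java bindings require primitive arrays or direct buffers,
        // not object containers like ArrayList<Double>.
        double[] matrix = new double[16];

        // Derived type describing one column: 4 blocks of 1 double,
        // stride 4 between consecutive blocks.
        Datatype column = Datatype.createVector(4, 1, 4, MPI.DOUBLE);
        column.commit();

        if (rank == 0) {
            for (int i = 0; i < 16; i++) matrix[i] = i;
            MPI.COMM_WORLD.send(matrix, 1, column, 1, 0);
        } else if (rank == 1) {
            // Receives the column elements into the strided positions
            // of this rank's matrix.
            MPI.COMM_WORLD.recv(matrix, 1, column, 0, 0);
        }

        MPI.Finalize();
    }
}
```

The data lives in the double[] throughout; the Datatype only tells MPI which elements of that buffer to touch, which is why object references cannot participate.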