Vahid,

You cannot use Fortran's vector subscripts with MPI. Are you certain that the arrays used in your bcast are contiguous? If not, you will either need to copy the data first into a single-dimension array (which will then hold the elements contiguously in memory), or define specialized MPI datatypes that match the memory layout described by your array subscript.
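Something along these lines illustrates both options. This is a standalone sketch, not EPW code; the index vector sel and all sizes are made up for illustration:

! Broadcasting a non-contiguous selection of columns of x(3,npts).
program bcast_subset
  use mpi
  implicit none
  integer, parameter :: npts = 10, nsel = 4
  integer            :: sel(nsel) = (/ 2, 5, 7, 9 /)   ! hypothetical point indices
  double precision   :: x(3, npts)
  double precision   :: tmp(3, nsel)                   ! contiguous staging buffer
  integer            :: disp(nsel), newtype, rank, ierr, i

  call MPI_Init(ierr)
  call MPI_Comm_rank(MPI_COMM_WORLD, rank, ierr)
  if (rank == 0) x = 1.0d0                  ! dummy data on the root

  ! Option 1: copy through a contiguous buffer.
  if (rank == 0) tmp = x(:, sel)            ! pack the selected columns on the root
  call MPI_Bcast(tmp, 3*nsel, MPI_DOUBLE_PRECISION, 0, MPI_COMM_WORLD, ierr)
  if (rank /= 0) x(:, sel) = tmp            ! unpack on the receivers

  ! Option 2: describe the layout with a derived datatype.
  do i = 1, nsel
     disp(i) = 3 * (sel(i) - 1)             ! column sel(i) starts at this offset
  end do
  call MPI_Type_create_indexed_block(nsel, 3, disp, MPI_DOUBLE_PRECISION, &
                                     newtype, ierr)
  call MPI_Type_commit(newtype, ierr)
  call MPI_Bcast(x, 1, newtype, 0, MPI_COMM_WORLD, ierr)
  call MPI_Type_free(newtype, ierr)

  call MPI_Finalize(ierr)
end program bcast_subset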
George.

On Sun, Oct 16, 2016 at 6:55 PM, Vahid Askarpour <vh261...@dal.ca> wrote:
> Hello,
>
> I am attempting to modify a relatively large code (Quantum Espresso/EPW) and here I will try to summarize the problem in general terms.
>
> I am using an OPENMPI-compiled fortran 90 code in which, midway through the code, say 10 points x(3,10) are broadcast across say 4 nodes. The index 3 refers to x,y,z. For each point, a number of calculations are done and an array, B(3,20,n), is generated. The integer n depends on the symmetry of the system and so varies from node to node.
>
> When I run this code serially, I can print all the correct B values to file, so I know the algorithm works. When I run it in parallel, I get numbers that are meaningless. Collecting the points would not help because I need to collect the B values. I have tried to run that section of the code on one node by setting the processor index "mpime" equal to "ionode" or "root" using the following IF statement:
>
> IF (mpime .eq. root) THEN
>   do the calculation and print B
> ENDIF
>
> Neither ionode nor root returns the correct B array.
>
> What would be the best way to extract the B array?
>
> Thank you,
>
> Vahid