Hey guys,

I need to write an implementation of a 4D array whose 4th dimension varies
depending on where you are in 3D. So it's basically a normal rectangular
prism, except each cell can hold a different number of values. The program
also has to communicate by sending ghost nodes between slices so that the
actual data-modifying code can work. That data-modifying code will be added
later by other people, depending on what they need to do with this 4D array.
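
To show what I mean by "variable 4th dimension", here is a tiny sketch of the
kind of storage I have in mind (this is not my actual code; the names cells
and idx, and all the sizes, are made up for the example):

#include <vector>

// Sketch only: a "4D array" where each 3D cell holds a variable number of values.
// Nx, Ny, Nz and the per-cell sizes below are made up for the example.
int main() {
    const int Nx = 4, Ny = 3, Nz = 2;

    // One vector per 3D cell; the "4th dimension" is just that vector's length.
    std::vector<std::vector<double>> cells(Nx * Ny * Nz);

    // Flatten (i,j,k) to a 1D index the same way as in my code.
    auto idx = [&](int i, int j, int k) { return i + Nx * j + Nx * Ny * k; };

    cells[idx(1, 2, 0)].resize(5);  // this cell gets 5 values
    cells[idx(0, 0, 1)].resize(2);  // this cell gets 2 values
    return 0;
}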

So what I am really asking is: how does MPI work when you have different
objects on different computers and you try to send data between them?
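
From the reading I've done so far (please correct me if I've got this wrong),
MPI doesn't really send "objects": each rank is a separate process with its
own memory, and you send plain buffers described by a pointer, a count and an
MPI datatype, then the receiving rank copies the buffer into its own object.
A tiny sketch of what I mean (the buffer size and tag are made up):

#include <mpi.h>
#include <vector>

int main(int argc, char** argv) {
    MPI_Init(&argc, &argv);

    int rank;
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);

    const int N = 10, TAG = 0;        // made-up size and tag for the example
    std::vector<double> buf(N);       // plain contiguous buffer, not a C++ object

    if (rank == 0) {
        // rank 0 fills buf from its own object, then ships the raw values
        MPI_Send(buf.data(), N, MPI_DOUBLE, 1, TAG, MPI_COMM_WORLD);
    } else if (rank == 1) {
        MPI_Recv(buf.data(), N, MPI_DOUBLE, 0, TAG, MPI_COMM_WORLD,
                 MPI_STATUS_IGNORE);
        // rank 1 now copies buf into its own object
    }

    MPI_Finalize();
    return 0;
}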

Just some explanations for my code, which is attached, in case you can give
some code-specific tips:

   - Nx2, Nz2, Ny2 are the side lengths of the big prism
   - DOFArr holds how many values there are for each cell in 3D, with the 3D
   coordinates flattened to 1D like this: index of DOFArr = i + Nx*j + Nx*Ny*k
   - The way I was thinking of doing it is for the master to build the big
   array, split it into roughly equal parts, and send each part to a node.
   Each node would then do whatever it has to do, exchange ghost nodes with
   its neighbours, maybe do some more work, and in the end send everything
   back to the master, which would put the pieces of the array back together.
   (A rough sketch of what I mean is right after this list.)
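
Something like this is what I had in mind for the split plus ghost exchange;
it only splits along k and only ships the DOFArr counts (the values themselves
would need the same treatment), and everything except the names DOFArr, Nx,
Ny, Nz is made up for the sketch:

#include <mpi.h>
#include <vector>

// Sketch only: split the prism along k, then swap one ghost slice of counts
// with each neighbouring rank.  Sizes here are made up; the scatter of the
// real DOFArr from the master is only hinted at.
int main(int argc, char** argv) {
    MPI_Init(&argc, &argv);
    int rank, size;
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);
    MPI_Comm_size(MPI_COMM_WORLD, &size);

    const int Nx = 8, Ny = 8, Nz = 16;   // made-up global prism sizes
    const int slice = Nx * Ny;           // one k-slice worth of counts

    // Roughly equal k-range [k0, k1) for this rank.
    int k0 = rank * Nz / size;
    int k1 = (rank + 1) * Nz / size;
    int localNz = k1 - k0;

    // Local counts plus one ghost slice on each side:
    // slice 0 = lower ghost, slices 1..localNz = owned, slice localNz+1 = upper ghost.
    std::vector<int> localDOF((localNz + 2) * slice, 0);
    // ... the master would scatter the real DOFArr counts into slices 1..localNz here ...

    int below = (rank > 0)        ? rank - 1 : MPI_PROC_NULL;
    int above = (rank < size - 1) ? rank + 1 : MPI_PROC_NULL;

    // Send my lowest owned slice down; receive my upper ghost slice from above.
    MPI_Sendrecv(&localDOF[1 * slice],             slice, MPI_INT, below, 0,
                 &localDOF[(localNz + 1) * slice], slice, MPI_INT, above, 0,
                 MPI_COMM_WORLD, MPI_STATUS_IGNORE);
    // Send my highest owned slice up; receive my lower ghost slice from below.
    MPI_Sendrecv(&localDOF[localNz * slice],       slice, MPI_INT, above, 1,
                 &localDOF[0],                     slice, MPI_INT, below, 1,
                 MPI_COMM_WORLD, MPI_STATUS_IGNORE);

    MPI_Finalize();
    return 0;
}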


I am also a bit confused about when you have to call MPI_Finalize(). For the
program to run, I had to put it inside the constructor, and if I call print()
at the end of the constructor it tells me this:

job aborted:
rank: node: exit code[: error message]
0: C7June2010: 123
1: C7June2010: -1073740777: process 1 exited without calling finalize
2: C7June2010: -1073740777: process 2 exited without calling finalize


If I place print() after MPI_Finalize(), nothing shows up when I run the exe.
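
Is this the kind of layout I should be aiming for, with MPI_Init() at the very
start of main() and MPI_Finalize() as the very last MPI call, instead of
hiding them in the constructor? (The class name GhostGrid is made up for the
example.)

#include <mpi.h>
#include <cstdio>

// Made-up class for the example: it uses MPI but does NOT call
// MPI_Init or MPI_Finalize itself.
class GhostGrid {
public:
    void print(int rank) const { std::printf("hello from rank %d\n", rank); }
};

int main(int argc, char** argv) {
    MPI_Init(&argc, &argv);      // first MPI call in the whole program

    int rank;
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);

    {
        GhostGrid grid;          // object lives entirely between Init and Finalize
        grid.print(rank);
    }

    MPI_Finalize();              // last MPI call; no MPI after this point
    return 0;
}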


I just finished grade 12 and got a summer job at a research lab, and this is
my project. Thanks a lot for your help.

Alex

Attachment: main.cpp