Sent: May 6, 2010 12:24 AM
To: Open MPI Users
Subject: Re: [OMPI users] Fortran derived types
Hi Derek
On Wed, 2010-05-05 at 13:05 -0400, Cole, Derek E wrote:
> In general, even in your serial fortran code, you're already taking a
> performance hit using a derived type.
Do you have any numbers t
In general, even in your serial fortran code, you're already taking a
performance hit using a derived type. Is it really necessary? It might be
easier for you to change your fortran code into more memory-friendly structures,
and then the MPI part will be easier. The serial code will have the adde
Others may be able to chime in more, because I am no fortran expert, but you
will probably have to copy it into a contiguous block in memory. Working with
derived types is hard, especially if they are not uniform. MPI can probably
handle it technically, but the programming effort is greater. Are
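For what it's worth, the "let MPI describe the type" route in C looks roughly
like the sketch below. The struct layout and names are made up for illustration
and untested; the same idea applies to a Fortran derived type via
MPI_Type_create_struct.

    #include <mpi.h>
    #include <stddef.h>

    /* Illustrative struct standing in for the derived type. */
    typedef struct {
        double coords[3];
        int    id;
    } cell_t;

    /* Build an MPI datatype matching cell_t so arrays of it can be sent
       directly instead of being copied into a contiguous buffer first. */
    static MPI_Datatype make_cell_type(void)
    {
        int          blocklens[2] = { 3, 1 };
        MPI_Aint     displs[2]    = { offsetof(cell_t, coords),
                                      offsetof(cell_t, id) };
        MPI_Datatype types[2]     = { MPI_DOUBLE, MPI_INT };
        MPI_Datatype tmp, cell_type;

        MPI_Type_create_struct(2, blocklens, displs, types, &tmp);
        /* Resize so trailing padding in the struct is accounted for. */
        MPI_Type_create_resized(tmp, 0, sizeof(cell_t), &cell_type);
        MPI_Type_free(&tmp);
        MPI_Type_commit(&cell_type);
        return cell_type;
    }

The committed type can then be used directly in MPI_Send/MPI_Recv with a count
of however many elements you have, and should be released with MPI_Type_free
when you are done with it.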
Hi All,
I keep getting an error about running out of MPI_TYPE_MAX and needing to set
the environment variable higher. What is this, and why is it happening? All of
the types, groups, etc. that I create during my program's run are freed at the
appropriate times. Making this number 10x bigger ge
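For reference, the create/commit/free pattern I am trying to follow is roughly
the sketch below (simplified, not my actual code; NX/NY/NZ stand in for the
real extents):

    MPI_Datatype slab;

    /* One XZ plane out of a contiguous NX x NY x NZ block of doubles,
       stored as Array[x][y][z] with z varying fastest. */
    MPI_Type_vector(NX, NZ, NY * NZ, MPI_DOUBLE, &slab);
    MPI_Type_commit(&slab);

    /* ... reuse 'slab' for all the sends/receives that need it ... */

    MPI_Type_free(&slab);   /* every created type gets a matching free */

As far as I can tell every commit has a matching free, which is why the error
is confusing me.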
Thanks for the ideas. I did finally end up getting this working by sending back
to the master process. It's quite ugly and added a good bit of MPI to the code,
but it works for now, and I will revisit this later. I am not sure what the file
system is; I think it is XFS, but I don't know much ab
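For anyone who finds this in the archives later, the general shape of the
send-back-to-the-master approach is something like the sketch below
(simplified, not my actual code; local_n and local_data are placeholders for
the real buffers, and error checking is omitted):

    int rank, nprocs;
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);
    MPI_Comm_size(MPI_COMM_WORLD, &nprocs);

    int     local_n    = 0;      /* set from your decomposition        */
    double *local_data = NULL;   /* filled in by the local computation */

    int    *counts = NULL, *displs = NULL;
    double *global = NULL;

    if (rank == 0) {
        counts = malloc(nprocs * sizeof *counts);
        displs = malloc(nprocs * sizeof *displs);
    }

    /* Rank 0 learns how much each rank will contribute. */
    MPI_Gather(&local_n, 1, MPI_INT, counts, 1, MPI_INT, 0, MPI_COMM_WORLD);

    if (rank == 0) {
        int total = 0;
        for (int i = 0; i < nprocs; i++) { displs[i] = total; total += counts[i]; }
        global = malloc(total * sizeof *global);
    }

    /* Everyone ships their piece to rank 0, which then writes the file. */
    MPI_Gatherv(local_data, local_n, MPI_DOUBLE,
                global, counts, displs, MPI_DOUBLE, 0, MPI_COMM_WORLD);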
Hi all,
I posted before about doing a domain decomposition on a 3D array in C, and this
is sort of a follow-up to that. I was able to get the calculations working
correctly by performing them on XZ sub-domains for all Y dimensions of the
space. I think someone referred to this as a
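For anyone doing something similar, the bookkeeping for splitting one dimension
across ranks looks roughly like the sketch below (X is split here purely for
illustration, not because that is exactly what my code does; NX/NY/NZ are the
global grid sizes and the loop body is whatever your stencil does):

    int rank, nprocs;
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);
    MPI_Comm_size(MPI_COMM_WORLD, &nprocs);

    /* Divide NX as evenly as possible across the ranks. */
    int base     = NX / nprocs, rem = NX % nprocs;
    int nx_local = base + (rank < rem ? 1 : 0);
    int x_start  = rank * base + (rank < rem ? rank : rem);

    /* This rank updates its own range of X for every Y and Z. */
    for (int x = x_start; x < x_start + nx_local; x++)
        for (int y = 0; y < NY; y++)
            for (int z = 0; z < NZ; z++)
                ; /* stencil update for point (x, y, z) goes here */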
To: Cole, Derek E; us...@open-mpi.org
Subject: Re: [OMPI users] 3D domain decomposition with MPI
On Thu, 11 Mar 2010 12:44:01 -0500, "Cole, Derek E" wrote:
> I am replying to this via the daily-digest message I got. Sorry it
> wasn't sooner... I didn't realize I was getting replies
I am replying to this via the daily-digest message I got. Sorry it wasn't
sooner... I didn't realize I was getting replies until I got the digest. Does
anyone know how to change it so I get the emails as you all send them?
>>Unless your computation is so "embarrassingly parallel" that each proc
Hi all. I am relatively new to MPI, and so this may be covered somewhere else,
but I can't seem to find any links to tutorials mentioning any specifics, so
perhaps someone here can help.
In C, I have a 3D array that I have dynamically allocated and access like
Array[x][y][z]. I was hoping to ca
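(One common way to set such an array up as a single contiguous block, which
makes the MPI side much simpler, is sketched below; alloc3d and the sizes are
illustrative and untested.)

    #include <stdlib.h>

    /* Allocate nx*ny*nz doubles in one contiguous block and build the
       Array[x][y][z] pointer tables on top of it. */
    double ***alloc3d(int nx, int ny, int nz)
    {
        double   *block = malloc((size_t)nx * ny * nz * sizeof *block);
        double ***a     = malloc((size_t)nx * sizeof *a);
        for (int x = 0; x < nx; x++) {
            a[x] = malloc((size_t)ny * sizeof *a[x]);
            for (int y = 0; y < ny; y++)
                a[x][y] = block + ((size_t)x * ny + y) * nz;
        }
        return a;   /* &a[0][0][0] is the start of the contiguous data */
    }

    /* The whole array can then go out in one call, e.g.
       MPI_Send(&Array[0][0][0], nx*ny*nz, MPI_DOUBLE, dest, tag, comm); */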