Hi Andrea,
I would guess this is a memory problem.
Do you know how much memory each node has? Do you know how much memory
each MPI process in the CFD code requires?
If the program starts swapping/paging to disk because it is running low
on memory, the strange behaviour you described can happen.
I wo
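(A minimal way to check the per-rank memory footprint, assuming the nodes run
Linux so /proc/self/status is available; this is only an illustrative sketch,
not something from the original thread:)

/* report_rss.c - each MPI rank prints its resident set size (Linux only).
 * Build with e.g.:  mpicc report_rss.c -o report_rss                     */
#include <mpi.h>
#include <stdio.h>
#include <string.h>

int main(int argc, char **argv)
{
    int rank, hostlen;
    char host[MPI_MAX_PROCESSOR_NAME];
    char line[256];
    FILE *f;

    MPI_Init(&argc, &argv);
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);
    MPI_Get_processor_name(host, &hostlen);

    /* VmRSS in /proc/self/status is the physical memory this process holds */
    f = fopen("/proc/self/status", "r");
    if (f != NULL) {
        while (fgets(line, sizeof(line), f)) {
            if (strncmp(line, "VmRSS:", 6) == 0)
                printf("rank %d on %s: %s", rank, host, line);
        }
        fclose(f);
    }

    MPI_Finalize();
    return 0;
}

Summing VmRSS for the ranks on one node and comparing it against that node's
physical memory shows whether swapping is plausible.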
Hi, I have been struggling with this problem for a year.
I run a pure MPI (no OpenMP) Fortran fluid dynamics code on a cluster
of servers, and I observe strange behaviour when running the code on
multiple nodes.
The cluster consists of 16 PCs (each PC is a node), each with a
dual-core processor.
Basically, I'm able to run th
Thanks, much appreciated.
On Fri, Aug 31, 2012 at 2:37 PM, Ralph Castain wrote:
> I see - well, I hope to work on it this weekend and may get it fixed. If I
> do, I can provide you with a patch for the 1.6 series that you can use until
> the actual release is issued, if that helps.
>
>
> On Aug
I see - well, I hope to work on it this weekend and may get it fixed. If I do,
I can provide you with a patch for the 1.6 series that you can use until the
actual release is issued, if that helps.
On Aug 31, 2012, at 2:33 PM, Brian Budge wrote:
> Hi Ralph -
>
> This is true, but we may not k
Hi Ralph -
This is true, but we may not know until well into the process whether
we need MPI at all. We have an SMP/NUMA mode that is designed to run
faster on a single machine. We may also build our application on
machines where there is no MPI, in which case we simply don't build the code
that runs the
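(A sketch of the pattern being described here, building the MPI path only when
the library is present and deciding late whether it is needed at all; HAVE_MPI
and use_cluster() are illustrative names, not part of any real build system or
API:)

#ifdef HAVE_MPI
#include <mpi.h>
#endif
#include <stdio.h>

/* placeholder for whatever logic decides, possibly well into the run,
 * that a multi-node job is actually required */
static int use_cluster(int argc, char **argv)
{
    return argc > 1;
}

int main(int argc, char **argv)
{
#ifdef HAVE_MPI
    if (use_cluster(argc, argv)) {
        MPI_Init(&argc, &argv);
        /* ... distributed solver path ... */
        MPI_Finalize();
        return 0;
    }
#endif
    /* SMP/NUMA single-machine path, available even when MPI was not built in */
    printf("running in single-node mode\n");
    return 0;
}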
On Aug 30, 2012, at 11:35 PM, Ammar Ahmad Awan wrote:
> My real problem is that I want to access fields of the MPI_File
> structure other than the ones exposed by the API, e.g. fd_sys.
>
> Atomicity was just one example I used to explain my problem. If MPI_File is
> an opaque struct
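(Since MPI_File is opaque and fd_sys is an internal field of ROMIO's private
ADIO structures, the portable route stays within the public MPI-IO API; a
minimal sketch using the atomicity calls, with "out.dat" as an assumed file
name:)

/* atomic_open.c - open a file collectively and request atomic mode
 * through the public MPI-IO API.                                   */
#include <mpi.h>
#include <stdio.h>

int main(int argc, char **argv)
{
    MPI_File fh;
    int flag;

    MPI_Init(&argc, &argv);
    MPI_File_open(MPI_COMM_WORLD, "out.dat",
                  MPI_MODE_CREATE | MPI_MODE_RDWR, MPI_INFO_NULL, &fh);

    MPI_File_set_atomicity(fh, 1);      /* request atomic access mode */
    MPI_File_get_atomicity(fh, &flag);  /* see what the implementation granted */
    printf("atomicity = %d\n", flag);

    MPI_File_close(&fh);
    MPI_Finalize();
    return 0;
}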
(reposted with consolidated information)
I have a test rig comprising 2 i7 systems with 8 GB RAM and Mellanox III
HCA 10G cards,
running CentOS 5.7, kernel 2.6.18-274,
Open MPI 1.4.3,
MLNX_OFED_LINUX-1.5.3-1.0.0.2 (OFED-1.5.3-1.0.0.2),
on a Cisco 24-port switch.
Normal performance is:
$ mpirun --mca btl openi