Hi, in the last few days I ported my entire Fortran MPI code to "use
mpi_f08". You really did a great job with this interface. However,
since HDF5 still uses integers to handle communicators, I have a
module where I still use "use mpi", and with gfortran 5.3.0 and
openmpi-1.10.2 I got some errors.
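(Not part of the original post, just an illustration.) One way to avoid the
mixed-module workaround is to pass HDF5 the plain INTEGER handle that the
mpi_f08 derived types carry in their MPI_VAL component. A minimal sketch,
assuming parallel HDF5 with the Fortran interface (h5pset_fapl_mpio_f) is
available; names such as fapl_id are mine:

    program hdf5_comm_sketch
      use mpi_f08            ! communicators are TYPE(MPI_Comm)
      use hdf5               ! HDF5 Fortran interface
      implicit none
      type(MPI_Comm) :: comm
      integer(hid_t) :: fapl_id
      integer        :: hdferr

      call MPI_Init()
      comm = MPI_COMM_WORLD

      call h5open_f(hdferr)
      call h5pcreate_f(H5P_FILE_ACCESS_F, fapl_id, hdferr)
      ! HDF5 expects old-style INTEGER handles, so pass the %MPI_VAL components.
      call h5pset_fapl_mpio_f(fapl_id, comm%MPI_VAL, MPI_INFO_NULL%MPI_VAL, hdferr)
      call h5pclose_f(fapl_id, hdferr)
      call h5close_f(hdferr)
      call MPI_Finalize()
    end program hdf5_comm_sketch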
Hi, I have been struggling with this problem for a year.
I run a pure MPI (no OpenMP) Fortran fluid dynamics code on a cluster
of servers, and I get strange behaviour when I run the code on
multiple nodes.
The cluster consists of 16 PCs (1 PC is a node), each with a dual-core processor.
Basically, I'm able to run th
e unlimited stacksize in our compute nodes.
>
> You can ask the system administrator to check this for you,
> and perhaps change it in /etc/security/limits.conf to make it
> unlimited or at least larger than the default.
> The Linux shell command "ulimit -a" [bash] or
> "
I have tried to run with a single process (i.e. the entire grid is
contained in one process), and the command free -m on the compute
node returns

             total       used       free     shared    buffers     cached
Mem:          3913       1540       2372          0         49       1234
>
>
> Today's Topics:
>
>    1. Re: some mpi processes "disappear" on a cluster of servers
>       (John Hearns)
>    2. Re: users Digest, Vol 2339, Issue 5 (Andrea Negri)
>
>
> ------------------------------
>
> Message: 2
> Date: Mon, 3 Sep 2012 14:32:48 -0700
> From: Ralph Castain
> Subject: Re: [OMPI users] some mpi processes "disappear" on a cluster
> of servers
> To: Open MPI Users
> Message-ID:
> Content-Type: text/plain; charset=us-ascii
>
> It looks to m
---
>
> Message: 2
> Date: Tue, 04 Sep 2012 10:31:05 -0700
> From: David Warren
> Subject: Re: [OMPI users] some mpi processes "disappear" on a cluster
> of servers
> To: us...@open-mpi.org
> Message-ID: <50463ad9.3030...@atmos.washin
you tested OMPI under RHEL 6 or its variants (CentOS
>>> 6, SL 6)? THP is on by default in RHEL 6, so whether you want it or
>>> not, it's there.
>>
>> Interesting. Indeed, THP is on by default in RHEL 6.x.
>> I run OMPI 1.6.x constantly on RHEL 6.2, and I
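(Added for context, not from the original messages.) On RHEL 6 the backported
THP implementation is usually controlled through sysfs; to my recollection the
path carries a redhat_ prefix there, while upstream kernels drop it. Treat the
exact paths as an assumption to verify on your own kernel:

    cat /sys/kernel/mm/redhat_transparent_hugepage/enabled
    echo never > /sys/kernel/mm/redhat_transparent_hugepage/enabled   # as root, to disable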
und to
>>>>>>>> core 0, 1, or 2 of socket 1?
>>>>>>>>
>>>>>>>> I tried to use a rankfile and ran into a problem. My rankfile
>>>>>>>> contains the following lines.
>>>>>>>>
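(The rankfile quoted in the original message was not preserved in this
excerpt.) Purely as an illustration of the syntax, an Open MPI rankfile that
binds rank 0 on a hypothetical host "node01" to cores 0-2 of socket 1 could
read

    rank 0=node01 slot=1:0-2

where "slot=socket:core-list" is the notation described in the mpirun
documentation.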
I'm not an MPI expert, but I strongly encourage you to use
use mpi
implicit none
This can save a LOT of debugging time.
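A small sketch of what that buys you (my example, not the original poster's
code): with "use mpi" providing explicit interfaces and "implicit none"
forbidding undeclared variables, many mistakes turn into compile-time errors
instead of silent run-time corruption.

    program hello_sketch
      use mpi
      implicit none
      integer :: rank, nprocs, ierr

      call MPI_Init(ierr)
      call MPI_Comm_rank(MPI_COMM_WORLD, rank, ierr)
      call MPI_Comm_size(MPI_COMM_WORLD, nprocs, ierr)
      ! Misspelling "rank" or dropping the "ierr" argument above can now be
      ! caught by the compiler rather than failing mysteriously at run time.
      print *, 'rank', rank, 'of', nprocs
      call MPI_Finalize(ierr)
    end program hello_sketch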