[OMPI users] fortran problem when mixing "use mpi" and "use mpi_f08" with gfortran 5

2016-05-21 Thread Andrea Negri
Hi, in the last few days I ported my entire Fortran MPI code to "use mpi_f08". You really did a great job with this interface. However, since HDF5 still uses integers to handle communicators, I have a module where I still use "use mpi", and with gfortran 5.3.0 and openmpi-1.10.2 I got some errors.
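For anyone hitting the same mix: the mpi_f08 handle types carry the old INTEGER handle in their MPI_VAL component, so a single bridge module can serve both interfaces. A minimal sketch (module and routine names are hypothetical; the commented HDF5 call is assumed from its Fortran API, not taken from the original post):

    module hdf5_bridge
    contains
      subroutine open_parallel_file(comm)
        use mpi_f08, only: MPI_Comm, MPI_INFO_NULL
        implicit none
        type(MPI_Comm), intent(in) :: comm
        integer :: icomm, iinfo
        ! Extract the INTEGER handles that "use mpi"-style APIs expect.
        icomm = comm%MPI_VAL
        iinfo = MPI_INFO_NULL%MPI_VAL
        ! HDF5's Fortran API takes INTEGER comm/info, e.g.:
        !   call h5pset_fapl_mpio_f(plist_id, icomm, iinfo, hdferr)
      end subroutine open_parallel_file
    end module hdf5_bridge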

[OMPI users] some mpi processes "disappear" on a cluster of servers

2012-08-31 Thread Andrea Negri
Hi, I have been in trouble for a year. I run a pure MPI (no OpenMP) Fortran fluid-dynamics code on a cluster of servers, and I get strange behaviour when running the code on multiple nodes. The cluster is formed by 16 PCs (1 PC is a node), each with a dual-core processor. Basically, I'm able to run th

Re: [OMPI users] users Digest, Vol 2339, Issue 5

2012-09-01 Thread Andrea Negri
e unlimited stacksize in our compute nodes. > > You can ask the system administrator to check this for you, > and perhaps change it in /etc/security/limits.conf to make it > unlimited or at least larger than the default. > The Linux shell command "ulimit -a" [bash] or > "
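In practice the quoted advice boils down to something like this (a sketch; exact limits.conf syntax can vary by distribution):

    # check the current limits in bash; look at "stack size"
    ulimit -a
    # raise it for the current shell, if the hard limit allows
    ulimit -s unlimited

    # /etc/security/limits.conf entries (domain  type  item  value):
    *  soft  stack  unlimited
    *  hard  stack  unlimited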

Re: [OMPI users] some mpi processes "disappear" on a cluster of servers

2012-09-01 Thread Andrea Negri
I have tried to run with a single process (i.e. the entire grid is contained by one process), and the command free -m on the compute node returns:

                 total   used   free   shared  buffers  cached
    Mem:          3913   1540   2372        0       49    1234
    -

[OMPI users] some mpi processes "disappear" on a cluster of servers

2012-09-03 Thread Andrea Negri
c > than "Re: Contents of users digest..." > > > Today's Topics: > > 1. Re: some mpi processes "disappear" on a cluster of servers > (John Hearns) > 2. Re: users Digest, Vol 2339, Issue 5 (Andrea Negri) > > > -

Re: [OMPI users] some mpi processes "disappear" on a cluster of servers

2012-09-03 Thread Andrea Negri
e: 2 > Date: Mon, 3 Sep 2012 14:32:48 -0700 > From: Ralph Castain > Subject: Re: [OMPI users] some mpi processes "disappear" on a cluster > of servers > To: Open MPI Users > Message-ID: > Content-Type: text/plain; charset=us-ascii > > It looks to m

[OMPI users] some mpi processes "disappear" on a cluster of servers

2012-09-05 Thread Andrea Negri
--- > > Message: 2 > Date: Tue, 04 Sep 2012 10:31:05 -0700 > From: David Warren > Subject: Re: [OMPI users] some mpi processes "disappear" on a cluster > of servers > To: us...@open-mpi.org > Message-ID: <50463ad9.3030...@atmos.washin

Re: [OMPI users] some mpi processes "disappear" on a cluster of servers

2012-09-07 Thread Andrea Negri
you tested OMPI under RHEL 6 or its variants (CentOS >>> 6, SL 6)? THP is on by default in RHEL 6, so whether you want it or >>> not, it's there. >> >> Interesting. Indeed, THP is on by default in RHEL 6.x. >> I run OMPI 1.6.x constantly on RHEL 6.2, and I
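For reference, the THP status can be checked from /sys (the exact path depends on the kernel; RHEL 6 used a vendor-specific one):

    cat /sys/kernel/mm/redhat_transparent_hugepage/enabled   # RHEL 6 kernels
    cat /sys/kernel/mm/transparent_hugepage/enabled          # mainline kernels
    # the bracketed value in the output, e.g. [always] madvise never,
    # is the active setting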

Re: [OMPI users] some mpi processes "disappear" on a cluster of servers

2012-09-09 Thread Andrea Negri
und to >>>>>>>> core 0, 1, or 2 of socket 1? >>>>>>>> >>>>>>>> I tried to use a rankfile and have a problem. My rankfile contains >>>>>>>> the following lines. >>>>>>>>
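The rankfile lines themselves are cut off in the archive; for reference, Open MPI 1.x rankfile syntax for "socket:core" pinning looks roughly like this (hostname hypothetical):

    rank 0=node01 slot=1:0    # rank 0 on socket 1, core 0
    rank 1=node01 slot=1:1
    rank 2=node01 slot=1:2

    # launched with:  mpirun -np 3 -rf myrankfile ./a.out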

Re: [OMPI users] users Digest, Vol 2574, Issue 1

2013-05-14 Thread Andrea Negri
I'm not an expert on MPI, but I strongly encourage you to use

    use mpi
    implicit none

This can save a LOT of time in debugging. On 14 May 2013 18:00, wrote: > Send users mailing list submissions to > us...@open-mpi.org > > To subscribe or unsubscribe via the World Wide Web, visit >
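The point of the advice: "use mpi" gives every MPI routine an explicit interface, so many argument mistakes become compile-time errors instead of run-time crashes, and "implicit none" catches misspelled variables. A minimal illustration:

    program demo
      use mpi            ! explicit interfaces for MPI routines
      implicit none      ! misspelled variables become compile errors
      integer :: rank, ierr
      call MPI_Init(ierr)
      ! Forgetting the final ierror argument, e.g.
      !   call MPI_Comm_rank(MPI_COMM_WORLD, rank)
      ! compiles silently with "include 'mpif.h'" but is rejected here.
      call MPI_Comm_rank(MPI_COMM_WORLD, rank, ierr)
      call MPI_Finalize(ierr)
    end program demo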