Vahid,
You cannot use Fortran's vector subscripts with MPI. Are you certain that
the arrays used in your bcast are contiguous? If not, you would either need
to move the data first into a single-dimension array (which will then have
the elements contiguous in memory), or define a specialized datatype.
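A minimal sketch of the pack-into-a-contiguous-buffer approach, assuming a
double-precision array x(3,npts) and an integer list idx selecting which
points to broadcast (all names here are illustrative, not from your code):

    ! Pack the vector-subscripted selection into a contiguous buffer,
    ! broadcast that buffer, then copy it back on every rank.
    subroutine bcast_selected(x, npts, idx, nsel, root, comm)
      use mpi
      implicit none
      integer, intent(in)    :: npts, nsel, root, comm
      integer, intent(in)    :: idx(nsel)
      real(8), intent(inout) :: x(3, npts)
      real(8) :: buf(3, nsel)
      integer :: ierr
      buf = x(:, idx)                    ! gather into contiguous memory
      call MPI_BCAST(buf, 3*nsel, MPI_DOUBLE_PRECISION, root, comm, ierr)
      x(:, idx) = buf                    ! scatter back on every rank
    end subroutine bcast_selected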
Hello,
I am attempting to modify a relatively large code (Quantum Espresso/EPW) and
here I will try to summarize the problem in general terms.
I am using an OpenMPI-compiled Fortran 90 code in which, midway through the
code, say 10 points x(3,10) are broadcast across say 4 nodes. The index 3
refers to the three components of each point.
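For reference, an array declared as x(3,10) is itself one contiguous block,
so broadcasting the whole array from rank 0 needs only a single call,
roughly like this sketch (illustrative names, values set only to have
something to send):

    program bcast_points
      use mpi
      implicit none
      real(8) :: x(3, 10)
      integer :: ierr, rank
      call MPI_INIT(ierr)
      call MPI_COMM_RANK(MPI_COMM_WORLD, rank, ierr)
      if (rank == 0) x = 1.0d0          ! rank 0 holds the real data
      ! 3*10 = 30 contiguous double-precision values, one call suffices
      call MPI_BCAST(x, 30, MPI_DOUBLE_PRECISION, 0, MPI_COMM_WORLD, ierr)
      call MPI_FINALIZE(ierr)
    end program bcast_points

The trouble only starts when a vector-subscripted section such as x(:, idx)
is passed directly, as noted in the reply above.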
On 16 October 2016 at 14:50, Gilles Gouaillardet wrote:
> Out of curiosity, why do you specify both --hostfile and -H ?
> Do you observe the same behavior without --hostfile ~/.mpihosts ?
When I specify only -H like so:
mpirun -H localhost -np 1 prog1 : -H A.lan -np 4 prog2 : -H B.lan -np 4 prog2
Hi,
Am 16.10.2016 um 20:34 schrieb Mahmood Naderan:
> Hi,
> I am running two programs that use OMPI-2.0.1. The problem is that the
> CPU utilization on the nodes is low.
>
>
> For example, see the process information below
>
> [root@compute-0-1 ~]# ps aux | grep siesta
> mahmood 14635 0.0 0.0
Hi,
I am running two programs that use OMPI-2.0.1. The problem is that the CPU
utilization on the nodes is low.
For example, see the process information below
[root@compute-0-1 ~]# ps aux | grep siesta
mahmood  14635  0.0  0.0 108156  1300 ?  S  21:58  0:00 /bin/bash
/share/apps/chemistry/
If you want to keep long-waiting MPI processes from clogging your CPU
pipeline and heating up your machines, you can turn blocking MPI
collectives into nicer ones by implementing them in terms of MPI-3
nonblocking collectives using something like the following.
I typed this code straight into this message, so treat it as untested.
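A minimal sketch of that idea for broadcast, assuming a double-precision
buffer and a hypothetical wrapper name nice_bcast (MPI_IBCAST requires an
MPI-3 library):

    ! Start a nonblocking broadcast, then poll with MPI_TEST and back off
    ! between polls so a long wait does not spin a core at 100%.
    subroutine nice_bcast(buf, count, root, comm)
      use mpi
      implicit none
      integer, intent(in)    :: count, root, comm
      real(8), intent(inout) :: buf(*)
      integer :: req, ierr
      logical :: done
      call MPI_IBCAST(buf, count, MPI_DOUBLE_PRECISION, root, comm, req, ierr)
      do
        call MPI_TEST(req, done, MPI_STATUS_IGNORE, ierr)
        if (done) exit
        call sleep(1)        ! GNU extension; any short sleep/usleep will do
      end do
    end subroutine nice_bcast

The same pattern works for the other collectives (MPI_IALLREDUCE,
MPI_IBARRIER, ...) at the cost of some extra latency from the polling
interval.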
Out of curiosity, why do you specify both --hostfile and -H ?
Do you observe the same behavior without --hostfile ~/.mpihosts ?
Also, do you have at least 4 cores on both A.lan and B.lan ?
Cheers,
Gilles
On Sunday, October 16, 2016, MM wrote:
> Hi,
>
> openmpi 1.10.3
>
> this call:
>
> mpirun
Hi,
openmpi 1.10.3
this call:
mpirun --hostfile ~/.mpihosts -H localhost -np 1 prog1 : -H A.lan -np
4 prog2 : -H B.lan -np 4 prog2
works, yet this one:
mpirun --hostfile ~/.mpihosts --app ~/.mpiapp
doesn't, where ~/.mpiapp contains:
-H localhost -np 1 prog1
-H A.lan -np 4 prog2
-H B.lan -np 4 prog2
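For reference, a ~/.mpihosts hostfile consistent with those process counts
could be as simple as:

    localhost slots=1
    A.lan slots=4
    B.lan slots=4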
I would like to see if there are any updates re this thread back from 2010:
https://mail-archive.com/users@lists.open-mpi.org/msg15154.html
I've got 3 boxes at home, a laptop and 2 other quad-core nodes. When the
CPU is at 100% for a long time, the fans make quite a bit of noise :-)
The laptop runs t