Hi all
I encountered a problem with mpirun and SSH when using OMPI 1.10.0 compiled
with gcc, running on CentOS 7.2.
When I execute mpirun on my 2-node cluster, I get the error pasted
below.
[douraku@master home]$ mpirun -np 12 a.out
Permission denied (publickey,gssapi-keyex,gssapi-with-mic).
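This error comes from sshd on the second node rather than from Open MPI itself: mpirun
launches its helper daemons over ssh, so passwordless (key-based) ssh from the node running
mpirun to every other node is normally required. A minimal sketch of the usual setup, assuming
the second node is reachable as node2 (the hostname is only an illustration):

    [douraku@master home]$ ssh-keygen -t rsa
    [douraku@master home]$ ssh-copy-id douraku@node2
    [douraku@master home]$ ssh node2 hostname

Once the last command prints the remote hostname without asking for a password, mpirun should
be able to start processes on both nodes.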
Thank you, Ralph, for the detailed explanation.
On Thu, May 19, 2016 at 7:36 PM, Ralph Castain wrote:
> No!! A “slot” is purely a bookkeeping construct that schedulers use to
> tell you how many procs you can run. It has nothing to do with a core or
> any other physical resource.
>
> It is true t
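The bookkeeping nature of slots is easy to see in a hostfile; for example (illustrative entries,
not taken from this thread):

    node01 slots=4
    node02 slots=8

With this hostfile, "mpirun -np 12 -hostfile ~/hostfile a.out" would typically place 4 ranks on
node01 and 8 on node02, whatever the actual core counts are; the slot values only tell the
mapper how many processes it may start on each node.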
I just fixed it today - waiting for Nathan to provide one more element before
committing
> On May 21, 2016, at 1:17 PM, dpchoudh . wrote:
>
> Hello all
>
> I have started noticing this message since yesterday on builds from the
> master branch. Any simple mpirun command, such as:
>
> mpirun
Hello all
I have started noticing this message since yesterday on builds from the
master branch. Any simple mpirun command, such as:
mpirun -np 2 -hostfile ~/hostfile -mca btl self,tcp hostname
generates a warning/error like this:
*Duplicate cmd line entry mca*
The hostfile, in my case, is jus
16d9f71d01cc should provide a fix for this issue.
George.
On Sat, May 21, 2016 at 12:08 PM, Akihiro Tabuchi <
tabu...@hpcs.cs.tsukuba.ac.jp> wrote:
> Hi Gilles,
>
> Thanks for your quick response and patch.
>
> After applying the patch to 1.10.2, the test code and our program, which
> uses nested hvector types, ran without error.
Best performance per dollar for CPU systems usually comes from a single-socket,
mid-core-count system one generation old, such as an Intel Haswell or Broadwell
Core i7. You might get lucky and find, e.g., 12-core Xeon processors cheap now.
If you want lots of MPI ranks per dollar, look at the Intel Knights Corner Xeon
Phi.
Hi, in the last few days I ported my entire Fortran MPI code to "use
mpi_f08". You really did a great job with this interface. However,
since HDF5 still uses INTEGER handles for communicators, I have a
module where I still use "use mpi", and with gfortran 5.3.0 and
openmpi-1.10.2 I got some errors.
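For reference, a minimal sketch (not from the original post) of one way to bridge the two
interfaces: the mpi_f08 handle types expose the old-style INTEGER handle through their MPI_VAL
component, so the communicator can be converted right where an INTEGER-based library such as
HDF5 needs it:

    program f08_handle_bridge
       use mpi_f08
       implicit none
       type(MPI_Comm) :: comm     ! mpi_f08 handle
       integer        :: icomm    ! old-style INTEGER handle for "use mpi"-era interfaces

       call MPI_Init()
       comm  = MPI_COMM_WORLD
       icomm = comm%MPI_VAL       ! same communicator, as an INTEGER handle
       ! ... pass icomm to HDF5 Fortran routines that expect an INTEGER communicator ...
       call MPI_Finalize()
    end program f08_handle_bridge

The reverse direction works with the structure constructor, e.g. comm = MPI_Comm(icomm).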
Hi Gilles,
Thanks for your quick response and patch.
After applying the patch to 1.10.2, the test code and our program, which uses nested hvector
types, ran without error.
I hope the patch will be applied to future releases.
Regards,
Akihiro
On 2016/05/21 23:15, Gilles Gouaillardet wrote:
> Here
Here are attached two patches (one for master, one for v1.10)
please consider these as experimental ones:
- they cannot hurt
- they might not always work
- they will likely allocate a bit more memory than necessary
- if something goes wrong, it will hopefully be caught soon enough in
a new assert
Tabuchi-san,
thanks for the report.
This is indeed a bug I was able to reproduce on my Linux laptop (for
some unknown reason, there is no crash on OS X).
ompi_datatype_pack_description_length mallocs 88 bytes for the datatype
description, but 96 bytes are required.
This causes a memory corruption.
Hi,
With Open MPI 1.10.2, MPI_Type_free crashes on a deeply nested derived type after
using MPI_Put/Get
with that datatype as the target_datatype.
The test code is attached.
In the code, MPI_Type_free crashes if N_NEST >= 4.
This problem occurs with Open MPI 1.8.5 or later.
There is no problem with Open MPI 1
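The attached test code is not reproduced here; the following is only a rough sketch of the kind
of construction described (hvector types nested N_NEST levels deep, later released with
MPI_Type_free), with arbitrary counts and strides:

    ! Rough sketch only, not the attached test code: build N_NEST levels of
    ! hvector types on top of MPI_INTEGER, commit the outermost one, free it.
    program nested_hvector_sketch
       use mpi_f08
       implicit none
       integer, parameter :: N_NEST = 4          ! the report sees crashes for N_NEST >= 4
       type(MPI_Datatype) :: dtype, inner
       integer :: i

       call MPI_Init()
       dtype = MPI_INTEGER
       do i = 1, N_NEST
          inner = dtype
          ! count=2, blocklength=1, stride=16 bytes are arbitrary values for the sketch
          call MPI_Type_create_hvector(2, 1, 16_MPI_ADDRESS_KIND, inner, dtype)
          if (i > 1) call MPI_Type_free(inner)    ! free the intermediate levels
       end do
       call MPI_Type_commit(dtype)
       ! ... in the report, dtype is then used as target_datatype in MPI_Put/MPI_Get ...
       call MPI_Type_free(dtype)                  ! the call reported to crash
       call MPI_Finalize()
    end program nested_hvector_sketch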