Hi Gilles,
Thank you very much for your answer. I modified the code and it worked!
Here is the modified code:
program main
use mpi
integer myid, numprocs, ierr
integer comm1d, nbrbottom, nbrtop, s, e, it
call MPI_INIT( ierr )
call MPI_COMM_RANK( MPI_COMM_WORLD, myid, ierr )
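A minimal sketch of how the corrected MPI_Cart_create call could look, assuming the fix Gilles describes below (dims and periods passed as 1-element arrays; the rest of the modified program is cut off in this excerpt):
integer dims(1)
logical periods(1)
dims(1) = numprocs       ! 1-D decomposition over all ranks
periods(1) = .false.     ! non-periodic in that dimension
call MPI_CART_CREATE( MPI_COMM_WORLD, 1, dims, periods, .true., comm1d, ierr )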
Hector,
numprocs and .false. are scalars, but MPI_Cart_create expects one-dimensional
arrays for the dims and periods arguments.
Can you fix this and try again?
Cheers,
Gilles
On Wednesday, October 7, 2015, Hector E Barrios Molano wrote:
> Hi Open MPI Experts!
>
> I'm using OpenMPI v1.10.0 and get this error when using MPI_CART_CREATE:
> On Oct 6, 2015, at 12:41 PM, marcin.krotkiewski wrote:
>
>
> Ralph, maybe I was not precise - most likely --cpu_bind does not work on my
> system because it is disabled in SLURM, and is not caused by any problem in
> OpenMPI. I am not certain and I will have to investigate this further,
these flags available in master and v1.10 branches and make sure that
rank-to-core allocation is done starting from the CPU socket closer to the HCA.
Of course you can have the same effect with taskset.
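For example (hypothetical core numbers; which cores sit on the socket nearest the HCA depends on the machine), a whole run can be restricted to that socket with something like:
taskset -c 0-7 ./my_mpi_app
Pinning individual ranks to distinct cores inside that mask would still need a small per-rank wrapper script.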
On Mon, Oct 5, 2015 at 8:46 PM, Dave Love wrote:
> Mike Dubman writes:
>
> > what is your command
Ralph, maybe I was not precise - most likely --cpu_bind does not work on
my system because it is disabled in SLURM, and is not caused by any
problem in OpenMPI. I am not certain and I will have to investigate this
further, so please do not waste your time on this.
What do you mean by 'loss of dynamics support'?
I’ll have to fix it later this week - I’m out due to eye surgery today. Looks like
something didn’t get across to 1.10 as it should have. There are other
tradeoffs that occur when you go to direct launch (e.g., loss of dynamics
support) - they may or may not be of concern for your usage.
> On Oct 6, 201
Hi Open MPI Experts!
I'm using OpenMPI v1.10.0 and get this error when using MPI_CART_CREATE:
simple.f90(10): error #6285: There is no matching specific subroutine for this generic subroutine call.   [MPI_CART_CREATE]
call MPI_CART_CREATE( MPI_COMM_WORLD, 1, numprocs, .false., .true., comm1d,
Thanks, Gilles. This is a good suggestion and I will pursue this
direction. The problem is that currently SLURM does not support
--cpu_bind on my system for whatever reasons. I may work towards turning
this option on if that will be necessary, but it would also be good to
be able to do it wit
Thank you both for your suggestion. I still cannot make this work
though, and I think - as Ralph predicted - most problems are likely
related to non-homogeneous mapping of cpus to jobs. But there are
problems even before that part...
If I reserve one entire compute node with SLURM:
salloc --nta
Hi Everyone!
I would like to understand how checkpoint tools like BLCR and DMTCP work
with OpenMPI. I would be glad if you could answer the following questions:
1) BLCR and DMTCP take checkpoints of the parallel processes. Are the
checkpoints taken in a coordinated way? I mean, there is a s
Gilles,
Yes, it seemed that all was fine with binding in the patched 1.10.1rc1 -
thank you. Eagerly waiting for the other patches, let me know and I will
test them later this week.
Marcin
On 10/06/2015 12:09 PM, Gilles Gouaillardet wrote:
Marcin,
my understanding is that in this case, patched v1.10.1rc1 is working just fine.
Marcin,
my understanding is that in this case, patched v1.10.1rc1 is working just
fine.
Am I right?
I prepared two patches:
one to remove the warning when binding to one core if only one core is
available,
and another one to add a warning if the user asks for a binding policy that
makes no sense with th
On Monday 05 Oct 2015 22:13:13 Jeff Squyres wrote:
> On Oct 3, 2015, at 9:14 AM, Dimitar Pashov wrote:
> > Hi, I have a pet bug causing silent data corruption here:
> > https://github.com/open-mpi/ompi/issues/965
> >
> > which seems to have a fix committed some time later. I've tested
> > v1.1
Marcin,
did you investigate direct launch (e.g. srun) instead of mpirun?
For example, you can do:
srun --ntasks=2 --cpus-per-task=4 -l grep Cpus_allowed_list /proc/self/status
Note: you might have to use the srun --cpu_bind option, and make sure
your slurm config does support that:
srun --n
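For illustration only, assuming a SLURM configuration where the option is enabled, such an invocation might look like:
srun --ntasks=2 --cpus-per-task=4 --cpu_bind=cores -l grep Cpus_allowed_list /proc/self/status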