Would the use of mlockall be helpful for this approach?
From: Audet, Martin
Sent: Monday, June 20, 2016 11:15 PM
To: us...@open-mpi.org
Subject: Re: [OMPI users] Avoiding the memory registration costs by having
memory always registered, is it possible with Linux ?
Thanks Jeff for your answer.
It is sad that the approach I mentioned of having all memory registered for
user processes on cluster nodes didn't become more popular.
I still believe that such an approach would shorten the executed code path in
MPI libraries, reduce message latency, increase the co
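On the mlockall question above, a minimal Linux-only sketch of locking every current and future page of a process (calling glibc through ctypes; the constants and error handling are shown only for illustration) could look like this. Note that locking pages in RAM is, by itself, not the same as registering them with the interconnect hardware, so this covers only the pinning side of the idea.

# Hypothetical sketch: lock all current and future pages of this process in RAM.
# Linux-only; needs CAP_IPC_LOCK or a sufficiently large RLIMIT_MEMLOCK.
import ctypes
import os

MCL_CURRENT = 1   # lock pages that are already mapped
MCL_FUTURE = 2    # lock pages mapped from now on

libc = ctypes.CDLL("libc.so.6", use_errno=True)
if libc.mlockall(MCL_CURRENT | MCL_FUTURE) != 0:
    err = ctypes.get_errno()
    raise OSError(err, os.strerror(err))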
Hi Rizwan,
If you need to rewrite your fork system calls, you may want to check out
MPI's spawn functionality. I recently found out about it and it's really
useful if you haven't heard of it already. I am using it through Python's
mpi4py and it seems to be working well.
Best,
Jason
Jason Maldoni
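A minimal, self-contained sketch of what Jason describes could look like the following (the file name, process count, and payload are all invented for the example; run it as a single MPI process, e.g. mpiexec -n 1 python spawn_demo.py):

# spawn_demo.py -- hypothetical example of MPI dynamic process management via mpi4py.
# The same file acts as parent and child: a spawned child sees a non-null parent comm.
import sys
from mpi4py import MPI

parent = MPI.Comm.Get_parent()

if parent == MPI.COMM_NULL:
    # Parent: spawn 4 copies of this script instead of calling fork()/system().
    inter = MPI.COMM_SELF.Spawn(sys.executable, args=[__file__], maxprocs=4)
    inter.bcast({"task": "hello"}, root=MPI.ROOT)   # hand work to the children
    results = inter.gather(None, root=MPI.ROOT)     # collect one reply per child
    print("parent received:", results)
    inter.Disconnect()
else:
    # Child: talk to the parent over the inter-communicator it was spawned with.
    work = parent.bcast(None, root=0)
    parent.gather("rank %d did %s" % (parent.Get_rank(), work["task"]), root=0)
    parent.Disconnect()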
There is no guarantee that will work on a multi-node job.
TCP should be fine, but InfiniBand might not work.
The best way to be on the safe side is to rewrite your MPI app so it does
not invoke the fork system call; fork is generally invoked directly, or via
the "system" subroutine.
Cheers,
Gilles
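To make that point concrete, here is a small before/after sketch (the file and directory names are invented): os.system() and the subprocess module both fork a child process, which is exactly the pattern that can break on fast interconnects, while an in-process call avoids the fork entirely.

# Hypothetical illustration: removing a fork()-based helper from an MPI program.
import shutil

def archive_output_with_fork(path):
    import os
    os.system("cp %s archive/" % path)   # forks /bin/sh and then cp -- the pattern to avoid

def archive_output_in_process(path):
    shutil.copy(path, "archive/")        # same result, no fork (assumes archive/ exists)

For helpers that genuinely need to be separate processes, the spawn functionality Jason mentions above is the MPI-friendly route.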
Hi Gilles,
Thanks for the support. :)
This is a test that I am running on a single node, but I intend to run
calculations on multiple nodes. Do you mean it wouldn't work on multiple nodes? If
I run on multiple nodes, how can I avoid these errors then? I would just test
it for multiple node
There are two points here:
1. slurm(stepd) is unable to put the processes in the (null) cgroup.
At first glance, this looks more like a slurm configuration issue.
2. The MPI process forking. Though this has much better support than in
the past, it might not always work, especially with fast interconnects.
Dear MPI users,
I am getting the errors below while submitting/executing the following script:
#!/bin/sh
#SBATCH -p short
#SBATCH -J layers
#SBATCH -n 12
#SBATCH -N 1
#SBATCH -t 01:30:00
#SBATCH --mem-per-cpu=2500
#SBATCH --exclusive
#SBATCH --mail-type=END
#SBATCH --mail-user=rizwan.ah...@aalto.fi
#