Noted. Thanks again
-- Sid
On 18 August 2013 18:40, Ralph Castain wrote:
> It only has to come after MPI_Init *if* you are telling mpirun to bind you
> as well. Otherwise, you could just not tell mpirun to bind (it doesn't by
> default) and then bind anywhere, anytime you like
It only has to come after MPI_Init *if* you are telling mpirun to bind you as
well. Otherwise, you could just not tell mpirun to bind (it doesn't by default)
and then bind anywhere, anytime you like
On Aug 18, 2013, at 3:24 PM, Siddhartha Jana wrote:
> A process can always change its binding by "re-binding" to wherever it
> wants after MPI_Init completes.
>
Noted. Thanks. I guess the important thing that I wanted to know was that
the binding needs to happen *after* MPI_Init() completes.
Thanks all
-- Siddhartha
A process can always change its binding by "re-binding" to wherever it wants
after MPI_Init completes.
On Aug 18, 2013, at 9:38 AM, Siddhartha Jana wrote:
> Firstly, I would like my program to dynamically assign itself to one of
> the cores it pleases and remain bound to it until it later reschedules
> itself.
Firstly, I would like my program to dynamically assign itself to one of
the cores it pleases and remain bound to it until it later reschedules
itself.
Ralph Castain wrote:
>> If you just want mpirun to respect an external cpuset limitation, it
>> already does so when binding - it will bind within the external limitation
Le 18/08/2013 14:51, Siddhartha Jana a écrit :
>
> If all the above works and does not return errors (you should
> check that
> your application's PID is in /dev/cpuset/socket0/tasks while running),
> bind-to-core won't clash with it, at least when using an OMPI that uses
> hwloc
If you require that a specific rank go to a specific core, then use the
rankfile mapper - you can see explanations on the syntax in "man mpirun"
If you just want mpirun to respect an external cpuset limitation, it already
does so when binding - it will bind within the external limitation
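For reference, a rankfile pinning three ranks might look like the following; the hostnames nodeA/nodeB are placeholders, and the exact grammar for your version is in "man mpirun":

```
rank 0=nodeA slot=0
rank 1=nodeA slot=1
rank 2=nodeB slot=0
```

launched with something like: mpirun -np 3 --rankfile myrankfile ./a.out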
So my question really boils down to:
How does one ensure that mpirun launches the processes on the specific
cores they are expected to be bound to?
As I mentioned, if there were a way to specify the cores through the
hostfile, this problem should be solved.
Thanks for all the quick replies!
Thanks John. But I have an incredibly small system. 2 nodes - 16 cores each.
2-4 MPI processes. :-)
On 18 August 2013 09:03, John Hearns wrote:
> You really should install a job scheduler.
> There are free versions.
>
> I'm not sure about cpuset support in Gridengine. Anyone?
You really should install a job scheduler.
There are free versions.
I'm not sure about cpuset support in Gridengine. Anyone?
Bug system?
Big system!
On a big system you can boot the system into a 'boot cpuset'.
So all system processes run in a small number of low-numbered cores, plus
any login sessions. The batch system then creates cpusets on the higher
numbered cores - free from OS interference.
Noted. Thanks. Unfortunately, in my case the cluster is a basic Linux
cluster without any job schedulers.
On 18 August 2013 02:30, John Hearns wrote:
> For information, if you use a batch system such as PbsPro or Torque it can
> be configured to set up the cpuset for a job and start the job within the
> cpuset.
Hi,
Thanks for the reply,
> My requirements:
> > 1. Prevent the OS from scheduling tasks on cores 0-7 allocated to my
> > process.
> > 2. Avoid rescheduling of processes to other cores.
> >
> > My solution: I use Linux's CPU-shielding.
> > [ Man page:
> > http://www.kernel.org/doc/man-pages/onl
Le 18/08/2013 05:34, Siddhartha Jana a écrit :
> Hi,
>
> My requirement:
> 1. Prevent the OS from scheduling tasks on cores 0-7 allocated to my
> process.
> 2. Avoid rescheduling of processes to other cores.
>
> My solution: I use Linux's CPU-shielding.
> [ Man page:
> http://www.kernel.org/doc/m
For information, if you use a batch system such as PbsPro or Torque it can
be configured to set up the cpuset for a job and start the job within the
cpuset. It will also destroy the cpuset at the end of a job.
Highly useful for job cpu binding as you say and also if you have a machine
running many
During make I get several instruction errors for push, pushl, pop, and popl
in atomic-asm.S, which is included indirectly in asm.c. For example,
for the first reported "Error", the instruction
pushl %ebp
apparently generates the error message
atomic-asm.S:5: Error: invalid instruction suffix