I don't know about your users, but experience has, unfortunately, taught
us to assume that users' jobs are very, very badly-behaved.
I choose to assume that it's incompetence on the part of programmers and
users, rather than malice, though. :-)
Lloyd Brown
Systems Administrator
Fulton Supercomputing Lab
Gus Correa writes:
> On 03/27/2014 05:05 AM, Andreas Schäfer wrote:
>>> Queue systems won't allow resources to be oversubscribed.
[Maybe that meant that resource managers can, and typically do, prevent
resources being oversubscribed.]
>> I'm fairly confident that you can configure Slurm to oversubscribe
>> nodes: just specify more cores for a node than are actually present.
Gus Correa writes:
> Torque+Maui, SGE/OGE, and Slurm are free.
[OGE certainly wasn't free, but it apparently no longer exists --
another thing Oracle screwed up and eventually dumped.]
> If you build the queue system with cpuset control, a node can be
> shared among several jobs, but the cpus/cores
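[For context on the cpuset point: in Slurm, this per-job confinement of a
shared node is typically done with the cgroup task plugin. A minimal
sketch, using option names from Slurm's documentation rather than from
this thread:

    # slurm.conf -- confine each job's tasks to its allocated cores
    TaskPlugin=task/cgroup

    # cgroup.conf -- enforce the core (cpuset) boundaries via cgroups
    ConstrainCores=yes

Torque offers a similar facility when built with --enable-cpuset, which
is presumably what Gus is alluding to.]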
On 27.03.2014 at 16:31, Gus Correa wrote:
> On 03/27/2014 05:05 AM, Andreas Schäfer wrote:
>>> Queue systems won't allow resources to be oversubscribed.
>> I'm fairly confident that you can configure Slurm to oversubscribe
>> nodes: just specify more cores for a node than are actually present.
>
On 03/27/2014 05:05 AM, Andreas Schäfer wrote:
> Queue systems won't allow resources to be oversubscribed.
I'm fairly confident that you can configure Slurm to oversubscribe
nodes: just specify more cores for a node than are actually present.
That is true.
If you lie to the queue system about
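[To make the Slurm suggestion concrete: a minimal slurm.conf sketch of a
node entry that over-reports its core count. The hostname and numbers
are illustrative, not from the thread; assume a node that really has 16
cores:

    # Advertising 32 CPUs on a 16-core node lets the scheduler place
    # twice as many tasks there -- deliberate 2x oversubscription.
    NodeName=node01 CPUs=32 State=UNKNOWN
    PartitionName=batch Nodes=node01 Default=YES Shared=YES

Whether the partition also needs Shared=YES (OverSubscribe=YES in later
Slurm releases) depends on the Slurm version; the suggestion quoted
above relies only on the inflated core count.]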
On 03/27/2014 10:19 AM, Andreas Schäfer wrote:
On 14:26 Wed 26 Mar 2014, Ross Boylan wrote:
[Main part is at the bottom]
On Wed, 2014-03-26 at 19:28 +0100, Andreas Schäfer wrote:
If you have a complex workflow with varying computational loads, then
you might want to take a look at runtime systems which allow you to
express this
On 14:26 Wed 26 Mar 2014, Ross Boylan wrote:
> [Main part is at the bottom]
> On Wed, 2014-03-26 at 19:28 +0100, Andreas Schäfer wrote:
> > If you have a complex workflow with varying computational loads, then
> > you might want to take a look at runtime systems which allow you to
> > express this
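[Andreas's suggestion is cut off here, and he names no specific runtime
in what survives. As one hedged illustration of "expressing varying
computational loads", task-based runtimes such as OpenMP tasking let the
scheduler balance irregular work items; the work() function below is a
stand-in for a real variable-cost job:

    /* Irregular work items submitted as OpenMP tasks; the runtime
     * load-balances them across the thread team. Compile with
     * e.g. "gcc -fopenmp tasks.c". */
    #include <stdio.h>

    static double work(int i)              /* cost varies with i */
    {
        double s = 0.0;
        for (long k = 1; k <= 100000L * (i % 7 + 1); k++)
            s += 1.0 / (double)k;
        return s;
    }

    int main(void)
    {
        double total = 0.0;
    #pragma omp parallel
    #pragma omp single                     /* one thread spawns tasks */
        for (int i = 0; i < 64; i++) {
    #pragma omp task
            {
                double r = work(i);
    #pragma omp atomic
                total += r;
            }
        }
        /* implicit barrier: all tasks have finished here */
        printf("total = %f\n", total);
        return 0;
    }
]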
Heya,
On 19:21 Wed 26 Mar 2014, Gus Correa wrote:
> On 03/26/2014 05:26 PM, Ross Boylan wrote:
> > [Main part is at the bottom]
> > On Wed, 2014-03-26 at 19:28 +0100, Andreas Schäfer wrote:
> >> On 09:08 Wed 26 Mar 2014, Ross Boylan wrote:
> >>> Second, we do not operate in a batch queuing environment
On 03/26/2014 05:26 PM, Ross Boylan wrote:
[Main part is at the bottom]
On Wed, 2014-03-26 at 19:28 +0100, Andreas Schäfer wrote:
Ross-
On 09:08 Wed 26 Mar 2014, Ross Boylan wrote:
On Wed, 2014-03-26 at 10:27, Jeff Squyres (jsquyres) wrote:
On Mar 26, 2014, at 1:31 AM, Andreas Schäfer wrote:
[Main part is at the bottom]
On Wed, 2014-03-26 at 19:28 +0100, Andreas Schäfer wrote:
> Ross-
>
> On 09:08 Wed 26 Mar 2014, Ross Boylan wrote:
> > On Wed, 2014-03-26 at 10:27, Jeff Squyres (jsquyres) wrote:
> > > On Mar 26, 2014, at 1:31 AM, Andreas Schäfer wrote:
...
> > This seems to res
Ross-
On 09:08 Wed 26 Mar 2014, Ross Boylan wrote:
> On Wed, 2014-03-26 at 10:27, Jeff Squyres (jsquyres) wrote:
> > On Mar 26, 2014, at 1:31 AM, Andreas Schäfer wrote:
> >
> > >> Even when "idle", MPI processes use all the CPU. I thought I remember
> > >> someone saying that they will be low priority, and so not pose much of
> > >> an obstacle to other uses of the CPU.
On Wed, 2014-03-26 at 10:27, Jeff Squyres (jsquyres) wrote:
> On Mar 26, 2014, at 1:31 AM, Andreas Schäfer wrote:
>
> >> Even when "idle", MPI processes use all the CPU. I thought I remember
> >> someone saying that they will be low priority, and so not pose much of
> >> an obstacle to other uses of the CPU.
On 3/26/2014 6:45 AM, Andreas Schäfer wrote:
On 10:27 Wed 26 Mar 2014, Jeff Squyres (jsquyres) wrote:
Be aware of a few facts, though:
1. There is a fundamental difference between disabling
hyperthreading in the BIOS at power-on time and simply running one
MPI process per core. Disabling HT at power-on allocates more
hardware resources to the remaining HT that is left
On Mar 26, 2014, at 6:45 AM, Andreas Schäfer wrote:
>> 1. There is a fundamental difference between disabling
>> hyperthreading in the BIOS at power-on time and simply running one
>> MPI process per core. Disabling HT at power-on allocates more
>> hardware resources to the remaining HT that is left
On 10:27 Wed 26 Mar 2014, Jeff Squyres (jsquyres) wrote:
> Be aware of a few facts, though:
>
> 1. There is a fundamental difference between disabling
> hyperthreading in the BIOS at power-on time and simply running one
> MPI process per core. Disabling HT at power-on allocates more
> hardware resources to the remaining HT that is left
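[For the "one MPI process per core" side of this comparison: with Open
MPI's mpirun that is normally expressed via binding options, along the
lines of the sketch below (flags per Open MPI 1.7-era mpirun; the rank
count and executable name are placeholders):

    # One rank per physical core, each rank pinned to its core; the
    # second hardware thread of every core is simply left idle.
    mpirun -np 16 --map-by core --bind-to core ./my_app

This leaves HT enabled in the BIOS, so per-core resources stay
partitioned between the two hardware threads -- which is exactly the
difference Jeff is describing.]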
On Mar 26, 2014, at 1:31 AM, Andreas Schäfer wrote:
>> Even when "idle", MPI processes use all the CPU. I thought I remember
>> someone saying that they will be low priority, and so not pose much of
>> an obstacle to other uses of the CPU.
>
> well, if they're blocking in an MPI call, then they
Ross-
On 20:30 Tue 25 Mar 2014, Ross Boylan wrote:
> Even when "idle", MPI processes use all the CPU. I thought I remember
> someone saying that they will be low priority, and so not pose much of
> an obstacle to other uses of the CPU.
well, if they're blocking in an MPI call, then they'll be do
Even when "idle", MPI processes use all the CPU. I thought I remember
someone saying that they will be low priority, and so not pose much of
an obstacle to other uses of the CPU.
At any rate, my question is whether, if I have processes that spend most
of their time waiting to receive a message, I
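[Two mitigations usually come up for this question. Open MPI has an MCA
parameter, mpi_yield_when_idle, which makes a process blocked in MPI
yield the processor (Open MPI also turns it on by itself when it detects
oversubscription); and at the application level you can poll with a
sleep instead of sitting in a blocking receive. A minimal C sketch of
the second approach; the function name and the 1 ms poll interval are
illustrative, not from the thread:

    /* relaxed_recv: wait for a message without spinning at 100% CPU.
     * Polls with MPI_Iprobe and sleeps between polls, trading a bit
     * of latency for an almost idle CPU while waiting. */
    #include <mpi.h>
    #include <unistd.h>

    static void relaxed_recv(void *buf, int count, MPI_Datatype type,
                             int src, int tag, MPI_Comm comm)
    {
        int flag = 0;
        MPI_Status status;
        while (!flag) {
            MPI_Iprobe(src, tag, comm, &flag, &status);
            if (!flag)
                usleep(1000);   /* give up the CPU for ~1 ms per poll */
        }
        MPI_Recv(buf, count, type, src, tag, comm, &status);
    }

The MCA route would look like "mpirun --mca mpi_yield_when_idle 1 -np 8
./my_app"; note that yielding only deprioritises the spinning process,
it does not put it to sleep.]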