On 22.09.2014 at 16:24, Peter van Heusden wrote:

> On 22/09/2014 15:50, Reuti wrote:
>> Hi,
>> 
>> On 22.09.2014 at 15:06, Peter van Heusden wrote:
>> 
>>> I'm running SGE 6.2u5 on Ubuntu 12.04 (64 bit). One of my compute nodes
>>> has 512 GB of RAM, but when I specify this (with e.g. h_vmem=500G in the
>>> complex_values setting for the exec host) and then submit a job that
>>> requires a lot of RAM (e.g. -l h_vmem=100G), I get this response:
>>> 
>>> (-l h_vmem=100G) cannot run at host "bigmemhost.example.com" because it
>>> offers only hc:h_vmem=77309411328.000000
>> It's 72G expressed in bytes - is this the remaining memory shown by
>> `qhost -F h_vmem`? If so, the value is correct, just oddly formatted?
> Nope, this is what I see:
> 
> HOSTNAME                ARCH        NCPU  LOAD  MEMTOT  MEMUSE  SWAPTO  SWAPUS
> -------------------------------------------------------------------------------
> bigmemhost.example.com  lx26-amd64    16  7.01  503.9G   52.9G    5.2G   12.9M

Is "h_vmem" defined as a consumable too?
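If it isn't, the per-host `complex_values` entry won't be decremented as jobs
are scheduled. A quick way to check is to list the complex definitions (a
sketch; the column layout shown in the comment is the standard SGE complex
format, quoted from memory):

```shell
# Show the complex definitions and isolate the h_vmem row; the
# "consumable" column should read YES for per-host accounting to work.
qconf -sc | grep '^h_vmem'

# If it is not consumable, edit the complex list (opens $EDITOR) and
# set the h_vmem line along these lines:
#   name    shortcut  type    relop  requestable  consumable  default  urgency
#   h_vmem  h_vmem    MEMORY  <=     YES          YES         0        0
qconf -mc
```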

The above is the output of `qhost -F h_vmem`?
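For reference, the byte count in the error message does correspond to 72G
(where SGE's "G" means 2**30 bytes), which a quick conversion confirms:

```python
# Convert the scheduler's raw byte value to gibibytes.
reported = 77309411328  # hc:h_vmem from the error message
print(reported / 2**30)  # → 72.0
```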

-- Reuti


>> --  Reuti
>> 
>> 
>>> If I set the h_vmem to 99G or below I get a meaningful message, e.g.
>>> 
>>> (-l h_vmem=100G) cannot run at host "smallmemhost.example.com" because
>>> it offers only hc:h_vmem=92.000G
>>> 
>>> This definitely seems to be a bug - is there any way around this?
>>> 
>>> Thanks!
>>> Peter
>>> 
>>> _______________________________________________
>>> users mailing list
>>> users@gridengine.org
>>> https://gridengine.org/mailman/listinfo/users
> 

