Hi,

> On 19.05.2017 at 18:10, Marco Donauer <mdona...@univa.com> wrote:
> 
> Hi Juan,
> 
> sure, this is right. We always recommend keeping the number of queues
> low; where possible, problems which can be resolved with another
> solution shouldn't be resolved by setting up unnecessary queues.

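(Agreed, and for GPU access specifically the usual "other solution" would 
presumably be a consumable complex rather than an extra queue. A rough 
sketch only, with a made-up complex name "gpu", a hypothetical host 
"node42" and two cards per host:

  # define a consumable complex (qconf -mc), one additional line:
  #name  shortcut  type  relop  requestable  consumable  default  urgency
  gpu    gpu       INT   <=     YES          YES         0        0

  # attach the capacity to each GPU host (qconf -me node42):
  complex_values   gpu=2

  # and request one card at submission time:
  qsub -l gpu=1 job.sh

As far as I know, Univa additionally offers an RSMAP complex type which 
hands out specific device IDs instead of only counting them.)
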
Just out of curiosity: was the handling of scratch directories enhanced in 
some way in Univa GE? As I mentioned, I use different queues only to have 
different $TMPDIR locations on one and the same node (depending on the job 
requirements: traditional HD, SSD or RAM disk). Otherwise I could live with 
a single queue.

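For reference, the knob behind this is the tmpdir attribute of each cluster 
queue (queue_conf(5)); a minimal sketch with made-up queue names and paths:

  qconf -sq ssd.q | grep tmpdir
  tmpdir    /scratch/ssd

  qconf -sq ram.q | grep tmpdir
  tmpdir    /dev/shm

  # a job picks the flavour by requesting the matching queue:
  qsub -q ssd.q job.sh

The execd then creates a job-private directory below that path, exports it 
as $TMPDIR and removes it again when the job finishes.
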
https://arc.liv.ac.uk/trac/SGE/ticket/1290

-- Reuti


> Each queue (and this is not just an issue with Univa Grid Engine) 
> increases the computational load for the scheduler. In small clusters 
> this may not be a problem, but in huge compute clusters with many 
> hundreds of thousands of jobs, each queue has a negative impact. For 
> smaller clusters with just a few tens of hosts or tens of thousands of 
> jobs it is not really visible.
> Another point is that a low number of queues means less management and 
> administration effort.
> The best case is to have one cluster queue.
> 
> Regards,
> Marco
> 
> 
> On Friday 19 May 2017 15:52:36 juanesteban.jime...@mdc-berlin.de wrote:
>> Me, nothing. My colleagues attended your training and were told not to
>> create individual queues for solving issues like GPU node access. They
>> are using Univa on their cluster, I am not.
>> 
>> Kind regards,
>> Juan Jimenez
>> System Administrator, HPC
>> MDC Berlin / IT-Dept.
>> Tel.: +49 30 9406 2800
>> 
>> 
>> ________________________________________
>> From: Marco Donauer [mdona...@univa.com]
>> Sent: Friday, May 19, 2017 17:49
>> To: Jimenez, Juan Esteban; Reuti
>> Cc: SGE-discuss@liv.ac.uk
>> Subject: Re: [SGE-discuss] GPUs as a resource
>> 
>> Hi Juan,
>> 
>> what have you been told by Univa regarding GPUs?
>> 
>> Regards
>> Marco
>> 
>> 
>> On 19 May 2017 at 17:12:42, "juanesteban.jime...@mdc-berlin.de"
>> 
>> <juanesteban.jime...@mdc-berlin.de> wrote:
>>> I am just telling you what my colleagues say they were told by Univa.
>>> 
>>> Kind regards,
>>> Juan Jimenez
>>> System Administrator, HPC
>>> MDC Berlin / IT-Dept.
>>> Tel.: +49 30 9406 2800
>>> 
>>> 
>>> ________________________________________
>>> From: Reuti [re...@staff.uni-marburg.de]
>>> Sent: Friday, May 19, 2017 17:01
>>> To: Jimenez, Juan Esteban
>>> Cc: William Hay; SGE-discuss@liv.ac.uk
>>> Subject: Re: [SGE-discuss] GPUs as a resource
>>> 
>>>> On 19.05.2017 at 16:35, juanesteban.jime...@mdc-berlin.de wrote:
>>>>> You are being told by whom or what? If it is a what, then the exact
>>>>> message would be helpful.
>>>> 
>>>> By my colleagues, who are running a second cluster using Univa Grid
>>>> Engine. This was a warning from Univa not to do it that way because
>>>> it increases the qmaster workload.
>>> 
>>> A second queue increases the workload? I don't think that this is
>>> noticeable. We have several queues for different $TMPDIR settings and
>>> it hasn't been a problem up to now. And I also don't see a high load
>>> on this machine because of this setting.
>>> 
>>> -- Reuti
>>> 
>>>> Juan
> 
> -- 
> Marco Donauer | Senior Software Engineer & Support Manager
> Univa Corporation Regensburg/Germany
> E: mdona...@univa.com | D: +1.512.782.4453 | M: +49.151.466.396.92
> Toll-Free +1.800.370.5320 ext.1015
> W: Univa.com | S: support.univa.com | T: twitter.com/Grid_Engine
> 

_______________________________________________
SGE-discuss mailing list
SGE-discuss@liv.ac.uk
https://arc.liv.ac.uk/mailman/listinfo/sge-discuss
