Hi
For nodes with GPUs I've configured a gpu.q queue and a forced complex.
The same execution hosts are included in a low-priority queue for
regular CPU jobs that don't need a GPU, so there is no forced complex there.
What I'm aiming for is that jobs submitted to gpu.q require
'-l gpu=[012]', but jobs submitted to the low-priority queue don't.
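For reference, a rough sketch of how such a setup might look, treating gpu as a consumable count; the complex name, the per-host capacity and the job script are assumptions, not taken from the original configuration:

  # define a forced, consumable "gpu" complex (edit the complex list)
  qconf -mc
  #name  shortcut  type  relop  requestable  consumable  default  urgency
  #gpu   gpu       INT   <=     FORCED       YES         0        0

  # attach a per-queue capacity to gpu.q only
  qconf -mattr queue complex_values gpu=2 gpu.q

  # a job bound for gpu.q then has to request the resource explicitly
  qsub -q gpu.q -l gpu=1 my_gpu_job.sh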
It works!
I found out that I needed to update the default_duration from INFINITY to
something big.
Woohoo!
Thanks anyway.
Mich
2014-07-16 11:12 GMT-04:00 Jesse Becker :
> On Wed, Jul 16, 2014 at 10:47:25AM -0400, François-Michel L'Heureux wrote:
>
>> I simply want to disable backfilling to have a pure FIFO queue.
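For anyone searching later, a minimal sketch of the change described above, done in the scheduler configuration; the concrete value (one year) is only an illustration:

  # edit the scheduler configuration
  qconf -msconf
  # and replace
  #   default_duration  INFINITY
  # with a large but finite value, e.g.
  #   default_duration  8760:00:00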
Another thing you could do, if you have access to the accounting file
or db from the nodes, is to call qacct -j from the
completion_job and capture the 'failed' and 'exit_status' fields.
This way you can tell if a job failed or succeeded even if the job
crashed and didn't produce any error output.
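A minimal sketch of that idea, assuming the completion job can read the pipeline's job ids from a file; the file names are made up for the example:

  #!/bin/sh
  # for each pipeline job, record the 'failed' and 'exit_status' fields from qacct
  while read jid; do
      qacct -j "$jid" | awk -v j="$jid" '/^failed|^exit_status/ {print j, $1, $2}'
  done < pipeline_jobids.txt > pipeline_status.log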
Hello,
is it also possible to use regular expressions with the -q option when submitting?
E.g. qsub -q xxx.q@node[1-4] ?
Juryk
-----Original Message-----
From: Reuti [mailto:re...@staff.uni-marburg.de]
Sent: Tuesday, 25 March 2014 09:40
To: Henrichs, Juryk
Cc: users@gridengine.org
Subject: Re: [grid
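As far as I know, -q takes a wildcard queue list rather than full regular expressions, so something along these lines should work; the host names are just examples, and the quoting stops the shell from expanding the pattern:

  # glob-style wildcard over the hosts of a queue
  qsub -q 'xxx.q@node*' job.sh

  # or an explicit comma-separated list of queue instances
  qsub -q xxx.q@node1,xxx.q@node2,xxx.q@node3,xxx.q@node4 job.sh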
On Wed, Jul 16, 2014 at 10:47:25AM -0400, François-Michel L'Heureux wrote:
I simply want to disable backfilling to have a pure FIFO queue. I found
that thread
http://comments.gmane.org/gmane.comp.clustering.gridengine.users/17148.
From it I understand that it can't be done, but it was 5 years ago.
Hi!
I simply want to disable backfilling to have a pure FIFO queue. I found
that thread
http://comments.gmane.org/gmane.comp.clustering.gridengine.users/17148.
From it I understand that it can't be done, but it was 5 years ago.
To have pure FIFO we currently set all nodes to slots=1 so we end up
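For completeness, the slots=1 workaround mentioned above can be applied to a whole queue in one line; the queue name is an assumption:

  # force one slot per host in the queue, so only one job runs per node at a time
  qconf -mattr queue slots 1 all.q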
I missed your second email to Txema, but I think a simple, if less
than elegant, method is to redirect the stdout and stderr of the pipeline
jobs to a directory. Each output file and error file can be
identified by its job id or task id. Then, in your clean-up code you
can sweep through each directory.
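A small sketch of that approach, with made-up directory and script names (the logs/ directory has to exist before submission):

  # write each task's stdout/stderr into a log directory, named after the job and task id
  qsub -t 1-100 -o logs/ -e logs/ pipeline_task.sh

  # later, in the clean-up step, sweep the directory for non-empty error files
  find logs/ -name 'pipeline_task.sh.e*' -size +0c -print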
You can submit a job that has a hold placed on it based on your
pipeline, whose only purpose is to email you when your pipeline
finishes.
qsub -m e -M -hold_jid $(qsub -terse pipeline_job) completion_job
or, with an array job, which submits job ids as <jobid>.1-<n>:<step>, you
can hold on the whole array job.
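A hedged example of the array-job variant; the script names and the mail address are placeholders:

  # -terse prints something like "1234.1-100:1" for an array job;
  # strip the task range and hold the completion job on the whole array
  jid=$(qsub -terse -t 1-100 pipeline_array.sh)
  qsub -m e -M you@example.com -hold_jid "${jid%%.*}" completion_job.sh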
Hi Txema,
My point is not to disable them but to get the notifications through a
different "transport" than email. I would like that information written to a
file or a socket.
p
On Wed, Jul 16, 2014 at 11:55 AM, Txema Heredia
wrote:
> Hi Paolo,
>
> you can disable mails on all but the very last job of the pipeline.
Hi Paolo,
you can disable mails on all but the very last job of the pipeline by
using -m b|e|a|s|n.
There have been discussions on the list about mechanisms to send all emails
to the local Linux user (not an external email address) and to send "packages of
mails" every day or so, but I can't remember any details.
Hi,
SGE can send job notifications via email using the -M command-line
option.
This is useful when you are submitting a few jobs, but not for a complex
pipeline crunching thousands of jobs.
I'm wondering if SGE can send these notifications by using other mechanisms,
e.g. writing to a file, a socket, HTTP, etc.
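One way this could be approximated today, sketched under the assumption that a queue epilog is acceptable and that the usual job variables ($JOB_ID, $JOB_NAME) are available in its environment; the log path is made up:

  #!/bin/sh
  # epilog sketch: append one line per finished job to a shared file instead of mailing
  echo "$(date '+%Y-%m-%d %H:%M:%S') job=$JOB_ID name=$JOB_NAME host=$HOSTNAME" >> /shared/job_notifications.log

  # it could be attached to a queue with something like:
  #   qconf -mattr queue epilog /path/to/notify_epilog.sh all.q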