Hi Paul,

Thank you for the explanations. Actually this was not the main point of the question asked. I think we can close the discussions. The main point was: why does a job run more efficiently using sbatch than salloc?

Thank you all for the contributions.

- Mike

Sent from my iPhone

On Jul 5, 2023,
Mike,
I think your definitions are probably in the minority on this list.
To be clear, I am *not* saying you (or SGE) are wrong, just that the folk
here use different terms for what you are asking for.
I think of it like dialects of English where the same food might be a
"cookie" or a "biscuit" de
Thank you Loris, for the further feedback.
“Reasonable” for SGE is within a few minutes; it would be nice if it could be
adjusted.
Still, interactive means the user has almost immediate access to the system, not
queued.
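(Side note, in case it is useful: if I remember the SGE options correctly, the fail-fast behaviour of the interactive commands can be switched off per job rather than cluster-wide; the resource request below is only an example:

    # default: start the interactive session immediately or fail
    qrsh -l h_rt=01:00:00
    # queue the interactive job instead of failing when no slots are free
    qrsh -now no -l h_rt=01:00:00
)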
Sent from my iPhone
> On Jul 5, 2023, at 9:43 AM, Loris Bennett wrote:
>
Mike Mikailov writes:
> Thank you Loris, for the further clarifications. The only question is
> who will wait forever in interactive mode? And how practical is it?
>
> Interactive mode is what its name implies - interactive, not queueing.
To me, "interactive" is the alternative to "batch" - queu
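To illustrate with Slurm (the resource values below are arbitrary): by default salloc waits in the queue just like sbatch does, and the fail-fast behaviour only appears if you ask for it:

    # waits in the queue until the allocation can be granted
    salloc --nodes=1 --time=01:00:00

    # gives up if the allocation cannot be granted within 60 seconds
    salloc --nodes=1 --time=01:00:00 --immediate=60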
Thank you Loris, for the further clarifications. The only question is who will
wait forever in interactive mode? And how practical is it?
Interactive mode is what its name implies - interactive, not queueing.
It would make more sense if the default setting for deadline would be set to a
reasona
Mike Mikailov writes:
> About the last point. In the case of sbatch the jobs wait in the queue as
> long as it takes until the resources are available. In the case of
> interactive jobs
> (at least using Son of Grid Engine) they fail after a short time if no
> resources are available.
But you we
Nodes for salloc could also be allowed to be oversubscribed or overloaded.

There are a number of tools that can be used to study task performance bottlenecks on HPC clusters. Some of these tools include:

SLURM Profiler: The SLURM Profiler is a tool that can be used to collect performance data for SL
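As a concrete starting point, and assuming accounting is enabled on the cluster, Slurm's own sacct (plus the contrib seff script, where it is installed) already gives a useful per-job summary; the job IDs and the partition name below are placeholders:

    # compare the batch and the interactive run of the same task
    sacct -j <sbatch_jobid>,<salloc_jobid> \
          --format=JobID,Elapsed,TotalCPU,MaxRSS,NodeList
    seff <sbatch_jobid>
    seff <salloc_jobid>

    # check whether the partition allows oversubscription
    scontrol show partition <partition_name> | grep -i oversubscribe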
About the last point. In the case of sbatch the jobs wait in the queue as long
as it takes until the resources are available. In the case of interactive jobs
(at least using Son of Grid Engine) they fail after a short time if no
resources are available.
Sent from my iPhone
> On Jul 4, 2023, at 9:
Performance depends only on the worker node capabilities along with IO. If the worker nodes are the same, then maybe the nodes under salloc use a network drive (main storage) for IO, which may slow down the tasks.

There are many tools available to localize the bottleneck in the task performance. You ma
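One quick way to check both points (a rough sketch; the job ID and the path are placeholders):

    # which node(s) did each job land on, and how long did it take?
    sacct -j <jobid> --format=JobID,NodeList,Elapsed,TotalCPU

    # from inside the allocation: is the working directory on local disk
    # or on a network mount?
    df -hT .
    findmnt -T .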
Mike Mikailov writes:
> They should not affect the task performance.
>
> Maybe the cluster configuration allocated slow machines for salloc.
>
> salloc and sbatch have different purposes:
>
> * salloc is used to allocate a set of resources to a job. Once the resources
> have been allocated, th
Thank you for your answer! And if the Slurm workers are identical, what can be
the reason? Can interactive mode affect the performance? I have submitted
the task with "srun {{ name_of_task }} --pty bash", and the
result is the same as for launching with salloc. Thanks in advance!
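As a side note on the command form ({{ name_of_task }} below is just the placeholder from your message): the usual patterns are either to run the task directly under srun, or to request an interactive shell first and run the task inside it:

    # run the task directly as a job step
    srun --cpus-per-task=4 {{ name_of_task }}

    # or get an interactive shell on an allocated node, then run the task there
    srun --cpus-per-task=4 --pty bash
    {{ name_of_task }}

Options placed after the task name are normally passed to the task itself rather than interpreted by srun, so "srun {{ name_of_task }} --pty bash" may not do what was intended.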
Tue, 4 Jul
They should not affect the task performance.
Maybe the cluster configuration allocated slow machines for salloc.
salloc and sbatch have different purposes:
salloc is used to allocate a set of resources to a job. Once the resources have
been allocated, the user can run a command or script on t
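To make the contrast concrete, a minimal sketch (job.sh, the resource values and ./my_task are only illustrative):

    # job.sh
    #!/bin/bash
    #SBATCH --nodes=1
    #SBATCH --cpus-per-task=4
    #SBATCH --time=01:00:00
    srun ./my_task

    # batch: submit the script; it waits in the queue and runs unattended
    sbatch job.sh

    # interactive: get an allocation first, then work inside it
    salloc --nodes=1 --cpus-per-task=4 --time=01:00:00
    srun ./my_task   # runs inside the allocation
    exit             # releases the allocation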
Hello! I have a question about the way of launching tasks in Slurm. I use the
service in the cloud and submit an application with sbatch or salloc. As far as
I understand, the commands are similar: they allocate resources for
users' tasks and run them. However, I have received different
results in
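(If it helps to narrow this down, one low-effort check, with ./my_task and the resource values as placeholders, is to submit the identical command through both paths and compare the resulting job IDs afterwards with sacct or seff:

    # identical task, identical resources, two submission paths
    sbatch --wrap='srun ./my_task' --nodes=1 --cpus-per-task=4 --time=01:00:00
    salloc --nodes=1 --cpus-per-task=4 --time=01:00:00 srun ./my_task

If the elapsed times, CPU times or node lists differ, that usually points at the cause.)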