On Sat, 2019-06-29 at 16:27 +0000, Renfro, Michael wrote:
> Is this output file being written to a central file server that can be 
> accessed from your submit host? If so, start another ssh session from your 
> local computer to the submit host.
> 
> Is the output file being written to a location only accessible from the 
> compute node running your job? You might be able to ssh from the submit host 
> to the compute node (or maybe from your local computer to the compute node).
> 
> > On Jun 29, 2019, at 10:07 AM, Valerio Bellizzomi <vale...@selnet.org> wrote:
> > 
> >> On Sat, 2019-06-29 at 07:57 -0700, Brian Andrus wrote:
> >> I believe you are referring to an interactive terminal window.
> >> 
> >> You can do that with srun --pty bash
> >> 
> >> Windows themselves are not handled by slurm at all. To have multiple
> >> windows is a function of your workstation. You would need multiple
> >> connections to the cluster (eg: multiple ssh windows with multiple ssh
> >> connections)
> >> 
> >> In each ssh session, execute 'srun --pty bash' and you will have an
> >> interactive session on the cluster. You would, of course, need to add any
> >> other options for partition, time limit, etc.
> >> 
> >> 
> >> That being said, this is usually NOT the way to approach a solution via
> >> a cluster. Clusters are meant to be something that does all the work for
> >> you while you are away (hence the batch concept). You likely want to
> >> look at getting your code to run without human interference and send it
> >> off to do so.
> > 
> > Sorry, I am only trying to look at the output from my program; it is a
> > batch computation program that outputs some text lines.
> > 
> > srun --pty does not open another window.
> > 
> 

No, I am using the --unbuffered option to watch the output in a terminal
window.
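For reference, one common way to follow a job's output from the submit host is to tail the job's output file. The sketch below simulates that pattern with a plain file; the filename `slurm-1234.out` is illustrative (by default sbatch writes to slurm-&lt;jobid&gt;.out), and the srun/tail lines in the comments assume access to a cluster.

```shell
# Minimal sketch, assuming the output file is on a filesystem visible
# from the submit host. On a real cluster you would run something like:
#   srun --unbuffered --pty bash     # interactive shell on a compute node
#   tail -f slurm-<jobid>.out        # follow a batch job's output live
#
# Simulate a batch job appending lines to its output file:
printf 'step 1 done\nstep 2 done\n' > slurm-1234.out

# tail shows the most recent lines; with -f it would keep the terminal
# open and print new lines as the job writes them:
tail -n 1 slurm-1234.out
```

With -f, tail keeps waiting for new data, so a second ssh session to the submit host is enough to watch the job's progress without an interactive allocation.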


