I am working on SGE and I need precise definitions of some terms. These
are terms I often hear about but do not really understand.
These are
Head Node
Login Node
Compute Node
Submit Host
Cluster Nodes
Master Host
Any insight would be helpful
Thanks
Varun
___
Hello Everyone,
Until now I have been writing simple scripts to do my work. Now I want to
automate my tasks.
I have no idea how to submit array jobs.
So here is an explanation of what I want to do, and then you can help me
design the script.
In a directory I have these 4 fastq files
A-122
[bam_index_build2] fail to open the BAM file.
[E::main_mem] fail to open file `_P1_'.
Usage: samtools sort [-on] [-m ]
[main_samview] fail to open "sam" for reading.
open: No such file or directory
[bam_index_build2] fail to open the BAM file.
Sorry for troubling you with this.
Help!
Regards
Varun
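For anyone finding this thread later: the errors above usually mean the file
names handed to samtools never existed (note the literal `_P1_` placeholder
that was apparently never substituted). A minimal array-task sketch, with
assumed file and list names, that derives each task's input from SGE_TASK_ID
and fails fast if the input is missing:

```shell
# Hypothetical array-task skeleton: samples.txt lists one sample per line.
# SGE sets SGE_TASK_ID per task; we default it so the sketch runs standalone.
printf 'A-122\n' > samples.txt           # really: one line per fastq sample
touch A-122.fastq                        # dummy stand-in input for the sketch
SGE_TASK_ID=${SGE_TASK_ID:-1}
sample=$(sed -n "${SGE_TASK_ID}p" samples.txt)
fq="${sample}.fastq"
if [ ! -e "$fq" ]; then
    echo "missing input: $fq" >&2
    exit 1                               # fail fast instead of letting samtools error out
fi
echo "would process $fq"
```

Checking inputs up front gives a clear "missing input" message in the job's
output file instead of a cryptic samtools failure.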
Hi Everyone,
I have been using SGE for some time, but I still have lots of doubts. I
hope to find my solution here.
When I submit my job using qsub with arguments like -l mf=8G, does -l
mf=8G provide 8 GB of RAM for that job to run?
Does the job I submit run on one computer or several comp
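For later readers, my understanding (worth confirming on your own cluster):
mf/mem_free is only a scheduling hint; it asks for a host that currently
reports 8G free, but it neither reserves nor caps memory. A hard cap comes
from h_vmem. Also, a plain qsub job runs on a single host unless you request
a parallel environment with -pe. A dry-run sketch (echo prints the command
instead of submitting; myjob.sh is a placeholder name):

```shell
# mem_free (mf): pick a host with >= 8G currently free (no reservation).
# h_vmem: kill the job if it exceeds 8G (hard limit).
# Dry run: echo shows the command; drop echo to actually submit.
echo qsub -l mem_free=8G -l h_vmem=8G -cwd ./myjob.sh
```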
Hi,
I am working on SGE. I want to find out the available nodes on my cluster
and the memory associated with each node, so that I can submit my jobs
accordingly.
What command should I use?
Regards
Varun
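For the archives: `qhost` is the usual answer here; it prints one line per
execution host with NCPU, MEMTOT and MEMUSE, so free memory per node is
roughly MEMTOT minus MEMUSE. A small parsing sketch on sample output (the
host names and figures below are invented; pipe real `qhost` output instead):

```shell
# Sample qhost-style output (invented for the sketch); in real use:
#   qhost | awk 'NR>2 { print $1, "total="$5, "used="$6 }'
printf '%s\n' \
  'HOSTNAME       ARCH        NCPU  LOAD  MEMTOT  MEMUSE' \
  'global         -              -     -       -       -' \
  'compute-0-18   lx26-amd64    16  0.50   63.0G   10.2G' \
  'compute-0-19   lx26-amd64    16 15.90   63.0G   60.1G' |
awk 'NR>2 { print $1, "total="$5, "used="$6 }'
```

The first two lines of qhost output are a header and the "global" pseudo-host,
hence the NR>2 filter.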
___
users mailing list
users@gridengine.org
Hi Everyone,
I have started to understand the basics of SGE, but I am still pretty new
to cluster computing.
When I qsub an application, and use qstat to check the status of my
submitted job I see that my job has been submitted to a compute node
something like this *biology@compute-0-18.loc
Thanks for the reply, I am looking at qhost command now.
Also, would a compute node like *biology@compute-0-18.local* be one of the
several computers in the cluster?
Regards
On Tue, May 6, 2014 at 11:48 AM, Jesse Becker wrote:
> On Tue, May 06, 2014 at 11:34:50AM -0400, VG wrote:
>
Hi,
I am working on SGE.
When I ssh to the cluster, it connects me to the head node. Then I use
qlogin to get onto one of the compute nodes, or let's say one of the host
machines.
To my understanding, when I qlogin, I can use all the resources of the
compute node that is provided to me
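In case it helps later readers: my understanding (confirm locally) is that a
qlogin session is capped by the resources you request, not the whole node;
other users' jobs may share the same host. A dry-run sketch, where the 8G
figure is an assumed example rather than a site default:

```shell
# Dry run: echo prints the command; drop echo for a real interactive session.
# 8G here is an assumed figure, not a site default.
echo qlogin -l h_vmem=8G
```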
Hi everyone,
I am trying to make a job array script. I have 2 files in my working
directory namely abc.fastq and def.fastq
I want to run one command on these 2 files (later on I will include more
.fastq files) on different compute nodes. So I made this array script, but
it only produces output for
Reuti wrote:
> Hi,
>
> Am 02.09.2015 um 19:17 schrieb VG:
>
> > HI everyone,
> >
> > I am trying to make a job array script. I have 2 files in my working
> directory namely abc.fastq and def.fastq
> >
> > I want to run one command on these 2 files(later o
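For the archives, a minimal array-script sketch for the same two-file setup
(file names from the thread; the line-number mapping is an assumption, not
the OP's actual script):

```shell
#!/bin/bash
#$ -cwd -j y
# Each array task picks its own file by line number; SGE sets SGE_TASK_ID,
# we default it so the sketch also runs outside the scheduler.
SGE_TASK_ID=${SGE_TASK_ID:-1}
FASTQ=$(printf 'abc.fastq\ndef.fastq\n' | sed -n "${SGE_TASK_ID}p")
echo "task $SGE_TASK_ID processing $FASTQ"
# Submit with: qsub -t 1-2 ./this_script.sh  (later: -t 1-N for N files)
```

Task 1 maps to abc.fastq and task 2 to def.fastq; adding more files only
means extending the list and the -t range.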
Hi Everyone,
I submitted 200 array jobs on the cluster using -t option. My command line
looks like this:
qsub -t 1-200 -cwd -j y -b y -N jobs -l h_vmem=30G ./script.sh
After this, the jobs numbered 3, 10, 45, 56, 98, and 134 failed to finish.
What can I do to rerun only the failed jobs now? Can I use the -t option i
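If it is useful to anyone searching later: as far as I know, -t accepts only
a single range (n[-m[:s]]), not a list of task ids, so one workaround is a
small loop that resubmits each failed id as its own one-task array job
(flags copied from the command above; echo makes it a dry run):

```shell
# SGE's -t takes one range, not a list, so resubmit each failed task id
# as its own single-task array job (keeps SGE_TASK_ID consistent).
# Dry run via echo; drop echo to actually submit.
for t in 3 10 45 56 98 134; do
    echo qsub -t "$t-$t" -cwd -j y -b y -N jobs -l h_vmem=30G ./script.sh
done
```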
Hi,
I am trying to run a command on the terminal and also submit it to the
cluster, but I am getting different results.
When I type this on the terminal:
for i in *_1.fastq.gz; do echo $i >> t.txt; zcat $i | grep
"GCTGGCAGAAGGTAACATG" >> t.txt ; echo >> t.txt ; done
I get the output like
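One common cause of terminal-vs-cluster differences with -b y is that the
glob and the redirections get interpreted on the submitting side rather than
inside the job. A hedged sketch that sidesteps this by putting the loop in a
script file and submitting the script (the script name is assumed):

```shell
# Write the loop into a job script so *_1.fastq.gz and >> t.txt are
# evaluated on the compute node, then submit the script itself.
cat > grep_fastq.sh <<'EOF'
#!/bin/bash
#$ -cwd -j y
for i in *_1.fastq.gz; do
    echo "$i" >> t.txt
    zcat "$i" | grep "GCTGGCAGAAGGTAACATG" >> t.txt
    echo >> t.txt
done
EOF
# Submit with: qsub grep_fastq.sh   (instead of qsub -b y "for i in ...")
```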
> On Mon, Jul 9, 2018 at 1:23 PM, VG wrote:
>
>> Hi,
>> I am trying to run a command on the terminal and also submit it to the
>> cluster but I am getting different results.
>>
>> When I type on the terminal this :
>>
>> for i
I have a scripting question regarding submitting jobs to the cluster.
There is a limit of 1000 jobs per user.
Let's say I have 1200 tar.gz files
I tried to submit all the jobs together, but after 1000 jobs it gave me an
error message saying the per-user limit is 1000, and after that it did n
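For later readers: one array job with 1200 tasks is typically counted as a
single job against a per-user job limit (site policy varies, so worth
confirming with your admins). A sketch with assumed file and script names,
using a few dummy archive names so it runs standalone:

```shell
# Build the task list once on the submit host:
printf 'a.tar.gz\nb.tar.gz\nc.tar.gz\n' > tarballs.txt  # really: ls *.tar.gz > tarballs.txt
# Inside the (assumed) job script untar_task.sh, each task picks its line:
SGE_TASK_ID=${SGE_TASK_ID:-2}                           # SGE sets this per task
archive=$(sed -n "${SGE_TASK_ID}p" tarballs.txt)
echo "task $SGE_TASK_ID would run: tar -xzf $archive"
# Submit with: qsub -t 1-$(wc -l < tarballs.txt) -cwd ./untar_task.sh
```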
or
>> per-project basis. I don't know that you can limit on jobs in RQSs but you
>> certainly can on slots; the sge_resource_quota(5) man page has some
>> examples.
>>
>> On Thu, Jun 13, 2019 at 12:32:51PM -0400, VG wrote:
>> > I have a scripting
ar -xzf
>> $i"
>> done
>>
>> On Thu, Jun 13, 2019 at 12:39 PM Skylar Thompson wrote:
>>
>>> We've used resource quota sets to accomplish that on a per-queue or
>>> per-project basis. I don't know that you can limit on jobs in RQSs but
>&g
Hi,
Where should I put my qsub command?
On Thu, Jun 13, 2019 at 12:54 PM VG wrote:
> Hi Daniel,
> Will give it a try. If I am not mistaken, there should be another *done* in
> the code snippet.
>
> Regards
> Varun
>
> On Thu, Jun 13, 2019 at 12:45 PM Daniel Povey
t all the tar.gz in one folder and run the array script
as you told me; otherwise, is there a workaround where everything runs and
also remains in the respective directories?
Thanks for your help.
Regards
Varun
On Thu, Jun 13, 2019 at 1:11 PM Joshua Baker-LePain wrote:
> On Thu, 13 Jun 2019 a
VARUN
On Thu, Jun 13, 2019 at 1:46 PM Feng Zhang wrote:
> You can try to write the script to first scan all the files to get their
> full path names and then run the Array jobs.
>
>
> On Jun 13, 2019, at 1:20 PM, VG wrote:
>
> HI Joshua,
> I like the array job option be
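Sketching Feng's suggestion with assumed directory names: scan once for full
paths, then have each task cd into its own archive's directory before
extracting, so the output stays next to the source file:

```shell
# Build a list of full paths (done once, on the submit host).
mkdir -p proj/sub1 proj/sub2                     # assumed layout for the sketch
touch proj/sub1/x.tar.gz proj/sub2/y.tar.gz      # dummy archives
find "$PWD/proj" -name '*.tar.gz' | sort > tarlist.txt
# Inside the (assumed) job script, each task extracts in place:
SGE_TASK_ID=${SGE_TASK_ID:-1}
f=$(sed -n "${SGE_TASK_ID}p" tarlist.txt)
cd "$(dirname "$f")" && echo "would run: tar -xzf $(basename "$f")"
# Submit with: qsub -t 1-$(wc -l < tarlist.txt) ./untar_task.sh
```

Because each task changes into its archive's own directory, the extracted
files remain in the respective directories rather than piling up in one folder.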