On 07/30/2014 06:33 PM, Reuti wrote:
On 17.07.2014 at 16:22, Pierre Lindenbaum wrote:
OK, it seems to be a known bug: I was redirected to that post today:
https://arc.liv.ac.uk/trac/SGE/ticket/1420?cversion=0&cnum_hist=1
It's just exported. But did you set it to a script or the like beforehand
Hi Reuti,
That's interesting, but it works without any hack:
{
   name         default_per_user
   enabled      true
   description  "Each user is entitled to resources equivalent to three nodes"
   limit        users {*} queues {all.q} to slots=192,h_vmem=1536G
}
Then it consumes from user
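The effect of a resource quota set like the one above can be inspected from the command line; assuming the name default_per_user from the snippet, roughly:

```shell
# print the quota set as stored by qmaster
qconf -srqs default_per_user

# show current consumption against all quota rules, for all users
qquota -u '*'
```

qquota lists each matching rule together with the used and limit values, which is usually enough to see whether the slots or the h_vmem limit is the one being hit.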
On 17.07.2014 at 16:22, Pierre Lindenbaum wrote:
> OK, it seems to be a known bug: I was redirected to that post today:
>
> https://arc.liv.ac.uk/trac/SGE/ticket/1420?cversion=0&cnum_hist=1
>
>
> I'm now trying to set the variable QRSH_WRAPPER
> What does it look like? Is there any example for
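The qrsh(1) man page documents QRSH_WRAPPER as a command that is executed instead of the user's login shell when qrsh runs a command, with the remote command line passed to it as arguments. A minimal sketch of such a wrapper (the path and the logging are illustrative, not from the thread):

```shell
#!/bin/sh
# hypothetical /opt/sge/util/qrsh_wrapper.sh
# qrsh invokes this in place of the login shell; "$*" is the remote command line.
logger -t qrsh_wrapper "running: $*"
exec /bin/sh -c "$*"
```

It then has to be present in the job's environment, e.g. exported via qsub -v QRSH_WRAPPER=/opt/sge/util/qrsh_wrapper.sh or set in a sge_request file.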
Hi,
I did this successfully with 6.2u5p2 a few years ago. The crucial points were:
- the build infrastructure for Solaris was apparently geared towards the
Solaris Studio compilers. I needed Solaris Studio cc and CC in my PATH.
- the Berkeley DB (if needed) must be built with CC="cc -m64" an
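For the Berkeley DB step, a 64-bit build along the lines described (the version directory and install prefix here are placeholders, not from the message) would look roughly like:

```shell
# Berkeley DB builds out of build_unix via the configure script in dist/
cd db-x.y/build_unix
env CC="cc -m64" ../dist/configure --prefix=/opt/bdb-64
make && make install
```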
Hi,
On 08.07.2014 at 12:43, Tina Friedrich wrote:
> ...and turning schedd_job_info on for a bit also didn't really help; it gives
> me "cannot run in PE "smp" because it only offers 0 slots"; however, it
> doesn't really tell me why it thinks there aren't any free slots (I think
> there are).
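When schedd_job_info only reports "0 slots" for a PE, the usual next step is to compare the PE definition with the live slot counts; a diagnostic sketch (the job id is a placeholder):

```shell
qconf -sp smp        # PE definition: total "slots" and allocation_rule
qstat -g c           # per-queue cluster summary: used vs. available slots
qstat -j <job_id>    # scheduling info for the waiting job (needs schedd_job_info)
```

A frequent cause is the "slots" total in the PE itself being exhausted even though the queues still have free slots, which is exactly the case the "offers 0 slots" message does not distinguish.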
Hi,
at the moment I'm trying
GE2011.11p1 / 2012-07-09
SGE6.2u5p2 / 2011-04-05
eg.
# /Installer/OpenGridScheduler/SGE6.2u5p2/source # ./aimk -no-secure
-spool-classic -gcc -no-java -no-jni -no-remote -debug
Building in directory: /Installer/OpenGridScheduler/SGE6.2u5p2/source
making in SOLAR
On 10.07.2014 at 19:51, Bob Tupper wrote:
> I have a requirement that I'm not sure how to fulfill and was hoping to get
> some suggestions.
>
> each job requires
> 8 cores on a single machine, 1 license and 128G of RAM (most of my
> execution hosts are 16 cores 256G RAM)
> 4 or more si
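Assuming a PE named "smp" with a pe_slots allocation rule, a consumable complex called "license", and h_vmem requestable per slot (all of these names are hypothetical, since the message is truncated), such a job could be submitted roughly as:

```shell
# 8 slots on one host; h_vmem is counted per slot, so 8 x 16G = 128G total
qsub -pe smp 8 -l license=1 -l h_vmem=16G job.sh
```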
Hi,
On 30.07.2014 at 14:13, Kraus, Niki wrote:
> at the moment I'm deeply convinced that I'm using the wrong compiler version,
> GNU gcc 4.9.0, it's simply too new.
>
> At http://mirror.opencsw.org/opencsw/allpkgs/?bcsi_scan_b0797d08ba117e08=0
>
> there is a list of archived CSW GNU compilers, a
Hi,
On 30.06.2014 at 08:55, Derrick Lin wrote:
> A typical node on our cluster has 64 cores and 512GB memory. So it's about
> 8GB/core. Occasionally, we have some jobs that utilizes only 1 core but
> 400-500GB of memory, that annoys lots of users.
This is a general question, how you judge a n
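One common approach to the problem described above (a sketch, not necessarily what the thread settled on) is to make h_vmem a consumable complex with a default near the 8 GB/core ratio, and to publish each host's physical memory as its capacity, so that a 400-500G job implicitly reserves the memory that would otherwise starve the remaining cores:

```shell
# 1) qconf -mc: mark h_vmem consumable with a per-slot default, e.g. the line
#    h_vmem  h_vmem  MEMORY  <=  YES  YES  8G  0
qconf -mc

# 2) per execution host, publish the physical memory as consumable capacity
#    (node01 is a hypothetical host name):
qconf -me node01     # set: complex_values h_vmem=512G
```

With this in place, a job requesting -l h_vmem=500G on a 512G host leaves essentially no h_vmem for other jobs, regardless of how many slots remain free.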
at the moment I'm deeply convinced that I'm using the wrong compiler version,
GNU gcc 4.9.0, it's simply too new.
At http://mirror.opencsw.org/opencsw/allpkgs/?bcsi_scan_b0797d08ba117e08=0
there is a list of archived CSW GNU compilers, any idea which one to take?
gcc4core-4.6.1,REV=2011.09.01-SunO
Hi,
On 03.07.2014 at 16:46, Dan Hyatt wrote:
> I have been trying to configure SoGe with qmon, with varying levels of
> success. I am somewhat ambivalent about whether to configure it via the
> command line or qmon.
> I have been reading the docs online, with some success.
>
> The two that are currently es