For /proc/self to show the job's cgroups, you need to run the check from inside the job itself - e.g. start an interactive job under Slurm and look at /proc/self/cgroup from there.

(I'm speaking from a PBSPro viewpoint here.
What? What?  Maud - release the dogs! Fetch my shotgun! Get off my property
Sir!)
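A quick way to verify confinement from inside a job step is to parse /proc/self/cgroup and list which controllers sit under the slurm hierarchy. This is only a sketch, assuming the cgroup v1 layout quoted below (paths of the form /slurm/uid_<uid>/job_<jobid>/...):

```python
# Report which cgroup controllers confine this process under Slurm.
# Assumes cgroup v1 lines of the form: <id>:<controllers>:<path>

def slurm_confined_controllers(cgroup_text):
    """Return the controllers whose cgroup path is under /slurm/."""
    confined = []
    for line in cgroup_text.strip().splitlines():
        # A path may itself contain ':' in theory, so split at most twice.
        _, controllers, path = line.split(":", 2)
        if path.startswith("/slurm/"):
            confined.extend(controllers.split(","))
    return confined

if __name__ == "__main__":
    with open("/proc/self/cgroup") as f:
        print(slurm_confined_controllers(f.read()))
```

Run inside an srun/sbatch step it should print something like ['cpuset', 'freezer'] for the config quoted below; run from a plain SSH shell it prints an empty list.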

On 15 August 2017 at 05:15, Lachlan Musicman <[email protected]> wrote:

> On 15 August 2017 at 11:38, Christopher Samuel <[email protected]>
> wrote:
>
>> On 15/08/17 09:41, Lachlan Musicman wrote:
>>
>> > I guess I'm not 100% sure what I'm looking for, but I do see that there
>> > is a
>> >
>> > 1:name=systemd:/user.slice/user-0.slice/session-373.scope
>> >
>> > in /proc/self/cgroup
>>
>> Something is wrong in your config then. It should look something like:
>>
>> 4:cpuacct:/slurm/uid_3959/job_6779703/step_9/task_1
>> 3:memory:/slurm/uid_3959/job_6779703/step_9/task_1
>> 2:cpuset:/slurm/uid_3959/job_6779703/step_9
>> 1:freezer:/slurm/uid_3959/job_6779703/step_9
>>
>> for /proc/${PID_OF_PROC}/cgroup
>>
>> I notice you have /proc/self - that will be the shell you are running in
>> for your SSH session and not the job!
>>
>
> Oh, that explains more.
>
> Now it looks like:
>
> 2:hugetlb:/
> 11:rdma:/
> 10:perf_event:/
> 9:cpu,cpuacct:/
> 8:cpuset:/slurm/uid_1506/job_1998/step_batch
> 7:pids:/
> 6:freezer:/slurm/uid_1506/job_1998/step_batch
> 5:net_cls,net_prio:/
> 4:devices:/system.slice
> 3:blkio:/
> 2:memory:/
> 1:name=systemd:/system.slice/slurmd.service
>
> I seem to have a lot of guff in there that I don't need?
>
> L.
>
>
> ------
> "The antidote to apocalypticism is apocalyptic civics. Apocalyptic civics
> is the insistence that we cannot ignore the truth, nor should we panic
> about it. It is a shared consciousness that our institutions have failed
> and our ecosystem is collapsing, yet we are still here — and we are
> creative agents who can shape our destinies. Apocalyptic civics is the
> conviction that the only way out is through, and the only way through is
> together. "
>
> Greg Bloom @greggish
> https://twitter.com/greggish/status/873177525903609857
>