Thanks for your answer; it explains a lot.
I thought this was possible because it's possible using LSF without MPI,
but I guess not.
Thread closed?

Thanks,
Michal Zielinski

On Tue, Sep 9, 2014 at 11:22 AM, Chrysovalantis Paschoulas <
c.paschou...@fz-juelich.de> wrote:

>  Hi!
>
> The answer to your question is no, it is not possible.
>
> A task (a Linux process) runs on a single node only and can
> allocate/use only the available CPUs on that node. You need an MPI job (or
> a hybrid MPI+OpenMP job) in order to utilize CPUs on different nodes for a
> single purpose/application.
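>
> For example, a minimal sketch of a batch script for an MPI job spanning
> two nodes (the program name ./my_mpi_app is a placeholder, not something
> from your setup):
>
>     #!/bin/bash
>     # Two tasks (MPI ranks) in total, one per node:
>     #SBATCH --nodes=2
>     #SBATCH --ntasks=2
>     #SBATCH --ntasks-per-node=1
>     # Each rank is its own Linux process on its own node; the ranks
>     # cooperate through MPI rather than sharing a single process's CPUs.
>     srun ./my_mpi_app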
>
> Best Regards,
> Chrysovalantis Paschoulas
>
>
> On 09/09/2014 05:14 PM, Michal Zielinski wrote:
>
>   Phil,
>
>  I believe that 1 core per node is correct.
>
>  Maybe let me ask this first: is it possible for a single task to use
> specific CPUs across several nodes?
>
>  Thanks,
>  Mike
>
> On Tue, Sep 9, 2014 at 10:38 AM, Eckert, Phil <ecke...@llnl.gov> wrote:
>
>>  Mike,
>>
>>  In your slurm.conf you have Procs=1 (which is the same as CPUs=1),
>> Sockets (if omitted, it is inferred from CPUs; the default is 1), and
>> CoresPerSocket (the default is 1).
>>
>>  So at this point the slurm.conf has a default configuration of 1 core
>> per node.
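>>
>>  In slurm.conf terms, the node definition amounts to something like this
>> (a sketch only; the node names are taken from the thread and may not match
>> your file exactly):
>>
>>     # Procs is an older alias for CPUs, so this is effectively
>>     # CPUs=1, Sockets=1, CoresPerSocket=1, ThreadsPerCore=1 per node.
>>     NodeName=node[1-4] Procs=1
>>     # With only 1 CPU per node, a request like "srun -c 2" can never
>>     # be satisfied on any single node, hence the error.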
>>
>>  Phil Eckert
>> LLNL
>>
>>   From: Michal Zielinski <michal.zielin...@uconn.edu>
>> Reply-To: slurm-dev <slurm-dev@schedmd.com>
>> Date: Tuesday, September 9, 2014 at 6:35 AM
>> To: slurm-dev <slurm-dev@schedmd.com>
>> Subject: [slurm-dev] Re: "Requested node configuration is not available"
>> when using -c
>>
>>     Josh,
>>
>>  I believe that *-n* sets the number of tasks. I only want a single
>> task, i.e. a single process that uses multiple cores. *srun -n 2 hostname*
>> returns
>>
>>  linux-slurm2
>>  linux-slurm3
>>
>>  which is definitely not what I want.
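>>
>>  What I am after is a single task that is allowed to use two CPUs, i.e.
>> something like the following (just to illustrate the intent; it will not
>> work until a node actually advertises two CPUs):
>>
>>     srun -n 1 -c 2 hostname
>>
>>  which should start one task (one line of output) with two CPUs reserved
>> for it.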
>>
>> Thanks,
>>  Mike
>>
>>
>> On Mon, Sep 8, 2014 at 8:07 PM, Josh McSavaney <mcsa...@csh.rit.edu>
>> wrote:
>>
>>>  I believe your slurm.conf is defining 4 nodes with a single logical
>>> processor each. You are then trying to allocate two CPUs on a single node
>>> with srun, which (according to your slurm.conf) you do not have.
>>>
>>>  You may want to consider `srun -n 2 hostname` and see where that lands
>>> you.
>>>
>>>  Regards,
>>>
>>>  Josh McSavaney
>>> Bit Flipper
>>> Rochester Institute of Technology
>>>
>>>
>>>
>>> On Mon, Sep 8, 2014 at 7:42 PM, Christopher Samuel <
>>> sam...@unimelb.edu.au> wrote:
>>>
>>>>
>>>> On 09/09/14 07:26, Michal Zielinski wrote:
>>>>
>>>>> I have a small test cluster (node[1-4]) running Slurm 14.03.0, set up
>>>>> with CR_CPU and no usage restrictions. Each node has just 1 CPU.
>>>>>
>>>> [...]
>>>>
>>>>> But *srun -c 2 hostname* does not work; it returns the above
>>>>> error.
>>>>>
>>>>> I have no idea why I can't dedicate 2 cores to a single job if I can
>>>>> dedicate each core individually to a job.
>>>>>
>>>>
>>>> What does "scontrol show node" say?
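>>>>
>>>> For instance (linux-slurm2 being one of the names from your srun output):
>>>>
>>>>     scontrol show node linux-slurm2
>>>>
>>>> Compare the CPUTot, Sockets, CoresPerSocket and ThreadsPerCore values it
>>>> reports against what the hardware actually has and what slurm.conf claims.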
>>>>
>>>> cheers,
>>>> Chris
>>>> --
>>>>  Christopher Samuel        Senior Systems Administrator
>>>>  VLSCI - Victorian Life Sciences Computation Initiative
>>>>  Email: sam...@unimelb.edu.au Phone: +61 (0)3 903 55545
>>>>  http://www.vlsci.org.au/      http://twitter.com/vlsci
>>>>
>>>
>>>
>>
>
>
>
>
