Hi Jeff,

as written in my original post, I'm using a custom build of 4.0.0 that
was configured with nothing more than a --prefix and
--enable-mpi-fortran. I checked for updates, and it appears there was
an issue with oversubscription that was fixed in 4.0.1. The changelog
states:

> - Fix a problem with the ORTE rmaps_base_oversubscribe MCA parameter.

Using --mca rmaps_base_oversubscribe 1 on the command line works with
the 4.0.1 version. I added an entry to the openmpi-mca-params.conf
file, and mpirun now over-subscribes without any additional arguments
aside from the number of processes.
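
For reference, the entry is a single line; Open MPI reads it from
$prefix/etc/openmpi-mca-params.conf (or from a per-user
~/.openmpi/mca-params.conf):

  rmaps_base_oversubscribe = 1

With that in place, a plain

  $ mpirun -n 4 hostname

works on the 2 cores/4 threads machine without --oversubscribe or any
--mca arguments.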

Regards, Steffen

On 16/04/2019 19.48, Jeff Squyres (jsquyres) via users wrote:
> Steffen --
> 
> What version of Open MPI are you using?
> 
> 
>> On Apr 16, 2019, at 9:21 AM, Steffen Christgau <christ...@cs.uni-potsdam.de> 
>> wrote:
>>
>> Hi Tim,
>>
>> it helps, up to four processes, but it has two drawbacks: 1) using
>> more cores/threads than the machine provides (i.e. actual
>> over-subscription) is still not possible, and 2) it still requires an
>> additional command-line argument.
>>
>> What I'd like to see is a call of mpirun with an arbitrary number of
>> processes that just works without any other command-line options.
>> However, an environment variable would be acceptable.
>>
>> The mpirun of a plain MPICH (v3.3) installation with no further
>> configure options (I think it uses the Hydra process manager) just
>> does what I want, but MPICH is not always an option ;-)
>>
>> Regards, Steffen
>>
>> On 16/04/2019 14.56, Tim Jim wrote:
>>> Hi Steffen,
>>>
>>> I'm not sure if this will help you (I'm by no means an expert), but the
>>> mailing group pointed me to using:
>>> mpirun --use-hwthread-cpus
>>>
>>> to solve something similar.
>>>
>>> Kind regards,
>>> Tim
>>>
>>>
>>> On Tue, 16 Apr 2019 at 19:01, Steffen Christgau
>>> <christ...@cs.uni-potsdam.de> wrote:
>>>
>>>    Hi everyone,
>>>
>>>    on my 2 cores/4 threads development platform I want to start programs
>>>    via mpirun with over-subscription enabled by default. I have some
>>>    external packages whose tests use more than 2 processes. They all fail
>>>    because Open MPI refuses to run them due to over-subscription. I know
>>>    that I can enable over-subscription with mpirun --oversubscribe, and
>>>    that works well, but it would require modifying the packages' autotools
>>>    files or the generated Makefiles, which I find hardly convenient.
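>>>
>>>    For reference, what does work is passing the flag explicitly on each
>>>    run, e.g. (with ./some_test as a stand-in for one of the packages'
>>>    test programs):
>>>
>>>    $ mpirun --oversubscribe -n 4 ./some_test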
>>>
>>>    I also tried two of the RMAPS MCA parameters:
>>>
>>>     - rmaps_base_no_oversubscribe
>>>     - rmaps_base_oversubscribe
>>>
>>>    (btw, are they redundant? Having two parameters where one is the
>>>    negation of the other is quite confusing, and the descriptions in
>>>    ompi_info are quite similar for the two.)
>>>
>>>    $ mpirun --mca rmaps_base_no_oversubscribe 0 --mca
>>>    rmaps_base_oversubscribe 1 -n 4 hostname
>>>    
>>> --------------------------------------------------------------------------
>>>    There are not enough slots available in the system to satisfy the 4
>>>    slots that were requested by the application:
>>>      hostname
>>>
>>>    Either request fewer slots for your application, or make more slots
>>>    available for use.
>>>    
>>> --------------------------------------------------------------------------
>>>
>>>    Setting the environment variables (OMPI_MCA_rmaps_...) did not help
>>>    either (the exact form is shown after the hostfile attempt below). The
>>>    same goes for the FAQ [1]:
>>>
>>>    $ cat > my-hostfile
>>>    localhost
>>>    $ mpirun -np 4 --hostfile my-hostfile hostname
>>>    
>>> --------------------------------------------------------------------------
>>>    There are not enough slots available in the system to satisfy the 4
>>>    slots...
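>>>
>>>    For completeness, the environment variable attempt was of the form
>>>
>>>    $ OMPI_MCA_rmaps_base_oversubscribe=1 mpirun -n 4 hostname
>>>
>>>    and produced the same error.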
>>>
>>>    My Open MPI installation is a private build of version 4.0.0, configured
>>>    with nothing more than ./configure --prefix=/some/where
>>>    --enable-mpi-fortran
>>>
>>>    How can I allow over-subscription by default? I am aware of the
>>>    performance implications, but the machine is only used for testing and
>>>    development. I am not using a resource manager on that machine.
>>>
>>>    Thanks in advance
>>>
>>>    Regards, Steffen
>>>
>>>    [1] https://www.open-mpi.org/faq/?category=running#oversubscribing
>>>
>>>
>>>
>>> -- 
>>>
>>> Timothy Jim
>>> PhD Researcher in Aerospace
>>>
>>> Creative Flow Research Division,
>>> Institute of Fluid Science, Tohoku University
>>>
>>> www.linkedin.com/in/timjim/
>>>
>>>
> 
> 
_______________________________________________
users mailing list
users@lists.open-mpi.org
https://lists.open-mpi.org/mailman/listinfo/users
