Yes, it works. The information printed by mpirun is confusing, but I have the right 
syntax now. Thank you!
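For anyone finding this thread later, a quick sketch of how the process count relates to the ppr mapping; the socket and node counts below are assumptions for illustration only, not taken from this thread:

```shell
# The working syntax in Open MPI 1.7.5 is ppr:N:socket, not the
# "socket:PPR=N" form printed by the deprecation message.
PPR=5                # processes per socket (the "5" in ppr:5:socket)
SOCKETS_PER_NODE=2   # assumed hardware layout, for illustration only
NODES=2              # assumed allocation size, for illustration only
NP=$((PPR * SOCKETS_PER_NODE * NODES))   # -np must match: 5*2*2 = 20
echo "mpirun -np $NP --map-by ppr:${PPR}:socket --bind-to core osu_alltoall"
```

The point is simply that -np has to agree with the total the ppr mapping produces, otherwise the mapper will complain about over- or under-subscription.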

F

 

On Feb 26, 2014, at 12:34 PM, tmish...@jcity.maeda.co.jp wrote:
> Hi, this help message might just be a simple mistake.
> 
> Please try: mpirun -np 20 --map-by ppr:5:socket -bind-to core osu_alltoall
> 
> There's no documentation available yet as far as I know, because it's still
> an alpha version.
> 
> Tetsuya Mishima
> 
>> Dear all,
>> 
>> I am playing with Open MPI 1.7.5 and with the "--map-by" option, but I am
>> not sure I am doing things correctly despite following the instructions.
>> Here is what I got:
>> 
>> $mpirun -np 20 --npersocket 5 -bind-to core osu_alltoall
>> 
>> --------------------------------------------------------------------------
>> The following command line options and corresponding MCA parameter have
>> been deprecated and replaced as follows:
>> 
>> Command line options:
>> Deprecated:  --npersocket, -npersocket
>> Replacement: --map-by socket:PPR=N
>> 
>> Equivalent MCA parameter:
>> Deprecated:  rmaps_base_n_persocket, rmaps_ppr_n_persocket
>> Replacement: rmaps_base_mapping_policy=socket:PPR=N
>> 
>> The deprecated forms *will* disappear in a future version of Open MPI.
>> Please update to the new syntax.
>> 
>> --------------------------------------------------------------------------
>> 
>> 
>> After changing it according to the instructions, I see:
>> 
>> $ mpirun -np 24 --map-by socket:PPR=5 -bind-to core osu_alltoall
>> 
>> 
>> --------------------------------------------------------------------------
>> The mapping request contains an unrecognized modifier:
>> 
>> Request: socket:PPR=5
>> 
>> Please check your request and try again.
>> 
>> --------------------------------------------------------------------------
>> [tesla49:30459] [[29390,0],0] ORTE_ERROR_LOG: Bad parameter in file ess_hnp_module.c at line 510
>> 
>> --------------------------------------------------------------------------
>> It looks like orte_init failed for some reason; your parallel process is
>> likely to abort.  There are many reasons that a parallel process can
>> fail during orte_init; some of which are due to configuration or
>> environment problems.  This failure appears to be an internal failure;
>> here's some additional information (which may only be relevant to an
>> Open MPI developer):
>> 
>> orte_rmaps_base_open failed
>> --> Returned value Bad parameter (-5) instead of ORTE_SUCCESS
>> 
>> --------------------------------------------------------------------------
>> 
>> 
>> 
>> Is there any place where the new syntax is explained?
>> 
>> Thanks in advance
>> F
>> 
>> --
>> Mr. Filippo SPIGA, M.Sc. - HPC  Application Specialist
>> High Performance Computing Service, University of Cambridge (UK)
>> http://www.hpc.cam.ac.uk/ ~ http://filippospiga.me ~ skype: filippo.spiga
>> 
>> «Nobody will drive us out of Cantor's paradise.» ~ David Hilbert
>> 
>> *****
>> Disclaimer: "Please note this message and any attachments are CONFIDENTIAL
>> and may be privileged or otherwise protected from disclosure. The contents
>> are not to be disclosed to anyone other than the addressee. Unauthorized
>> recipients are requested to preserve this confidentiality and to advise the
>> sender immediately of any error in transmission."
>> _______________________________________________
>> users mailing list
>> us...@open-mpi.org
>> http://www.open-mpi.org/mailman/listinfo.cgi/users
> 

--
Mr. Filippo SPIGA, M.Sc.
http://www.linkedin.com/in/filippospiga ~ skype: filippo.spiga

«Nobody will drive us out of Cantor's paradise.» ~ David Hilbert
