Do "which mpiexec" and look at the path. The options you show are from MPICH, 
not OMPI.
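For example, something along these lines (the paths in the comments are just examples; the exact locations depend on how your cluster is set up):

  which mpiexec          # e.g. /usr/bin/mpiexec vs. /opt/openmpi/bin/mpiexec
  which mpirun
  mpirun --version       # Open MPI's mpirun reports "mpirun (Open MPI) x.y.z"
  ompi_info | head -n 2  # only works if Open MPI's bin directory is in your PATH

If the path points into an MPICH2 install, put Open MPI's bin directory first in your PATH (or call Open MPI's mpirun by its full path); Open MPI's mpirun does understand --hostfile and --version.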

On Sep 25, 2014, at 12:15 AM, XingFENG <xingf...@cse.unsw.edu.au> wrote:

> Hi Ralph,
> 
> Thanks for your reply.
> 
> I am not quite sure which mpiexec this is. The documentation says that two MPI 
> implementations are installed, namely Open MPI and MPICH2.
> 
> On Thu, Sep 25, 2014 at 11:45 AM, Ralph Castain <r...@open-mpi.org> wrote:
> No, it doesn't matter at all for OMPI - any order is fine. The issue I see is 
> that your mpiexec isn't the OMPI one, but comes from some other MPI 
> installation. I have no idea whose mpiexec you are using.
> 
> 
> On Sep 24, 2014, at 6:38 PM, XingFENG <xingf...@cse.unsw.edu.au> wrote:
> 
>> I have found the solution. The command mpirun -machinefile ./my_hosts -n 3 
>> ./testMPI works. I think the order of arguments matters here.
>> 
>> On Thu, Sep 25, 2014 at 11:02 AM, XingFENG <xingf...@cse.unsw.edu.au> wrote:
>> Hi all,
>> 
>> I got problem with running program on a cluster.
>> I used the following commands. my_hosts is a file containing 3 hosts, while 
>> testMPI is a very simple MPI program.
>> ==========================================
>> mpirun -np 2 --hostfile ./my_hosts ./testMPI
>> mpirun -np 2 --machinefile ./my_hosts ./testMPI
>> mpirun -np 2 --f ./my_hosts ./testMPI
>> ==========================================
>> 
>> And the output is like this.
>> ==========================================
>> invalid "local" arg: --hostfile
>> 
>> usage:
>> mpiexec [-h or -help or --help]    # get this message
>> mpiexec -file filename             # (or -f) filename contains XML job description
>> mpiexec [global args] [local args] executable [args]
>>    where global args may be
>>       -l                           # line labels by MPI rank
>>       -bnr                         # MPICH1 compatibility mode
>>       -machinefile                 # file mapping procs to machines
>>       -s <spec>                    # direct stdin to "all" or 1,2 or 2-4,6 
>>       -1                           # override default of trying 1st proc locally
>>       -ifhn                        # network interface to use locally
>>       -tv                          # run procs under totalview (must be installed)
>>       -tvsu                        # totalview startup only
>>       -gdb                         # run procs under gdb
>>       -m                           # merge output lines (default with gdb)
>>       -a                           # means assign this alias to the job
>>       -ecfn                        # output_xml_exit_codes_filename
>>       -recvtimeout <integer_val>   # timeout for recvs to fail (e.g. from mpd daemon)
>>       -g<local arg name>           # global version of local arg (below)
>>     and local args may be
>>       -n <n> or -np <n>            # number of processes to start
>>       -wdir <dirname>              # working directory to start in
>>       -umask <umask>               # umask for remote process
>>       -path <dirname>              # place to look for executables
>>       -host <hostname>             # host to start on
>>       -soft <spec>                 # modifier of -n value
>>       -arch <arch>                 # arch type to start on (not implemented)
>>       -envall                      # pass all env vars in current environment
>>       -envnone                     # pass no env vars
>>       -envlist <list of env var names> # pass current values of these vars
>>       -env <name> <value>          # pass this value of this env var
>> mpiexec [global args] [local args] executable args : [local args] executable...
>> mpiexec -gdba jobid                # gdb-attach to existing jobid
>> mpiexec -configfile filename       # filename contains cmd line segs as lines
>>   (See User Guide for more details)
>> 
>> Examples:
>>    mpiexec -l -n 10 cpi 100
>>    mpiexec -genv QPL_LICENSE 4705 -n 3 a.out
>> 
>>    mpiexec -n 1 -host foo master : -n 4 -host mysmp slave
>> 
>> ==========================================
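>> 
>> (For reference, testMPI does little more than the standard MPI hello world; 
>> the sketch below is only an illustration of that kind of program, not the 
>> exact source.)
>> ==========================================
>> /* testMPI.c -- illustrative minimal MPI program (sketch only) */
>> #include <mpi.h>
>> #include <stdio.h>
>> 
>> int main(int argc, char *argv[])
>> {
>>     int rank, size;
>>     MPI_Init(&argc, &argv);                /* start the MPI runtime */
>>     MPI_Comm_rank(MPI_COMM_WORLD, &rank);  /* this process's rank */
>>     MPI_Comm_size(MPI_COMM_WORLD, &size);  /* total number of processes */
>>     printf("Hello from rank %d of %d\n", rank, size);
>>     MPI_Finalize();                        /* shut down MPI cleanly */
>>     return 0;
>> }
>> ==========================================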
>> 
>> 
>> Another problem is that I cannot get the version of MPI. With the command 
>> mpirun --version I got:
>> 
>> ==========================================
>> invalid "local" arg: --version
>> 
>> [... same MPICH usage message as above ...]
>> ==========================================
>> 
>> Any help would be greatly appreciated!
>> 
>> 
>> -- 
>> Best Regards.
>> ---
>> Xing FENG
>> PhD Candidate
>> Database Research Group
>> 
>> School of Computer Science and Engineering
>> University of New South Wales
>> NSW 2052, Sydney
>> 
>> Phone: (+61) 413 857 288
>> 
>> _______________________________________________
>> users mailing list
>> us...@open-mpi.org
>> Subscription: http://www.open-mpi.org/mailman/listinfo.cgi/users
>> Link to this post: http://www.open-mpi.org/community/lists/users/2014/09/25386.php
> 
> 
> _______________________________________________
> users mailing list
> us...@open-mpi.org
> Subscription: http://www.open-mpi.org/mailman/listinfo.cgi/users
> Link to this post: http://www.open-mpi.org/community/lists/users/2014/09/25387.php
> 
> 
> 
> -- 
> Best Regards.
> ---
> Xing FENG
> PhD Candidate
> Database Research Group
> 
> School of Computer Science and Engineering
> University of New South Wales
> NSW 2052, Sydney
> 
> Phone: (+61) 413 857 288
> _______________________________________________
> users mailing list
> us...@open-mpi.org
> Subscription: http://www.open-mpi.org/mailman/listinfo.cgi/users
> Link to this post: http://www.open-mpi.org/community/lists/users/2014/09/25388.php
