The connection between nodes should be TCP/IP.  I am currently using
Open MPI 1.4.1.

I've attached the output of ompi_info as a text file.


Thanks!


Charles
On 2/2/2010 11:46 AM, Shiqing Fan wrote:
>
> Hi Charles,
>
> It doesn't seem to be a WMI problem, because the remote orted has already
> been launched, and only that part is done by WMI.
>
> What kind of connection do you have between the nodes, TCP? Could you
> provide the Open MPI version information, or just the output of
> ompi_info, so that I can take a closer look?
>
>
> Thanks,
> Shiqing
>
>
> Charles Shuller wrote:
>> No messages on the command prompt.
>>
>> When I executed mpirun to launch notepad on the remote machine, it
>> crashed again.
>>
>> No information is ever printed to the command line unless I enter a bad
>> password.
>>
>> The very first time I attempted to use mpirun to launch a process on the
>> remote machine, I got an indefinite hang (I let it run for several hours
>> yesterday). On subsequent attempts, I get an abend dialog in about 3 seconds.
>>
>> My MPI application (which just calls init and finalize) is at C:\bin,
>> which is in the system path on both machines; this is also the bin
>> directory for the Open MPI package.
>>
>> Is there any way I can turn on logging, or do I need to go through and
>> insert debug statements myself and recompile?
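>>
>> For example, would something like the following give useful output
>> (assuming the usual Open MPI debug options also work on Windows)?
>>
>>     mpirun --debug-daemons --mca plm_base_verbose 5 -np 1 -host host1 notepad.exe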
>>
>>
>>
>> Thanks!
>>
>>
>> Charles
>>
>> On 2/2/2010 11:17 AM, Shiqing Fan wrote:
>>  
>>> Hi Charles,
>>>
>>> On the local machine, which can also be considered the "head node", no
>>> orted will be launched; mpirun itself plays that role locally.
>>>
>>> Did you see any error message on the command prompt? That would be
>>> very helpful.
>>>
>>> To do a simple test, just try to launch notepad on the remote node:
>>>
>>>     mpirun -np 1 -host host1 notepad.exe
>>>
>>> This will do the same thing as running the wmic command line.
>>>
>>> If that works, it might mean that you didn't copy your MPI application
>>> onto the remote node. The application should be present in the same
>>> path on all working nodes; for example, it could be placed at
>>> D:\tests\app\app.exe on every node. (You have to do this because the
>>> WMI impersonation level doesn't support network shares yet; I'm still
>>> working on removing that limitation.) You can then run the mpirun
>>> command line with the application's full path, or run it directly from
>>> the application's directory, as in the example below.
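>>>
>>> For example, with the layout above (host names are placeholders):
>>>
>>>     mpirun -np 2 -host host1,host2 D:\tests\app\app.exe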
>>>
>>>
>>> Regards,
>>> Shiqing
>>>
>>>
>>>
>>> Charles Shuller wrote:
>>>    
>>>> Thanks Shiqing!
>>>>
>>>> Unfortunately, it still doesn't work, but I've got more info.
>>>>
>>>> I can use wmic to start an application on the remote machine, but
>>>> that application does not start in the current login session
>>>> (notepad.exe starts, but I have to ask Task Manager to show all
>>>> processes to find it, even though I'm currently logged in as the same
>>>> user). I believe this is expected behavior; please let me know if
>>>> it's not.
>>>>
>>>> When using mpirun, I can verify that orted starts on the remote
>>>> machine, but the crash or hang appears to happen before the
>>>> application starts executing. Oddly, orted does not appear to start
>>>> on the local machine. The logs all refer to mpirun crashing.
>>>>
>>>>
>>>> Cheers!
>>>>
>>>> Charles
>>>>
>>>> On 1/29/2010 2:56 AM, Shiqing Fan wrote:
>>>>      
>>>>> Hi Charles,
>>>>>
>>>>> You don't need to install anything; just a few security settings
>>>>> have to be configured correctly. Here are two links that might be
>>>>> helpful (they will be added to README.WINDOWS too):
>>>>> http://msdn.microsoft.com/en-us/library/aa393266(VS.85).aspx
>>>>> http://community.spiceworks.com/topic/578
>>>>>
>>>>> On the other hand, to check whether WMI is working between the
>>>>> nodes, you can try the following command:
>>>>>
>>>>>     C:\>wmic /node:192.168.0.1 /user:username process call create notepad.exe
>>>>>
>>>>> The IP has to be the remote computer's IP address, and the user name
>>>>> is the one you use on the remote computer. This command line will
>>>>> simply launch a non-interactive notepad (no GUI) on the remote node
>>>>> using WMI. If it is successful, you should be able to see a notepad
>>>>> process in Task Manager or Process Viewer, and that also means mpirun
>>>>> will work through WMI.
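>>>>>
>>>>> If it succeeds, the wmic output should report a ReturnValue of 0,
>>>>> roughly like this (the process ID will differ):
>>>>>
>>>>>     Method execution successful.
>>>>>     Out Parameters:
>>>>>     instance of __PARAMETERS
>>>>>     {
>>>>>             ProcessId = 1234;
>>>>>             ReturnValue = 0;
>>>>>     };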
>>>>>
>>>>> Could you check with the above command and tell me the return value,
>>>>> so that I can help you make it work?
>>>>>
>>>>>
>>>>> Regards,
>>>>> Shiqing
>>>>>
>>>>>
>>>>> Charles Shuller wrote:
>>>>>        
>>>>>> When attempting to launch an application on both local and remote
>>>>>> Windows 7 hosts, mpirun either hangs indefinitely or abends.
>>>>>>
>>>>>> The application executes correctly on both machines when launched
>>>>>> on a single host only.
>>>>>>
>>>>>> I believe mpirun is using WMI; README.WINDOWS indicates that this
>>>>>> is the case if I don't have the CCP toolkit and SDK installed, which
>>>>>> I don't. Additionally, I have encountered and resolved some security
>>>>>> issues while working from this assumption.
>>>>>>
>>>>>> Any advice is welcome.  I'm not married to WMI, so if the
>>>>>> solution is
>>>>>> "install something else" I'm great with that.
>>>>>>
>>>>>>
>>>>>> Thanks!
>>>>>>
>>>>>> Charles
>>>>>>           
>>>>>         
>>>     
>>
>>   
>
>

                 Package: Open MPI cshuller@CHARLES-LT Distribution
                Open MPI: 1.4.1
   Open MPI SVN revision: r22421
   Open MPI release date: Jan 14, 2010
                Open RTE: 1.4.1
   Open RTE SVN revision: r22421
   Open RTE release date: Jan 14, 2010
                    OPAL: 1.4.1
       OPAL SVN revision: r22421
       OPAL release date: Jan 14, 2010
            Ident string: 1.4.1
                  Prefix: C:/
 Configured architecture: x86 Windows-6.1
          Configure host: CHARLES-LT
           Configured by: cshuller
           Configured on: 10:59 AM Sat 01/30/2010 
          Configure host: CHARLES-LT
                Built by: cshuller
                Built on: 10:59 AM Sat 01/30/2010 
              Built host: CHARLES-LT
              C bindings: yes
            C++ bindings: yes
      Fortran77 bindings: no
      Fortran90 bindings: no
 Fortran90 bindings size: na
              C compiler: cl
     C compiler absolute: cl
            C++ compiler: cl
   C++ compiler absolute: cl
      Fortran77 compiler: CMAKE_Fortran_COMPILER-NOTFOUND
  Fortran77 compiler abs: none
      Fortran90 compiler: 
  Fortran90 compiler abs: none
             C profiling: yes
           C++ profiling: yes
     Fortran77 profiling: no
     Fortran90 profiling: no
          C++ exceptions: no
          Thread support: no
           Sparse Groups: no
  Internal debug support: no
     MPI parameter check: runtime
Memory profiling support: no
Memory debugging support: no
         libltdl support: no
   Heterogeneous support: no
 mpirun default --prefix: yes
         MPI I/O support: yes
       MPI_WTIME support: gettimeofday
Symbol visibility support: yes
   FT Checkpoint support: yes  (checkpoint thread: no)
           MCA backtrace: none (MCA v2.0, API v2.0, Component v1.4.1)
           MCA paffinity: windows (MCA v2.0, API v2.0, Component v1.4.1)
               MCA carto: auto_detect (MCA v2.0, API v2.0, Component v1.4.1)
           MCA maffinity: first_use (MCA v2.0, API v2.0, Component v1.4.1)
               MCA timer: windows (MCA v2.0, API v2.0, Component v1.4.1)
         MCA installdirs: windows (MCA v2.0, API v2.0, Component v1.4.1)
         MCA installdirs: env (MCA v2.0, API v2.0, Component v1.4.1)
         MCA installdirs: config (MCA v2.0, API v2.0, Component v1.4.1)
                 MCA crs: none (MCA v2.0, API v2.0, Component v1.4.1)
                 MCA dpm: orte (MCA v2.0, API v2.0, Component v1.4.1)
              MCA pubsub: orte (MCA v2.0, API v2.0, Component v1.4.1)
           MCA allocator: basic (MCA v2.0, API v2.0, Component v1.4.1)
           MCA allocator: bucket (MCA v2.0, API v2.0, Component v1.4.1)
                MCA coll: basic (MCA v2.0, API v2.0, Component v1.4.1)
                MCA coll: hierarch (MCA v2.0, API v2.0, Component v1.4.1)
                MCA coll: self (MCA v2.0, API v2.0, Component v1.4.1)
                MCA coll: sm (MCA v2.0, API v2.0, Component v1.4.1)
                MCA coll: sync (MCA v2.0, API v2.0, Component v1.4.1)
               MCA mpool: rdma (MCA v2.0, API v2.0, Component v1.4.1)
               MCA mpool: sm (MCA v2.0, API v2.0, Component v1.4.1)
                 MCA pml: ob1 (MCA v2.0, API v2.0, Component v1.4.1)
                 MCA bml: r2 (MCA v2.0, API v2.0, Component v1.4.1)
                 MCA btl: self (MCA v2.0, API v2.0, Component v1.4.1)
                 MCA btl: sm (MCA v2.0, API v2.0, Component v1.4.1)
                 MCA btl: tcp (MCA v2.0, API v2.0, Component v1.4.1)
                MCA topo: unity (MCA v2.0, API v2.0, Component v1.4.1)
                 MCA osc: pt2pt (MCA v2.0, API v2.0, Component v1.4.1)
                 MCA osc: rdma (MCA v2.0, API v2.0, Component v1.4.1)
                 MCA iof: hnp (MCA v2.0, API v2.0, Component v1.4.1)
                 MCA iof: orted (MCA v2.0, API v2.0, Component v1.4.1)
                 MCA iof: tool (MCA v2.0, API v2.0, Component v1.4.1)
                 MCA oob: tcp (MCA v2.0, API v2.0, Component v1.4.1)
                MCA odls: process (MCA v2.0, API v2.0, Component v1.4.1)
               MCA rmaps: round_robin (MCA v2.0, API v2.0, Component v1.4.1)
               MCA rmaps: seq (MCA v2.0, API v2.0, Component v1.4.1)
                 MCA rml: ftrm (MCA v2.0, API v2.0, Component v1.4.1)
                 MCA rml: oob (MCA v2.0, API v2.0, Component v1.4.1)
              MCA routed: binomial (MCA v2.0, API v2.0, Component v1.4.1)
              MCA routed: linear (MCA v2.0, API v2.0, Component v1.4.1)
                 MCA plm: process (MCA v2.0, API v2.0, Component v1.4.1)
              MCA errmgr: default (MCA v2.0, API v2.0, Component v1.4.1)
                 MCA ess: env (MCA v2.0, API v2.0, Component v1.4.1)
                 MCA ess: hnp (MCA v2.0, API v2.0, Component v1.4.1)
                 MCA ess: singleton (MCA v2.0, API v2.0, Component v1.4.1)
             MCA grpcomm: basic (MCA v2.0, API v2.0, Component v1.4.1)
