Thank you for the quick response. Your suggested commands did not work with the network interface disabled or unplugged. I still get:
[SAXM4WIN:02124] [[20996,1],0] tcp_peer_send_blocking: send() to socket 12 failed: Transport endpoint is not connected (128)

So, in spite of including --mca oob ^tcp, OMPI still wants to see a connected port somewhere on the system. Do you have any other suggestions? The whole command is as follows:

.\orterun.exe --mca oob ^tcp --mca btl self,sm -n 2 ./program

Many Thanks,

Mike...

On 02/26/2018 12:45 PM, r...@open-mpi.org wrote:
> There are a couple of problems here. First, the “^tcp,self,sm” is telling
> OMPI to turn off all three of those transports, which probably leaves you
> with nothing. What you really want is to restrict to shared memory, so your
> param should be “-mca btl self,sm”. This will disable all transports other
> than shared memory - note that you must always enable the “self” btl.
>
> Second, you likely also need to ensure that the OOB isn’t trying to use tcp,
> so add “-mca oob ^tcp” to your cmd line. It shouldn’t be active anyway, but
> better safe.
>
>
>> On Feb 26, 2018, at 9:14 AM, Michael A. Saverino
>> <michael.saverino....@nrl.navy.mil> wrote:
>>
>> I am running the v1.10.7 OMPI package that is available via the Cygwin
>> package manager. I have a requirement to run my OMPI application
>> standalone on a Windows/Cygwin system without any network connectivity.
>> If my OMPI system is not connected to the network, I get the following
>> errors when I try to run my OMPI application:
>>
>> [SAXM4WIN:02124] [[20996,1],0] tcp_peer_send_blocking: send() to socket
>> 12 failed: Transport endpoint is not connected (128)
>> [SAXM4WIN:02124] [[20996,1],0] tcp_peer_send_blocking: send() to socket
>> 12 failed: Transport endpoint is not connected (128)
>>
>> I have tried the following qualifier in my OMPI command to no avail:
>>
>> --mca btl ^tcp,self,sm
>>
>> So the question is: am I able to disable TCP networking, either via the
>> command line or in code, if I only plan to use cores on a single machine
>> for OMPI execution?
>>
>> Many Thanks,
>>
>> Mike...
>>
>> --
>> Michael A. Saverino
>> Contractor
>> Senior Engineer, Information Technology Division
>> Code 5522
>> Naval Research Laboratory
>> W (202) 767-5652
>> C (814) 242-0217
>> https://www.nrl.navy.mil/itd/ncs/

--
Michael A. Saverino
Contractor
Senior Engineer, Information Technology Division
Code 5522
Naval Research Laboratory
W (202) 767-5652
C (814) 242-0217
https://www.nrl.navy.mil/itd/ncs/

_______________________________________________
users mailing list
users@lists.open-mpi.org
https://lists.open-mpi.org/mailman/listinfo/users
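
For anyone trying to reproduce a single-node, shared-memory-only run like the
one discussed above, a minimal test sketch follows. The file name hello.c, the
use of mpicc from the same Cygwin Open MPI 1.10.7 install, and the two-process
launch are illustrative assumptions, not details taken from the thread itself.

/* hello.c - minimal check that two ranks can start and exchange one message
 * without any network transport.
 * Assumed build/run commands (same Cygwin Open MPI install as above):
 *   mpicc hello.c -o hello
 *   ./orterun.exe --mca btl self,sm --mca oob ^tcp -n 2 ./hello
 */
#include <mpi.h>
#include <stdio.h>

int main(int argc, char *argv[])
{
    int rank, size, len;
    char host[MPI_MAX_PROCESSOR_NAME];

    MPI_Init(&argc, &argv);                    /* start the MPI runtime      */
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);      /* this process's rank        */
    MPI_Comm_size(MPI_COMM_WORLD, &size);      /* total number of ranks      */
    MPI_Get_processor_name(host, &len);        /* local host name            */

    printf("rank %d of %d on %s\n", rank, size, host);

    /* One send/receive between ranks 0 and 1 exercises the sm btl. */
    if (size > 1) {
        int token = 42;
        if (rank == 0) {
            MPI_Send(&token, 1, MPI_INT, 1, 0, MPI_COMM_WORLD);
        } else if (rank == 1) {
            MPI_Recv(&token, 1, MPI_INT, 0, 0, MPI_COMM_WORLD,
                     MPI_STATUS_IGNORE);
            printf("rank 1 received token %d\n", token);
        }
    }

    MPI_Finalize();                            /* shut down cleanly          */
    return 0;
}

If both ranks print their lines and the token arrives, the btl restriction is
working; the oob behavior reported above would still need to be diagnosed
separately.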