Hello all,

I am using mpi4py in an optimization code that iteratively spawns an MPI
analysis code (Fortran-based) via "MPI.COMM_SELF.Spawn" (I gather that this
is not an ideal use of comm spawn, but I don't have many other options at
this juncture). I am calling "child_comm.Disconnect()" on the parent side
and "call MPI_COMM_DISCONNECT(parent, ier)" on the child side.
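
The parent-side loop looks roughly like the sketch below (simplified; the
executable name, argument list, and process count are placeholders, not my
actual setup):

    from mpi4py import MPI

    n_iterations = 500  # placeholder; the real loop runs hundreds to thousands of times

    for it in range(n_iterations):
        # spawn the Fortran analysis code from the single parent process
        child_comm = MPI.COMM_SELF.Spawn("./analysis",  # placeholder executable
                                         args=[],
                                         maxprocs=4)    # placeholder process count

        # ... exchange inputs/results with the children here ...

        child_comm.Disconnect()  # parent-side disconnect after each iteration

(On the Fortran side, "parent" is obtained via MPI_COMM_GET_PARENT and
disconnected before finalizing.)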

After a dozen or so iterations, it appears I am running up against the
system limit on the number of pipes a process can open:

[affogato:05553] [[63653,0],0] ORTE_ERROR_LOG: The system limit on number
of pipes a process can open was reached in file odls_default_module.c at
line 689
[affogato:05553] [[63653,0],0] usock_peer_send_blocking: send() to socket
998 failed: Broken pipe (32)
[affogato:05553] [[63653,0],0] ORTE_ERROR_LOG: Unreachable in file
oob_usock_connection.c at line 316

From this Stackoverflow post
<http://stackoverflow.com/questions/20698712/mpi4py-close-mpi-spawn> I have
surmised that the pipes opened for each spawn remain open on mpiexec despite
no longer being used. I know I can increase the system limits, but that will
only get me so far, since I intend to perform hundreds if not thousands of
iterations. Is there a way to dynamically close the unused pipes on either
the Python or Fortran side? I have also seen "MCA parameters" mentioned in
connection with this topic. I don't fully understand what those are; would
setting one have an effect on this issue?
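
For what it's worth, one way to watch this happening (Linux only) is to count
the pipe descriptors held by the mpiexec process between iterations, e.g.:

    import os

    mpiexec_pid = 5553  # placeholder: mpiexec's actual pid (05553 in the log above)
    fd_dir = "/proc/%d/fd" % mpiexec_pid

    # each entry in /proc/<pid>/fd is a symlink; pipes show up as "pipe:[inode]"
    n_pipes = sum(1 for fd in os.listdir(fd_dir)
                  if os.readlink(os.path.join(fd_dir, fd)).startswith("pipe:"))
    print("pipes held open by mpiexec:", n_pipes)

That should make it easy to confirm whether the count keeps climbing with each
spawn/disconnect cycle.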

Thank you,
Austin

-- 
*Austin Herrema*
PhD Student | Graduate Research Assistant | Iowa State University
Wind Energy Science, Engineering, and Policy | Mechanical Engineering