Surely this is a problem with the scheduler your system uses, rather
than with MPI?
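
That said, if what you want is simply for the child to land on the
core the calling rank is currently on, something along these lines
might do it. This is an untested sketch: it assumes Linux with glibc's
sched_getcpu(3) and the taskset(1) utility, and "/path/myprogram.ex"
is just the path from your mail.

PROGRAM pin_child
  USE iso_c_binding, ONLY: c_int
  IMPLICIT NONE
  INTERFACE
     ! int sched_getcpu(void);  -- glibc, Linux-specific
     FUNCTION sched_getcpu() BIND(c, name="sched_getcpu")
       IMPORT :: c_int
       INTEGER(c_int) :: sched_getcpu
     END FUNCTION
  END INTERFACE
  CHARACTER(len=256) :: cmd
  INTEGER(c_int) :: cpu

  ! Ask the kernel which core this rank is on right now.
  cpu = sched_getcpu()
  ! taskset restricts the child (and whatever mpiexec starts locally)
  ! to that core; note that Open MPI's own binding options, if
  ! enabled, may override the inherited mask.
  WRITE(cmd, '(a,i0,a)') 'taskset -c ', cpu, &
       ' mpiexec -n 1 /path/myprogram.ex'
  CALL SYSTEM(TRIM(cmd))
END PROGRAM pin_child

The same idea works with EXECUTE_COMMAND_LINE if your compiler
supports Fortran 2008.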


On Wed, 2010-03-03 at 00:48 +0000, abc def wrote:
> Hello,
> 
> I wonder if someone can help.
> 
> The situation is that I have an MPI-parallel Fortran program. When I
> run it, it is distributed across N cores, and each of these processes
> must call an external program.
> 
> This external program is also an MPI program; however, I want to run
> it serially, on the core that is calling it, as if it were part of the
> Fortran program. The Fortran program waits until the external program
> has completed, and then continues.
> 
> The problem is that this external program seems to run on an
> arbitrary core, not necessarily the (now idle) core that called it.
> This slows things down a lot, since one core ends up doing multiple
> tasks.
> 
> Can anyone tell me how to call the program and ensure it runs only on
> the core that is calling it? Note that there are several cores per
> node. I can identify the node by running the hostname command (I don't
> know of a way to do this for individual cores).
> 
> Thanks!
> 
> ====
> 
> Extra information that might be helpful:
> 
> If I simply run the external program from the command line (i.e.,
> type "/path/myprogram.ex" and press Enter), it runs fine. If I run it
> from within the Fortran program by calling it via
> 
> CALL SYSTEM("/path/myprogram.ex")
> 
> it doesn't run at all (it doesn't even start) and everything crashes.
> I don't know why this is.
> 
> If I call it using mpiexec:
> 
> CALL SYSTEM("mpiexec -n 1 /path/myprogram.ex")
> 
> then it does work, but I run into the problem that it can land on any
> core.
> 
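On the crash when you call it without mpiexec: I can only guess, but
one plausible cause is that the child inherits the parent's OMPI_*
environment variables and then tries to attach to the parent's MPI job
instead of starting as a singleton. If that is what is happening,
stripping those variables in the SYSTEM call is worth a try (untested
sketch; the grep -o flag needs GNU grep, and the OMPI_ prefix is an
Open MPI detail):

CALL SYSTEM("env $(env | grep -o '^OMPI_[^=]*' | sed 's/^/-u /') /path/myprogram.ex")

If it still fails, check whether your interconnect stack tolerates
fork()/system() from inside an MPI process at all; some do not.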