Hi Eugene

Thank you for answering one of my original questions.

However, there seems to be a problem with the syntax.
Is it really "-mca btl btl_sm_num_fifos=some_number"?
(FYI, I am using Open MPI 4.1.2, a tarball from two days ago.)

When I grep any component starting with btl_sm I get nothing:

ompi_info --all | grep btl_sm
(No output)


When I try to run with it, it fails telling me it cannot
find the btl_sm_num_fifos component:


mpiexec -mca btl sm,self -mca btl btl_sm_num_fifos=4 -np 4 ./a.out
--------------------------------------------------------------------------
A requested component was not found, or was unable to be opened.  This
means that this component is either not installed or is unable to be
used on your system (e.g., sometimes this means that shared libraries
that the component requires are unable to be found/loaded).  Note that
Open MPI stopped checking at the first component that it did not find.

Host:      spinoza.ldeo.columbia.edu
Framework: btl
Component: btl_sm_num_fifos=4
--------------------------------------------------------------------------

Thank you,
Gus Correa
---------------------------------------------------------------------
Gustavo Correa
Lamont-Doherty Earth Observatory - Columbia University
Palisades, NY, 10964-8000 - USA
---------------------------------------------------------------------


Eugene Loh wrote:
Ralph Castain wrote:

Yo Gus

Just saw a ticket go by reminding us about continuing hang problems on shared memory when building with gcc 4.4.x - any chance you are in that category? You might have said something earlier in this thread....
Going back to the original e-mail in this thread:

Gus Correa wrote:

Use -mca btl -mca btl_sm_num_fifos=some_number ? (Which number?)

Another experiment to try would be to keep sm on, but try changing btl_sm_num_fifos as above. The number to use would be the number of processes on the node. E.g., if all processes are running on the same box, just use the same number as processes in the job. The results might help narrow down the possibilities here.

_______________________________________________
users mailing list
us...@open-mpi.org
http://www.open-mpi.org/mailman/listinfo.cgi/users

Reply via email to