Steve,

I spotted a strange value for the mpi_yield_when_idle MCA parameter. A value of 1 means your processor is oversubscribed, and this triggers a call to sched_yield after each check on the SM. Are you running the job oversubscribed? If not, it looks like somehow we don't correctly identify that there are multiple cores ...
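
If you want to rule that out, you should be able to force the parameter off explicitly and then check what the runtime actually selected, e.g. (reusing the btl settings from your run):

  mpiexec --mca mpi_yield_when_idle 0 --mca btl sm,self -n 2 ./z
  ompi_info --param mpi all | grep yield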

  george.


On Apr 7, 2009, at 14:31 , Steve Kargl wrote:

On Tue, Apr 07, 2009 at 09:10:21AM -0700, Eugene Loh wrote:
Steve Kargl wrote:

I can rebuild 1.2.9 and 1.3.1. Is there any particular configure
options that I should enable/disable?

I hope someone else will chime in here, because I'm somewhat out of
ideas.  All I'm saying is that 10-usec latencies on sm with 1.3.0 or
1.3.1 are out of line with what other people see, and I don't think it's
simply a 1.2.9/1.3.0 issue here.  I'm stumped.

With 1.3.2 pre-release, I ran

/usr/local/openmpi-1.3.2/bin/mpiexec --mca btl sm,self \
--mca mpi_show_mca_params all -machinefile mf_ompi_2 -n 2 ./z |& tee sgk.log

I've placed a file with the output from '--mca mpi_show_mca_params all' at

http://troutmask.apl.washington.edu/~kargl/mca_all_params.txt

Perhaps someone with more knowledge of the parameters can take a
quick look.  I do observe

[node20.cimu.org:90002] btl_sm_bandwidth=900 (default value)
[node20.cimu.org:90002] btl_sm_latency=100 (default value)

Are these values tunable?
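
(If so, I assume they can be overridden like any other MCA parameter, e.g.

  mpiexec --mca btl_sm_latency 50 --mca btl_sm_bandwidth 2000 ...

and listed with 'ompi_info --param btl sm'. I don't know whether changing
them affects the actual transfer path, though, or only how the BTLs are
ranked against each other.)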

--
Steve