That's strange. Are you sure the btl MCA variable is not being set through
an environment variable or through an MCA parameter file? You should be
able to tell from the output of ompi_info -a.
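As a quick sketch of that check (assuming a typical Open MPI install; the grep patterns and the mca-params.conf location are conventions, not something confirmed by this thread), you can look for the two usual sources of a stray btl setting:

```shell
# MCA parameters can come from the mpirun command line, from
# OMPI_MCA_* environment variables, or from an mca-params.conf
# file (commonly $HOME/.openmpi/mca-params.conf).

# 1. Any OMPI_MCA_* variables exported in the environment?
env | grep '^OMPI_MCA_' || echo "no OMPI_MCA_* variables set"

# 2. What does ompi_info report for the btl framework?
if command -v ompi_info >/dev/null 2>&1; then
    btl_info=$(ompi_info -a | grep -i 'btl')
else
    btl_info="ompi_info not found (is Open MPI on PATH?)"
fi
echo "$btl_info" | head -n 5
```

ompi_info -a lists every MCA parameter together with its current value, so a btl restriction set in a file or the environment should show up there.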
BTW, you do not need to specify both sm and vader. vader is a newer
shared-memory btl that will likely replace sm.
Running OpenMPI 1.8.4, an application running on 16 cores of a single
node takes over an hour, compared to just 7 minutes for MPICH. If I use
--mca btl vader,sm,self it runs in the same 7 minutes as MPICH. If I throw
in the tcp and openib btls it also runs quickly, so it seems to just not be