Open MPI happily supports multiple compilers on the same system, but not in
the way you mean. You need another installation of OMPI (in,
say, /usr/lib64/mpi/intel) for icc/ifort.
Select by path manipulation.
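A minimal sketch of that path manipulation, assuming the Intel-built tree
lives under the /usr/lib64/mpi/intel prefix mentioned above:

    # put the Intel-built installation first so its wrappers and libraries win
    export PATH=/usr/lib64/mpi/intel/bin:$PATH
    export LD_LIBRARY_PATH=/usr/lib64/mpi/intel/lib64:$LD_LIBRARY_PATH
    which mpicc    # should now resolve to the icc-built wrapper

Drop those entries (or prepend the gcc tree instead) to switch back to the
system packages.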
On Mon, 2008-10-20 at 08:19 +0800, Wen Hao Wang wrote:
> Hi all:
>
> I have openmpi 1.2.5 installed on SLES10 SP2. [...]
I think the quoting counts as my own fault, as I used
MCA='--mca btl_openib_verbose 1 --mca btl openib,self --mca btl_openib_if_include "mlx4_0:1,mlx4_1:1"'
...
mpirun ... $MCA ...
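For the record, the underlying gotcha: the shell word-splits an unquoted
$MCA but does not re-parse quotes, so the literal double quotes become part
of the argument mpirun receives:

    MCA='--mca btl_openib_if_include "mlx4_0:1,mlx4_1:1"'
    mpirun $MCA ...   # openib sees the device string "mlx4_0:1,mlx4_1:1", quotes included

Dropping the inner quotes (or, in bash, building the options as an array and
expanding "${MCA[@]}") avoids it:

    MCA='--mca btl_openib_if_include mlx4_0:1,mlx4_1:1'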
DM
On Sun, 19 Oct 2008, Jeff Squyres wrote:
On Oct 18, 2008, at 9:17 PM, Mostyn Lewis wrote:
I traced this and it was the quote marks in "mlx4_0:1,mlx4_1:1" - they
were passed in and caused the mismatch :-(
Well, here's what I see with the IMB PingPong test using two ConnectX DDR
cards in each of 2 machines. I'm just quoting the last line (10 repetitions
of 4194304 bytes).
Scali_MPI_Connect-5.6.4-59151: (multi rail setup in /etc/dat.conf)
#bytes #repetitions t[usec] Mbytes/sec
Hi all:
I have openmpi 1.2.5 installed on SLES10 SP2. These packages should be
compiled with gcc compilers. Now I have installed Intel C++ and Fortran
compilers on my cluster. Can openmpi use Intel compilers without
recompiling?
I tried to use an environment variable to point at the Intel compilers, but
that did not seem to work.
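(The environment variables in question are presumably the wrapper-compiler
overrides; a minimal sketch, assuming Open MPI's OMPI_CC/OMPI_FC support:

    export OMPI_CC=icc
    export OMPI_FC=ifort     # OMPI_F77=ifort for mpif77 on the 1.2 series
    mpicc hello.c -o hello   # the wrapper now invokes icc

Note this only swaps the compiler the wrapper calls; libmpi itself remains
the gcc build, which is why a separate Intel-built installation is what the
reply elsewhere in this digest recommends.)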
Hi, all:
I have one cluster without an Internet connection. I want to test OpenMPI
functions on it. It seems MTT cannot be used. Do I have any other choice
for testing?
I have tried lamtest. "make -k check" gave a lot of IB-related warnings,
indicating that my dat.conf file contained invalid entries. [...]
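(One Internet-free option: the Open MPI source tarball ships small test
programs under examples/; a minimal smoke test, assuming the 1.2.5 tarball
is at hand and myhosts is a hostfile for the cluster:

    cd openmpi-1.2.5/examples
    mpicc hello_c.c -o hello_c
    mpicc ring_c.c -o ring_c
    mpirun -np 4 --hostfile myhosts ./ring_c

Nothing in that needs network access beyond the cluster itself.)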
Jeff Squyres wrote:
On Oct 18, 2008, at 9:19 PM, Mostyn Lewis wrote:
Can OpenMPI, like Scali and MVAPICH2, utilize 2 IB HCAs per machine to
approach double the bandwidth on simple tests such as IMB PingPong?
Yes. OMPI will automatically (and aggressively) use as many active ports
as you have, so you should see it stripe across both HCAs without any
extra configuration.
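For example, to pin the benchmark to the two ConnectX ports explicitly
(device names taken from Mostyn's setup; the host names are placeholders),
something like:

    mpirun -np 2 --host nodeA,nodeB \
        --mca btl openib,self \
        --mca btl_openib_if_include mlx4_0:1,mlx4_1:1 \
        ./IMB-MPI1 PingPong

Leaving btl_openib_if_include unset lets OMPI pick up every active port on
its own.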
On Oct 18, 2008, at 9:17 PM, Mostyn Lewis wrote:
I traced this and it was the quote marks in "mlx4_0:1,mlx4_1:1" - they
were passed in and caused the mismatch :-(
Doh! I totally missed that you were using quotes.
Is using quotes something that you would expect to be able to do? I
didn't [...]
Working with a CellBlade cluster (QS22), the requirement is to have one
instance of the executable running on each socket of the blade (there are 2
sockets). The application is of the 'domain decomposition' type, and each
instance frequently needs to send/receive data with both the remote blades
and with the other instance on the same blade.
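(One way to express one-process-per-socket is a rankfile, assuming a
rankfile-capable Open MPI such as the 1.3 series; blade01/blade02 are
hypothetical host names:

    rank 0=blade01 slot=0:0
    rank 1=blade01 slot=1:0
    rank 2=blade02 slot=0:0
    rank 3=blade02 slot=1:0

    mpirun -np 4 -rf myrankfile ./app

The slot=socket:core form binds each rank to core 0 of its own socket, so
the two instances on a blade land on different sockets.)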