Re: [OMPI users] Multi-rail support

2010-03-25 Thread PN
A quick question. Do I need to configure a different IP for each IB port before running mpirun? Or configure one IP and bond both IB ports? Or is simply configuring one IP for ib0 enough? Thanks a lot. PN 2010/3/25 Rolf Vandevaart > They will automatically be used by the library. There
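For anyone hitting the same question later: a minimal sketch of how port usage can be steered with the openib BTL, assuming an Open MPI build of roughly this era that already has the btl_openib_if_include parameter, an mlx4_0 HCA, and ./my_app as a placeholder binary (device, port, and binary names will differ on other systems). The verbs-based openib BTL itself does not need IPoIB addresses on the ports; an IP on ib0 mainly matters if you also want the TCP BTL or name resolution over IPoIB.

  # default: the openib BTL finds and stripes over all active ports on its own
  mpirun -np 8 --mca btl openib,sm,self ./my_app

  # or name the ports explicitly (format is device:port)
  mpirun -np 8 --mca btl openib,sm,self \
         --mca btl_openib_if_include mlx4_0:1,mlx4_0:2 ./my_app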

Re: [OMPI users] Strange behaviour of SGE+OpenMPI

2009-04-01 Thread PN
t-280r-0 150 => more Job1.o199
>>> 200
>>> [burl-ct-280r-2:22132] ras:gridengine: JOB_ID: 199
>>> [burl-ct-280r-2:22132] ras:gridengine: PE_HOSTFILE: /ws/ompi-tools/orte/sge/sge6_2u1/default/spool/burl-ct-280r-2/active_jobs/199.1/pe_hostfile
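The ras:gridengine lines above are the allocator's verbose output; a sketch of how to reproduce it, assuming the MCA parameter is still called ras_gridengine_verbose in the Open MPI version used here and that the job was submitted under an "orte" parallel environment (the script name Job1.sh is made up):

  # inside the SGE job script, submitted e.g. with: qsub -pe orte 4 Job1.sh
  mpirun --mca ras_gridengine_verbose 100 -np 4 ./xhpl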

Re: [OMPI users] Strange behaviour of SGE+OpenMPI

2009-04-01 Thread PN
tfile
> [..snip..]
> [burl-ct-280r-2:22132] ras:gridengine: burl-ct-280r-0: PE_HOSTFILE shows slots=2
> [burl-ct-280r-2:22132] ras:gridengine: burl-ct-280r-1: PE_HOSTFILE shows slots=2
> [..snip..]
> burl-ct-280r-1
> burl-ct-280r-1
> burl-ct-280r-0
> burl-ct-280r-0
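For readers unfamiliar with the file being parsed here: the PE_HOSTFILE that SGE hands to Open MPI is plain text with one line per host, of the form "hostname slots queue processor_range". A sketch matching the slots=2 output above (the queue name is invented):

  burl-ct-280r-0 2 all.q@burl-ct-280r-0 UNDEFINED
  burl-ct-280r-1 2 all.q@burl-ct-280r-1 UNDEFINED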

Re: [OMPI users] Strange behaviour of SGE+OpenMPI

2009-03-31 Thread PN
i-gcc/xhpl Any hint to debug this situation? Also, if I have 2 IB ports in each node on which IB bonding has been done, will Open MPI automatically benefit from the double bandwidth? Thanks a lot. Best Regards, PN 2009/4/1 Rolf Vandevaart > On 03/31/09 11:43, PN wrote: > >> Dear all, >>
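On the double-bandwidth part of the question: as far as I know the openib BTL does its own multi-rail striping over both ports, so OS-level IB bonding (which only covers IPoIB traffic) is not what provides the speed-up. A sketch of how to watch which transports are actually selected, with an arbitrary verbosity level and the xhpl path shortened:

  mpirun -np 4 --mca btl openib,sm,self --mca btl_base_verbose 30 ./xhpl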

[OMPI users] Strange behaviour of SGE+OpenMPI

2009-03-31 Thread PN
example, the hostnames of the IB interfaces become node0001-clust and node0002-clust, will Open MPI automatically use the IB interface? And if I have 2 IB ports in each node on which IB bonding has been done, will Open MPI automatically benefit from the double bandwidth? Thanks a lot. Best Regards, PN
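In case the goal is to route MPI traffic over IPoIB rather than the native verbs transport, the TCP BTL can be pointed at the IB interface explicitly; a sketch, assuming the IPoIB interface is ib0 and a hostfile named "hosts" listing the -clust names:

  mpirun -np 4 --hostfile hosts --mca btl tcp,sm,self \
         --mca btl_tcp_if_include ib0 ./xhpl

With the openib BTL, by contrast, the hostnames only affect where processes are launched; the IB fabric carries the MPI traffic regardless of whether the names resolve to the ethernet or the IPoIB addresses.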