> ...out we can patch the release branch.
Actually you do...:-)
Please let me know if you ever intend to use that system. I am now
letting someone else use it, but it can be shared.
--
Jean-Christophe Hugly
PANTA
...the difference visible at 32 is pretty small. So, it is
application-dependent, no question about it, but small-msg rdma is
beneficial below a given (application-dependent) cluster size.
--
Jean-Christophe Hugly
PANTA
...number of anecdotal reports I got. It may well be that in some
situations, small-msg rdma is better only for 2 nodes, but that's not
such a likely scenario; reality is sometimes linear (at least at our
scale :-) ) after all.
The scale threshold could be tunable, couldn't it?
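If it were exposed as an MCA parameter, I imagine the invocation would
look something like this (a sketch only; the parameter names
btl_openib_use_eager_rdma and btl_openib_max_eager_rdma are my guesses,
not something I have verified exists):

  # guessed knobs: turn small-msg rdma on, cap it at N peers
  mpirun -np 32 -machinefile /root/machines \
         -mca btl openib,sm,self \
         -mca btl_openib_use_eager_rdma 1 \
         -mca btl_openib_max_eager_rdma 16 \
         PMB-MPI1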
--
Jean-Christophe Hugly
PANTA
...shine in micro-benchmarks is important, even if it means using an
ad-hoc tuning. There is some justification for it after all. There are
small clusters out there (many more than big ones, in fact), so taking
maximum advantage of a small scale is relevant.
When do you plan on having the small-msg rdma option available?
J-C
--
Jean-Christophe Hugly
PANTA
...of 0.5 us).
Thanks, guys. I'll stop worrying about that, then!
--
Jean-Christophe Hugly
PANTA
...the opposite (which was my initial expectation, actually). Maybe I
just misunderstood the whole set of tunables. My understanding was that
messages under the eager limit would never be rdma'd by definition, and
that the others would or would not be, depending on the min_rdma_size.
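To be concrete about what I mean, the kind of invocation I have in mind
is something like this (a sketch; the full parameter names
btl_openib_eager_limit and btl_openib_min_rdma_size, and the byte
values, are assumptions on my part):

  # messages <= eager limit: sent eagerly (never rdma'd, per my reading)
  # messages >= min_rdma_size: candidates for rdma
  mpirun -np 2 -machinefile /root/machines \
         -mca btl openib,self \
         -mca btl_openib_eager_limit 12288 \
         -mca btl_openib_min_rdma_size 1048576 \
         PMB-MPI1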
--
Jean-Christophe Hugly
PANTA
...I'll settle for 1.5 :-) )
Any advice?
--
Jean-Christophe Hugly
PANTA
...important to us. Not only are we very much interested in ompi's
multi-rail feature, but we also use IB for things other than MPI and
spread the load over the two ports.
Is there a special way of configuring ompi for it to work properly with
multiple ports?
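In case it helps, this is roughly how I have been checking what the BTL
sees (a sketch; I am assuming ompi_info's --param option, and that the
openib BTL uses every active port it finds unless told otherwise,
neither of which I have confirmed):

  # list the openib BTL's tunable parameters and their current values
  ompi_info --param btl openib

  # run with the openib BTL explicitly selected; both ports should show up
  mpirun -np 2 -machinefile /root/machines -mca btl openib,sm,self PMB-MPI1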
--
Jean-Christophe Hugly
PANTA
On Thu, 2006-02-02 at 15:19 -0700, Galen M. Shipman wrote:
> Is it possible for you to get a stack trace where this is hanging?
>
> You might try:
>
>
> mpirun -prefix /opt/ompi -wdir `pwd` -machinefile /root/machines -np
> 2 -d xterm -e gdb PMB-MPI1
>
>
I did that, and when it was hanging...
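For what it is worth, another way to get the same trace is to attach gdb
to the already-hung ranks after the fact (a sketch, assuming the
benchmark processes are still named PMB-MPI1 on the nodes):

  # on each node, attach to the hung rank
  gdb -p `pgrep -n PMB-MPI1`

  # then, at the gdb prompt, dump all thread backtraces
  (gdb) thread apply all bt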
On Thu, 2006-02-02 at 15:19 -0700, Galen M. Shipman wrote:
> By using slots=4 you are telling Open MPI to put the first 4
> processes on the "bench1" host.
> Open MPI will therefore use shared memory to communicate between the
> processes not Infiniband.
Well, actually not, unless I'm mistaken.
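In any case, shared memory is easy to take out of the picture for this
test; a sketch of forcing the traffic over IB (same machinefile as
before):

  # only the openib and self BTLs, so on-host ranks also go over IB
  mpirun -np 2 -machinefile /root/machines -mca btl openib,self PMB-MPI1

  # or, equivalently, keep everything except the shared-memory BTL
  mpirun -np 2 -machinefile /root/machines -mca btl ^sm PMB-MPI1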
> note that you said you configured both with and without threads but
> try the configure on a fresh source, not on one that had previously
> been configured with thread support.
I rebuilt everything from fresh src (took the opportunity to refresh).
Same behaviour...
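By "fresh src" I mean roughly the following (a sketch; the tarball
version is only an example, and the thread-related configure flag name
is an assumption on my part):

  # unpack a clean tree rather than re-running configure in the old one
  tar xjf openmpi-1.0.1.tar.bz2   # example version
  cd openmpi-1.0.1
  ./configure --prefix=/opt/ompi  # add --enable-mpi-threads (assumed name)
                                  # only for the threaded build
  make all install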
Am I the only one seeing this?
--
Jean-Christophe Hugly
PANTA
max-slots=4
Am I doing something obviously wrong?
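For completeness, the machinefile I am talking about looks essentially
like this (a sketch; bench1 is the host mentioned earlier, the second
hostname is made up):

  # /root/machines
  bench1 slots=4 max-slots=4
  bench2 slots=4 max-slots=4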
Thanks for any help!
--
Jean-Christophe Hugly
PANTA