It is currently better in Open MPI to
do a merge and use an intracommunicator.
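The merge Graham suggests can be sketched as below. This is a minimal illustration, not code from the thread: after MPI_Comm_spawn the parent and children are connected only by an intercommunicator, and MPI_Intercomm_merge flattens it into one intracommunicator so ordinary collectives span both groups. The child binary name "./worker" and the spawn count are hypothetical.

```c
/* Sketch (editor's illustration): merge the intercommunicator produced
 * by MPI_Comm_spawn into a single intracommunicator.
 * "./worker" is a hypothetical child executable. */
#include <mpi.h>
#include <stdio.h>

int main(int argc, char **argv)
{
    MPI_Comm inter, merged;
    int rank;

    MPI_Init(&argc, &argv);

    /* Spawn 4 children; they connect back through 'inter'. */
    MPI_Comm_spawn("./worker", MPI_ARGV_NULL, 4, MPI_INFO_NULL, 0,
                   MPI_COMM_WORLD, &inter, MPI_ERRCODES_IGNORE);

    /* Merge both groups into one intracommunicator; high = 0 orders
     * the parent group's ranks first. */
    MPI_Intercomm_merge(inter, 0, &merged);

    MPI_Comm_rank(merged, &rank);
    printf("merged rank %d\n", rank);

    MPI_Comm_free(&merged);
    MPI_Comm_free(&inter);
    MPI_Finalize();
    return 0;
}
```

With the merged intracommunicator, collectives such as MPI_Bcast work across parents and children directly, instead of going through the rooted intercommunicator forms.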
Thanks,
Graham.
---
Dr Graham E. Fagg | Distributed, Parallel and Meta-Computing
Innovative Computing Lab. PVM3.4, HARNESS, FT-MPI & Open MPI
t why it hangs; I'll let you know as soon as I find
anything, but right now I am testing using TCP.
Can you let me know the exact path and LD_LIBRARY_PATH you're using on odin?
Thanks,
Graham.
If you get different answers on any of the nodes (other than their rank),
then we (I) have a problem!
Thanks,
Graham.
Cheers,
Doug Gregor
Thanks,
Graham.
ordering to
reduce 'stress' on interconnect switches.)
Thanks,
Graham.
On Tue, 7 Feb 2006, Jean-Christophe Hugly wrote:
On Thu, 2006-02-02 at 21:49 -0700, Galen M. Shipman wrote:
I suspect the problem may be in the bcast,
ompi_coll_tuned_bcast_intra_basic_linear. Can you try the same run using
mpirun -prefix /opt/ompi -wdir `pwd` -machinefile /root/machines -np
On Fri, 6 Jan 2006, Graham E Fagg wrote:
Looks like the problem is somewhere in the tuned collectives?
Unfortunately I need a logfile with exactly those :(
Carsten
I hope not. Carsten, can you send me your configure line (not the whole
log) and any other things you set in your .mca
eMail ckut...@gwdg.de
http://www.gwdg.de/~ckutzne
Thanks,
Graham.
Dr Graham E. Fagg | Distributed, Parallel and Meta-Computing
Innovative Computing Lab. PVM3.4, HARNESS, FT-MPI, SNIPE & Open MPI
Computer Science Dept | Suite 203, 1122 Volunteer Blvd,
University of Tennessee | Knoxville, Tennessee, USA. TN 37996-3450
Email: f...@cs.utk.edu | Phone:+1(
I am currently visiting HLRS/Stuttgart, so I will try and call you in an
hour or so; if you're leaving soon I can call you tomorrow morning?
Thanks,
Graham.
previous email
to you I stated that one of the alltoalls is a sendrecv pair-based
implementation).
Carsten
Thanks,
Graham.