This is very helpful; I will try to obtain a system wired for dual port
in order to correct this.
Thanks,
Galen
On Tue, 7 Feb 2006, Jean-Christophe Hugly wrote:
On Thu, 2006-02-02 at 21:49 -0700, Galen M. Shipman wrote:
>
> I suspect the problem may be in the bcast,
> ompi_coll_tuned_bcast_intra_basic_linear. Can you try the same run using
>
> mpirun -prefix /opt/ompi -wdir `pwd` -machinefile /root/machines -np
> 2 -mca coll self,basic -d xterm -e g
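For anyone debugging along the same lines, a minimal bcast exerciser is enough to compare the two collective components. The program below is only an illustrative sketch (the file name and buffer size are made up, not from the original report); it can be launched once with the "-mca coll self,basic" option suggested above and once without it.

/* bcast_check.c -- minimal MPI_Bcast exerciser (illustrative sketch) */
#include <mpi.h>
#include <stdio.h>

int main(int argc, char **argv)
{
    int i, rank;
    int buf[1024];

    MPI_Init(&argc, &argv);
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);

    /* root fills the buffer; every other rank should receive a copy */
    if (rank == 0) {
        for (i = 0; i < 1024; i++) {
            buf[i] = i;
        }
    }
    MPI_Bcast(buf, 1024, MPI_INT, 0, MPI_COMM_WORLD);

    printf("rank %d: buf[1023] = %d\n", rank, buf[1023]);
    MPI_Finalize();
    return 0;
}

If such a run completes with "coll self,basic" but misbehaves with the default component, that would point at the tuned bcast routine named above.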
In an attempt to limit runtime dependencies, I am using static libraries
where possible. Under OSX (10.4.4) I get the following error when I try to
link my application:
/usr/bin/ld: multiple definitions of symbol _munmap
/usr/lib/gcc/powerpc-apple-darwin8/4.0.1/../../../libSystem.dylib(munmap.So)
Hi all,
I was wondering if it would be possible to use the same scheduling for
"alltoallv" as for "alltoall". If one assumes the messages of roughly
the same size, then "alltoall" would not be an unreasonable
approximation for "alltoallv". As is, it appears that in v1.1
"alltoallv" is done via a
Hello Andreas,
On Tuesday 07 February 2006 13:51, Andreas Fladischer wrote:
> I have a question about parallel mpirun. I have a small cluster (for
> testing purposes, one headnode and one node) running Fedora Core 3.
> I installed Open MPI on both nodes and created a user with the same UID
On Feb 6, 2006, at 5:25 PM, Warner Yuen wrote:
Brian help!! :-)
On Feb 5, 2006, at 9:00 AM, users-requ...@open-mpi.org wrote:
If this is the case, my next question is, how do I supply the usual
xgrid options, such as working directory, standard input file, etc?
Or is that simply not possible?
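For reference, with other process starters the working directory can at least be given on the mpirun command line itself, for example (the path and process count are made up; whether the Xgrid starter honors these options is exactly the open question here):

mpirun -wdir /Users/me/run -np 4 ./my_app < input.dat

Standard input redirected like this typically only reaches rank 0, so treat the line above as a sketch rather than a confirmed answer for Xgrid.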
Hello,
Can you give us more details on the problem? The exact error message,
as well as the contents of your hostfile will help. You should check
out our FAQ as well, as it likely will help you solve your problem:
http://www.open-mpi.org/faq/
Particularly the sections 'Running MPI jobs' an
Hi all!
I have a question about parallel mpirun. I have a small cluster (for
testing purposes, one headnode and one node) running Fedora Core 3.
I installed Open MPI on both nodes and created a user with the same UID
on both nodes; now I am trying to build the glibc tools from the headnode but
t