Thanks, now it all makes more sense to me. I'll try the hard way: multiple builds
for multiple environments ;)
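For the record, here is roughly what I have in mind (the prefixes and CFLAGS
below are just placeholders for my setup, not gospel):

  # 32-bit build for the Athlon nodes, run from inside the linux32 chroot
  ./configure --prefix=/usr/local/ompi-athlon CFLAGS="-march=athlon-xp"
  make all install

  # native 64-bit build for the Opteron head
  ./configure --prefix=/usr/local/ompi-opteron
  make all install

Each environment would then point PATH and LD_LIBRARY_PATH at the matching
prefix.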
Eric
On Sunday, July 16, 2006, at 18:21, Brian Barrett wrote:
On Jul 16, 2006, at 4:13 PM, Eric Thibodeau wrote:
Now that I have that out of the way, I'd like to know how I am
supposed to compile my apps so that they can run on a heterogeneous
network with MPI. Here is an example:
kyron@headless ~/1_Files/1_ETS/1_Maitrise/MGL810/Devoir2 $ mpicc -L/usr/ [...]
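To give an idea of the usage I am after (hello.c is just a stand-in for my
actual code):

  mpicc -o hello hello.c
  mpirun --hostfile hostlist -np 4 ./hello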
/me blushes in shame; it would seem that all I needed to do since the beginning
was to run a make distclean. I apparently had some old compiled files lying
around. Now I get:
kyron@headless ~/1_Files/1_ETS/1_Maitrise/MGL810/Devoir2 $ mpirun --hostfile hostlist -np 4 uname -a
Linux headless 2.6.1 [...]
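For reference, my hostfile is just a plain list of nodes, something like the
following (node0 is real, the other names are from memory):

  node0 slots=1
  node1 slots=1
  node2 slots=1
  node3 slots=1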
Brian,
Will do immediately; don't ask why I didn't think of doing this. I have
serious doubts that this would be an Open MPI bug to start with, since this is
a very common platform... But, as I said, this is a rather peculiar
environment, and maybe Open MPI does something unionfs really do [...]
On Jul 15, 2006, at 2:58 PM, Eric Thibodeau wrote:
But, for some reason, on the Athlon nodes (in their image on the
server, I should say) OpenMPI still doesn't seem to be built
correctly, since it crashes as follows:
kyron@node0 ~ $ mpirun -np 1 uptime
Signal:11 info.si_errno:0(Success) si_co [...]
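In case it helps, I can try to get a backtrace out of it, assuming gdb is
available in the node image; something like:

  gdb --args mpirun -np 1 uptime
  (gdb) run
  (gdb) bt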
Hello all,
I've been trying to set up a small test cluster with a dual-Opteron
head node and Athlon compute nodes. My environment in both cases is Gentoo, and
the nodes boot off PXE using an image built and stored on the master node. I
chroot into the node's environment using:
linux32 chroot ${ROOT}
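The full sequence is roughly as follows (the bind mounts are from memory):

  mount -o bind /proc ${ROOT}/proc
  mount -o bind /dev ${ROOT}/dev
  linux32 chroot ${ROOT} /bin/bash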