Thanks, it all makes more sense to me now. I'll try the hard way: multiple builds
for multiple environments ;)
Eric
On Sunday, July 16, 2006, at 18:21, Brian Barrett wrote:
> On Jul 16, 2006, at 4:13 PM, Eric Thibodeau wrote:
> > Now that I have that out of the way, I'd like to know how I am
> > supposed to compile my apps so that they can run on a homogeneous
> > network with MPI.
On Jul 16, 2006, at 4:13 PM, Eric Thibodeau wrote:
Now that I have that out of the way, I'd like to know how I am
supposed to compile my apps so that they can run on a homogeneous
network with MPI. Here is an example:
kyron@headless ~/1_Files/1_ETS/1_Maitrise/MGL810/Devoir2 $ mpicc -L/usr/
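What I am essentially trying to do is something along these lines (hello.c and
hostlist are just placeholders for my real source file and hostfile):
$ mpicc -o hello hello.c                     # let the mpicc wrapper supply the MPI include/library paths
$ mpirun --hostfile hostlist -np 4 ./hello   # launch it across the nodes listed in the hostfile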
/me blushes in shame, it would seem that all I needed to do since the beginning
was to run a make distclean. I apparently had some old compiled files lying
around. Now I get:
kyron@headless ~/1_Files/1_ETS/1_Maitrise/MGL810/Devoir2 $ mpirun --hostfile hostlist -np 4 uname -a
Linux headless 2.6.1
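For reference, the hostfile is nothing fancy, just one host per line with an
optional slot count, roughly like this (the node names and slot counts below
are made up):
$ cat hostlist
node0 slots=2
node1 slots=2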
Brian,
Will do immediately; don't ask why I didn't think of doing this. I have
serious doubts that this would be an openmpi bug to start with, since this is
a very common platform... But, as I said, this is a rather peculiar
environment and maybe openmpi does something unionfs really do
On Jul 14, 2006, at 10:35 AM, Warner Yuen wrote:
I'm having trouble compiling Open MPI on Mac OS X v10.4.6 with the
Intel C compiler. Here are some details:
1) I upgraded to the latest versions of Xcode including GCC 4.0.1
build 5341.
2) I installed the latest Intel update (9.1.027) as w
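A minimal sketch of the kind of configure line involved, assuming the Intel
compilers are installed as icc/icpc/ifort; the install prefix and the Fortran
settings are examples, not requirements:
$ ./configure CC=icc CXX=icpc F77=ifort FC=ifort --prefix=/opt/openmpi-1.1
$ make all install    # build and install with the Intel toolchain instead of the Xcode gcc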
On Jul 15, 2006, at 2:58 PM, Eric Thibodeau wrote:
But, for some reason, on the Athlon node (in their image on the
server, I should say) OpenMPI still doesn't seem to be built
correctly, since it crashes as follows:
kyron@node0 ~ $ mpirun -np 1 uptime
Signal:11 info.si_errno:0(Success) si_co
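A minimal way to dig into a segfault like this, assuming core dumps are
enabled on the node (the executable path below is a placeholder):
$ ulimit -c unlimited            # allow a core file to be written
$ mpirun -np 1 uptime            # reproduce the crash
$ file core                      # 'file' reports which executable dumped the core
$ gdb /path/to/that/executable core
(gdb) bt                         # the backtrace shows where the SIGSEGV happened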
On Jul 16, 2006, at 6:12 AM, Keith Refson wrote:
The compile of openmpi 1.1 was without problems and
appears to have correctly built the GM btl.
$ ompi_info -a | egrep "\bgm\b|_gm_"
MCA mpool: gm (MCA v1.0, API v1.0, Component v1.1)
MCA btl: gm (MCA v1.0, API v1.0
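If it helps, the usual way to inspect the component's parameters and to force
the GM transport explicitly looks something like this (the process count and
application name are placeholders):
$ ompi_info --param btl gm                   # list the gm btl's MCA parameters
$ mpirun -np 2 --mca btl gm,self ./my_app    # restrict the run to the gm and self transports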
I'm trying out openmpi for the first time on
a cluster of dual AMD Opterons with Myrinet
interconnect using GM. There are two outstanding
but possibly connected problems: (a) how to interact
correctly with the LSF job manager and (b) how to
use the GM interconnect.
The compile of openmpi 1.1 was without problems and
appears to have correctly built the GM btl.
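For what it's worth, on the LSF side (as far as I know this version has no
built-in LSF integration) the common workaround is to build a hostfile from
the environment LSF sets up for the job. A rough sketch of a job script, where
my_app, the slot count, and the file names are placeholders:
#!/bin/sh
# submitted with something like: bsub -n 8 ./job.sh
# LSB_HOSTS lists the hosts LSF allocated to the job, one entry per slot
for h in $LSB_HOSTS; do echo "$h"; done > hostfile.$LSB_JOBID
mpirun --hostfile hostfile.$LSB_JOBID -np 8 --mca btl gm,self ./my_app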