I'm losing it today; I just now noticed I sent mx_info for the wrong
nodes ...
// node-1
$ mx_info
MX Version: 1.1.6
MX Build: ggrobe@juggernaut:/home/ggrobe/Tools/mx-1.1.6 Thu Nov 30
14:17:44 GMT 2006
1 Myrinet board installed.
The MX driver is configured to support up to 4 instances and 1024 nodes.
Ah, sorry about that ...
$ ./mx_info
MX Version: 1.1.6
MX Build: ggrobe@juggernaut:/home/ggrobe/Tools/mx-1.1.6 Thu Nov 30
14:17:44 GMT 2006
1 Myrinet board installed.
The MX driver is configured to support up to 4 instances and 1024 nodes.
As for the MTL, there is a bug in the MX
MTL for v1.2 that has been fixed, but after 1.2b2 ...
Oops, I was stupidly assuming he already had that fix. Yes, this is an
important fix...
-reese
Sorry to jump into the discussion late. The mx btl does not support
communication between processes on the same node by itself, so you
have to include the shared memory transport when using MX. This will
eventually be fixed, but likely not for the 1.2 release. So if you do:
mpirun --pr
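The command above is cut off in this excerpt, so as a hedged sketch only (process count and binary name are placeholders, not from the original), the usual way to add the shared-memory and self transports alongside MX looks like:
$ mpirun --mca btl mx,sm,self -np 4 ./my_app
Without sm in the BTL list, ranks that land on the same node have no way to reach each other, which is exactly the limitation described above.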
> I've attached the ompi_info from node-1 and node-2.
Thanks, but I need "mx_info", not "ompi_info" ;-)
But now that you mention mapper, I take it that's what SEGV_MAPERR might
be referring to.
This is an OMPI red herring; it has nothing to do with Myrinet mapping, even
About the -x, I've been trying it both ways and prefer the latter, and
results for either are the same. But its value is correct. I've
attached the ompi_info from node-1 and node-2. Sorry for not zipping
them, but they were small and I think I'd have firewall issues.
$ mpirun --prefix /usr/local
I had configured the hostfile located at
~prefix/etc/openmpi-default-hostfile.
I copied the file to bernie-3, and it worked...
Now, at the Universidad de Los Andes (Venezuela), where I was working
with the cluster, I decided to install MPI on three machines I was able
to put together as a personal project
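For context, a default hostfile for a small setup like this is just one machine per line with an optional slot count; a hypothetical sketch (the slot counts are assumptions, not taken from the thread):
bernie-1 slots=1
bernie-2 slots=1
bernie-3 slots=1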
I'm just curious, maybe I missed something in a past post of this
thread, but ... Are these nodes diskless? If so, then you will have to
make sure that these same paths are exported to the diskless nodes and
handle non-interactive sessions as well as the init shell scripts
properly. It's easiest if
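As a hedged illustration of what exporting those same paths might look like when the Open MPI installation lives on one server (the prefix, server name, and subnet are assumptions):
# /etc/exports on the machine holding the installation
/usr/local   192.168.1.0/24(ro,sync,no_subtree_check)
# matching /etc/fstab entry on each diskless node
server:/usr/local   /usr/local   nfs   ro,defaults   0   0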
Hi, Gary-
This looks like a config problem, and not a code problem yet. Could you send
the output of mx_info from node-1 and from node-2? Also, forgive me
counter-asking a possibly dumb OMPI question, but is "-x LD_LIBRARY_PATH"
really what you want, as opposed to "-x LD
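For reference, the two forms of -x do different things; a quick hedged sketch (paths and binary name are placeholders):
$ mpirun -x LD_LIBRARY_PATH -np 2 ./a.out                  # forward the launching shell's current value
$ mpirun -x LD_LIBRARY_PATH=/usr/local/lib -np 2 ./a.out   # set an explicit value for all ranks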
On 1/2/07, Gurhan Ozen wrote:
On 1/2/07, jcolmena...@ula.ve wrote:
> > First you should make sure that PATH and LD_LIBRARY_PATH are defined
> > in the section of your .bashrc file that get parsed for non
> > interactive sessions. Run "mpirun -np 1 printenv" and check if PATH
> > and LD_LIBRARY_
it is executable
bernie@bernie-1:~/proyecto$ ls -l prueba.bin
-rwxr-xr-x 1 bernie bernie 9619 2007-01-02 12:18 prueba.bin
On 1/2/07, jcolmena...@ula.ve wrote:
> First you should make sure that PATH and LD_LIBRARY_PATH are defined
> in the section of your .bashrc file that get parsed for non
> interactive sessions. Run "mpirun -np 1 printenv" and check if PATH
> and LD_LIBRARY_PATH have the values you expect.
in fa
> First you should make sure that PATH and LD_LIBRARY_PATH are defined
> in the section of your .bashrc file that get parsed for non
> interactive sessions. Run "mpirun -np 1 printenv" and check if PATH
> and LD_LIBRARY_PATH have the values you expect.
in fact they do:
bernie@bernie-1:~/proyecto$
I was initially using 1.1.2 and moved to 1.2b2 because of a hang in
MPI_Bcast(), which 1.2b2 claims to fix, and it seems to have done so. My
compute nodes are two dual-core Xeons on Myrinet with MX. The problem is
trying to get ompi running on mx only. My machine file is as follows ...
node-1 slots=4
First you should make sure that PATH and LD_LIBRARY_PATH are defined
in the section of your .bashrc file that get parsed for non
interactive sessions. Run "mpirun -np 1 printenv" and check if PATH
and LD_LIBRARY_PATH have the values you expect.
For your second question you should give the p
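A hedged sketch of what defining those variables in the non-interactive part of a stock Ubuntu ~/.bashrc looks like (the /usr/local prefix is an assumption):
# exports must come ABOVE the early return for non-interactive shells
export PATH=/usr/local/bin:$PATH
export LD_LIBRARY_PATH=/usr/local/lib:$LD_LIBRARY_PATH
# Ubuntu's default ~/.bashrc stops here for non-interactive shells:
[ -z "$PS1" ] && return
Then check from the head node that the values really make it across:
$ mpirun -np 1 printenv | grep PATH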
Jeff,
Thanks for the reply, that has fixed the problem. The code in
question appears to have only been run with MPICH and MPICH
derivatives in the past.
Brock Palen
Center for Advanced Computing
bro...@umich.edu
(734)936-1985
On Jan 2, 2007, at 9:56 AM, Jeff Squyres wrote:
Brock --
I
I installed openmpi 1.1.2 on two 686 boxes running Ubuntu 6.10.
Followed the instructions given in the FAQ. Nevertheless, I get the
following message:
[bernie-1:05053] ERROR: A daemon on node 192.168.1.113 failed to start as
expected.
[bernie-1:05053] ERROR: There may be more information available
Yikes - that's not a good error. :-(
We don't regularly build / test on AIX, so I don't have much
immediate guidance for you. My best suggestion at this point would
be to try the latest 1.2 beta or nightly snapshot. We did an update
of the event engine (the portion of the code that you'r
Brock --
I think your test program is faulty. For MPI_CART_CREATE, you need
to pass in an array indicating whether the dimensions are periodic or
not -- it is not sufficient to pass in a scalar logical value.
For example, the following program seems to work fine for me:
program cart
inclu
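Jeff's full program is cut off in this excerpt; a minimal sketch of the kind of corrected call (the names and dimensions here are mine, not his) would be:
program cart_example
  implicit none
  include 'mpif.h'
  integer :: ierr, comm_cart
  integer :: dims(2)
  logical :: periods(2), reorder      ! an ARRAY of logicals, one entry per dimension
  dims    = (/ 2, 2 /)
  periods = (/ .false., .false. /)
  reorder = .true.
  call MPI_INIT(ierr)
  call MPI_CART_CREATE(MPI_COMM_WORLD, 2, dims, periods, reorder, comm_cart, ierr)
  call MPI_FINALIZE(ierr)
end program cart_example
Run it with at least 4 processes (mpirun -np 4) so the 2x2 grid can be populated.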
Welcome back from the holidays! I'll try to catch up on the right-
before-the-holidays e-mail today...
On Dec 21, 2006, at 6:07 PM, Dennis McRitchie wrote:
I am trying to build openmpi so that mpicc does not require me to
set up
the compiler's environment, and so that any executables b
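Jeff's reply is cut off above; one common way to get this kind of behavior (my assumption, not necessarily what was recommended in the thread) is to pin the compilers and bake an rpath into the wrapper compilers at configure time, for example:
$ ./configure --prefix=/usr/local/openmpi CC=gcc FC=gfortran \
      --with-wrapper-ldflags="-Wl,-rpath,/usr/local/openmpi/lib"
With the rpath in place, executables built by mpicc can find the Open MPI libraries at run time without LD_LIBRARY_PATH being set.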