I believe this -should- work, but can't verify it myself. The most important
thing is to be sure you built with --enable-heterogeneous or else it will
definitely fail.
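For reference, a heterogeneous-capable build is configured along these lines
(the install prefix is only an example):
./configure --prefix=/opt/openmpi-1.2 --enable-heterogeneous
make all install
An existing installation can be checked with something like
ompi_info | grep -i hetero
although I am not certain every 1.2.x ompi_info prints a heterogeneous-support
line, so the original configure output is the more reliable check.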
Ralph
On 4/10/08 7:17 AM, "Rolf Vandevaart" wrote:
>
> On a CentOS Linux box, I see the following:
>
>> grep 113 /usr/include/asm-i386/errno.h
Hi Jody
Simple answer - the 1.2.x series does not support multiple hostfiles. I
believe you will find that documented in the FAQ section.
What you have to do here is have -one- hostfile that includes all the hosts,
and then use -host with each app-context to indicate which of those hosts are
to be used for it.
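Something along these lines should work (the host names are just the ones from
later in this thread, and the executable names are placeholders): put both
machines into a single hostfile, say "myhosts",
aim-plankton
aim-fanta4
and then select hosts per app-context on the command line:
mpirun --hostfile myhosts -np 3 -host aim-plankton ./my_app_32 : -np 3 -host aim-fanta4 ./my_app_64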
On a CentOS Linux box, I see the following:
> grep 113 /usr/include/asm-i386/errno.h
#define EHOSTUNREACH 113 /* No route to host */
I have also seen folks do this to figure out the errno.
> perl -e 'die$!=113'
No route to host at -e line 1.
I am not sure why this is happening, but you
Rolf,
I was able to run hostname on the two nodes that way,
and a simplified version of my test program (without a barrier)
also works. Only MPI_Barrier shows bad behaviour.
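That is, something of roughly this form (the exact command line may have differed):
mpirun -np 3 --host aim-plankton hostname : -np 3 --host aim-fanta4 hostname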
Do you know what this message means?
[aim-plankton][0,1,2][btl_tcp_endpoint.c:572:mca_btl_tcp_endpoint_complete_connect]
conne
This worked for me although I am not sure how extensive our 32/64
interoperability support is. I tested on Solaris using the TCP
interconnect and a 1.2.5 version of Open MPI. Also, we configure with
the --enable-heterogeneous flag, which may make a difference here. Also
this did not work fo
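For reference, a run can be pinned to the TCP interconnect explicitly with
something like
mpirun --mca btl tcp,self -np 2 --host host_a,host_b ./a.out
where the host names and the binary are only placeholders.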
I narrowed it down:
The majority of processes get stuck in MPI_Barrier.
My test application looks like this:
#include <stdio.h>
#include <unistd.h>
#include "mpi.h"
int main(int iArgC, char *apArgV[]) {
    int iResult = 0;
    int iRank1;
    int iNum1;
    char sName[256];
    gethostname(sName, 255);
    /* the post was truncated after "MPI_I"; the calls below are a reconstruction
       based on the description above (rank/size lookup, then the barrier that hangs) */
    MPI_Init(&iArgC, &apArgV);
    MPI_Comm_rank(MPI_COMM_WORLD, &iRank1);
    MPI_Comm_size(MPI_COMM_WORLD, &iNum1);
    printf("%s: rank %d of %d before barrier\n", sName, iRank1, iNum1);
    MPI_Barrier(MPI_COMM_WORLD);
    printf("%s: rank %d of %d after barrier\n", sName, iRank1, iNum1);
    MPI_Finalize();
    return iResult;
}
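The program is compiled and started in the usual way, for example (the binary
names are placeholders, and each side uses the binary built against its own
Open MPI installation):
mpicc -o MPITest MPITest.c
mpirun -np 3 --host aim-plankton ./MPITest : -np 3 --host aim-fanta4 ./MPITest_64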
Hi
Using a more realistic application than a simple "Hello, world",
even the --host version doesn't work correctly.
Called this way:
mpirun -np 3 --host aim-plankton ./QHGLauncher
--read-config=pureveg_new.cfg -o output.txt : -np 3 --host aim-fanta4
./QHGLauncher_64 --read-config=pureveg_new.cfg -o