Hi,
On 09.04.2008 at 22:17, Danesh Daroui wrote:
Mark Kosmowski wrote:
Danesh:
Have you tried "mpirun -np 4 --hostfile hosts hostname" to verify
that ompi is working?
When I run "mpirun -np 4 --hostfile hosts hostname" the same thing
happens and it just hangs. Could that be a clue?
Can you re
Hi,
In my network I have some 32-bit machines and some 64-bit machines.
With --host I successfully call my application:
mpirun -np 3 --host aim-plankton -x DISPLAY ./run_gdb.sh ./MPITest :
-np 3 --host aim-fanta4 -x DISPLAY ./run_gdb.sh ./MPITest64
(MPITest64 has the same code as MPITest, but was
Hi
Using a more realistic application than a simple "Hello, world",
even the --host version doesn't work correctly.
Called this way:
mpirun -np 3 --host aim-plankton ./QHGLauncher
--read-config=pureveg_new.cfg -o output.txt : -np 3 --host aim-fanta4
./QHGLauncher_64 --read-config=pureveg_new.cfg -o
I narrowed it down:
The majority of processes get stuck in MPI_Barrier.
My Test application looks like this:
#include <stdio.h>
#include <unistd.h>
#include "mpi.h"
int main(int iArgC, char *apArgV[]) {
int iResult = 0;
int iRank1;
int iNum1;
char sName[256];
gethostname(sName, 255);
MPI_I
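The listing above is cut off at "MPI_I". A hedged reconstruction of how such a barrier test presumably continues (my sketch, not the poster's exact code; the printf messages are invented for illustration):

```c
#include <stdio.h>
#include <unistd.h>
#include "mpi.h"

int main(int iArgC, char *apArgV[]) {
    int iResult = 0;
    int iRank1;
    int iNum1;
    char sName[256];

    gethostname(sName, 255);

    MPI_Init(&iArgC, &apArgV);
    MPI_Comm_rank(MPI_COMM_WORLD, &iRank1);
    MPI_Comm_size(MPI_COMM_WORLD, &iNum1);

    printf("%s: rank %d of %d before barrier\n", sName, iRank1, iNum1);
    MPI_Barrier(MPI_COMM_WORLD);   /* the reported hang happens here */
    printf("%s: rank %d after barrier\n", sName, iRank1);

    MPI_Finalize();
    return iResult;
}
```

Built with mpicc and launched under mpirun, ranks hitting the TCP connection problem would print the "before barrier" line and then sit in MPI_Barrier indefinitely.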
This worked for me, although I am not sure how extensive our 32/64
interoperability support is. I tested on Solaris using the TCP
interconnect and a 1.2.5 version of Open MPI. Also, we configure with
the --enable-heterogeneous flag, which may make a difference here. Also
this did not work fo
Rolf,
I was able to run hostname on the two nodes that way,
and also a simplified version of my test program (without a barrier)
works. Only MPI_Barrier shows bad behaviour.
Do you know what this message means?
[aim-plankton][0,1,2][btl_tcp_endpoint.c:572:mca_btl_tcp_endpoint_complete_connect]
conne
On a CentOS Linux box, I see the following:
> grep 113 /usr/include/asm-i386/errno.h
#define EHOSTUNREACH 113 /* No route to host */
I have also seen folks do this to figure out the errno.
> perl -e 'die$!=113'
No route to host at -e line 1.
I am not sure why this is happening, but you
Hi,
I found an archived email with the same basic error I am running into
for Open MPI 1.2.6; unfortunately, other than the question and a request
for the output, there was no email response on how it was solved.
The error:
../../../opal/.libs/libopen-pal.so: undefined reference to
`lt_libltdlc_LTX
Hi,
If I configure Open MPI with the "--enable-mpi-profile" option:
1) Once the build is complete, how do I specify the profile name and
location in the "mpirun" command? Do I have to set any flags with the
"mpirun" command to view the profile?
2) If VampirTrace by default is built with openmp
Thanks for reporting the bug; it is fixed on the trunk. The problem was
this time not in the algorithm, but in the checking of the
preconditions. If recvcount was zero and the rank not equal to the rank
of the root, then we did not even start the scatter, assuming that there
was nothing to do.
I think you're expecting something that the MPI profiling interface is
not supposed to provide. There is no tool to dump or print any
profile information by default (and it is not mandated by the
standard). What this option does is compile the profiling interface
(as defined by the MPI st
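For reference, what the profiling interface does make possible is interposing your own wrappers via the name-shifted PMPI_ entry points. A hedged sketch of such a wrapper (the standard PMPI technique; the logging is my own illustration, not anything Open MPI generates for you):

```c
#include <stdio.h>
#include "mpi.h"

/* Intercept MPI_Barrier: log entry and exit, then forward to the real
 * implementation through its name-shifted PMPI_ entry point. Link this
 * into (or ahead of) the application; no external tool is involved. */
int MPI_Barrier(MPI_Comm comm)
{
    int rank, ret;
    PMPI_Comm_rank(comm, &rank);
    fprintf(stderr, "rank %d entering MPI_Barrier\n", rank);
    ret = PMPI_Barrier(comm);
    fprintf(stderr, "rank %d leaving MPI_Barrier\n", rank);
    return ret;
}
```

Tools such as VampirTrace are built on exactly this mechanism, which is why the standard does not need to mandate any particular output format.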
Edgar --
Can you file a CMR for v1.2?
On Apr 10, 2008, at 8:10 AM, Edgar Gabriel wrote:
thanks for reporting the bug, it is fixed on the trunk. The problem
was
this time not in the algorithm, but in the checking of the
preconditions. If recvcount was zero and the rank not equal to the
rank
done...
Jeff Squyres wrote:
Edgar --
Can you file a CMR for v1.2?
On Apr 10, 2008, at 8:10 AM, Edgar Gabriel wrote:
thanks for reporting the bug, it is fixed on the trunk. The problem
was
this time not in the algorithm, but in the checking of the
preconditions. If recvcount was zero and the
Well, as a quick hack, you can try adding --disable-dlopen to the
configure line. It will disable the building of individual components
(instead linking them into the main shared libraries). It means that you
have to be slightly more careful about which components you build, but in
practice u
But if Open MPI is installed, I can automatically instrument my
application with Vampir (i.e., I don't have to install vtf separately,
right?)
And I can view the results of my app's parallel run with VampirTrace?
-Original Message-
From: George Bosilca [mailto:bosi...@eecs.utk.edu]
Sent:
Thanks Reuti. It works now. I just disabled the firewall on all machines
since Open MPI uses random ports each time.
Thanks again!
Danesh
Reuti skrev:
Hi,
On 09.04.2008 at 22:17, Danesh Daroui wrote:
Mark Kosmowski wrote:
Danesh:
Have you tried "mpirun -np 4 --hostfile hosts hostnam
Hi all,
I have a cluster with Torque and PVFS. I'm trying to test my
environment with MPI-IO Test, but some segfaults are occurring.
Does anyone know what is happening? The error output is below:
Rank 1 Host campogrande03.dcc.ufrj.br WARNING ERROR 1207853304: 1 bad
bytes at file offset 0. Expecte
Thanks to those who answered my post in the past. I have to admit that you lost
me about half way through the thread.
I was able to get 2 of my systems cranked up and was about to put a third
system online when I remembered it was running the x64 version of the OS.
Can I just recompile the code on the x
Open MPI can manage heterogeneous systems, though you may prefer to avoid
this because it has a performance penalty. I suggest you compile on
the 32-bit machine and use the same version everywhere.
Aurelien
On 10 Apr 08 at 18:09, clark...@clarktx.com wrote:
Thanks to those who answered my post i
Thanks for the information. I'll try it out.
>Open MPI can manage heterogeneous systems, though you may prefer to avoid
>this because it has a performance penalty. I suggest you compile on
>the 32-bit machine and use the same version everywhere.
Aurelien
On 10 Apr 08 at 18:09, clarkmpi_at_[hidden] wrote: