On Apr 16, 2006, at 2:10 PM, Lee D. Peterson wrote:
The firewires are all inactive. The only difference from ipconfig
is that the cluster head has a ppp0 defined because of a VPN I'm
using. I tried -mac btl_tcp_if_exclude ppp0, but that didn't work.
So I logged off the VPN, and now ifconfig
Brian,
The firewires are all inactive. The only difference from ipconfig is
that the cluster head has a ppp0 defined because of a VPN I'm using.
I tried -mac btl_tcp_if_exclude ppp0, but that didn't work.
So I logged off the VPN, and now ifconfig -a on the cluster head does
not show the p
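For reference, the exclude form of that setting usually looks like the
following sketch (host file and program names here are placeholders, not
taken from this thread). Note that the option prefix is "-mca", and that
setting btl_tcp_if_exclude replaces Open MPI's default exclusion of the
loopback interface, so the loopback (lo0 on Mac OS X) normally has to be
listed again:

  mpirun -np 4 --hostfile hosts \
      -mca btl tcp,self \
      -mca btl_tcp_if_exclude lo0,ppp0 \
      ./my_mpi_program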
On Apr 16, 2006, at 1:29 PM, Lee D. Peterson wrote:
Thanks for your help. The hanging problem came back again a day ago.
However, I can now run only if I use either "-mca btl_tcp_if_include
en0" or "-mca btl_tcp_if_include en1". Using btl_tcp_if_exclude on
either en0 or en1 doesn't work.
That'
Brian,
Thanks for your help. The hanging problem came back again a day ago.
However, I can now run only if I use either "-mca btl_tcp_if_include
en0" or "-mca btl_tcp_if_include en1". Using btl_tcp_if_exclude on
either en0 or en1 doesn't work.
Regarding the TCP performance, I ran the HPL
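For comparison, a minimal sketch of the include form described above (host
file and executable names are again placeholders):

  mpirun -np 4 --hostfile hosts -mca btl_tcp_if_include en0 ./my_mpi_program

btl_tcp_if_include and btl_tcp_if_exclude are mutually exclusive, so only
one of the two should be set for any given run.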
On Apr 16, 2006, at 11:52 AM, Shekhar Tyagi wrote:
I am new to MPI and parallel programming; recently I made two
programs, one in C and the other in C++. The cluster on which I work
is able to compile and execute the C program, but it's not able to
make an executable file for the C++ program.
The comma
Hi all
I am new to MPI and parallel programming; recently I made two programs, one in C
and the other in C++. The cluster on which I work is able to compile and execute
the C program, but it's not able to make an executable file for the C++ program.
The command I am using is mpiCC for the C++ program, but it look
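For anyone hitting the same thing, here is a minimal C++ test case (an
illustrative sketch, not Shekhar's program) that uses only the C-level MPI
calls, so it does not depend on Open MPI's optional C++ bindings:

  // hello.cpp -- minimal MPI test program (illustrative sketch only)
  #include <mpi.h>
  #include <iostream>

  int main(int argc, char* argv[]) {
      MPI_Init(&argc, &argv);               // start the MPI runtime
      int rank = 0, size = 0;
      MPI_Comm_rank(MPI_COMM_WORLD, &rank); // rank of this process
      MPI_Comm_size(MPI_COMM_WORLD, &size); // total number of processes
      std::cout << "Hello from rank " << rank << " of " << size << std::endl;
      MPI_Finalize();                       // shut down MPI cleanly
      return 0;
  }

Compile and run it with the wrapper compiler (file names are placeholders):

  mpiCC hello.cpp -o hello
  mpirun -np 2 ./hello

If even this fails to produce an executable, the problem is most likely in
the Open MPI installation (for example, a build configured without a C++
compiler) rather than in the program being compiled.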
On Apr 16, 2006, at 11:32 AM, Sang Chul Choi wrote:
As another similar question about installation,
I think that installation of Open MPI should be done on the
master and all slave nodes. A program which uses MPI features
also seems to have to be installed on the master and all slave nodes
unle
As another similar question about installation,
I think that installation of Open MPI should be done on the
master and all slave nodes. A program which uses MPI features
also seems to have to be installed on the master and all slave nodes
unless I use NFS. My question is:
if I used OpenPBS so
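As a concrete illustration of that point (all file and host names below are
placeholders): with a host file such as

  node01 slots=2
  node02 slots=2

and a launch like

  mpirun -np 4 --hostfile hosts /home/user/bin/my_mpi_program

the path /home/user/bin/my_mpi_program has to resolve on every listed node,
either because that directory is NFS-mounted everywhere or because the
binary has been copied onto each node; mpirun does not copy the executable
for you.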
On Apr 14, 2006, at 9:33 AM, Lee D. Peterson wrote:
This problem went away yesterday. There was no intervening reboot of
my cluster or a recompile of the code. So all I can surmise is
something got cleaned up in a cron script. Weird.
Very strange. Could there have been a networking issue (swi
Thank you, Brian:
It worked.
Thank you again.
On 4/16/06, Brian Barrett wrote:
>
> On Apr 16, 2006, at 8:18 AM, Sang Chul Choi wrote:
>
> > 1. I could not find any documentation except the FAQ and the mailing list
> > for Open MPI. Is there any user manual or something like that?
> > Or, can the LAM/MPI manu
On Apr 16, 2006, at 8:18 AM, Sang Chul Choi wrote:
1. I could not find any documentation except the FAQ and the mailing list
for Open MPI. Is there any user manual or something like that?
Or, can the LAM/MPI manual be used instead?
Unfortunately, at this time, the only documentation available for
Open
Hi, I know that I have to include some information to get the
right answer, but I have one simple question which does not
need that information.
1. I could not find any documentation except the FAQ and the mailing list
for Open MPI. Is there any user manual or something like that?
Or, can the LAM/MPI manual