Re: [OMPI users] Open MPI and OpenIB

2006-05-11 Thread Brian Barrett

On May 10, 2006, at 10:46 PM, Gurhan Ozen wrote:

> My ultimate goal is to get Open MPI working with the OpenIB stack. First, I
> had installed LAM/MPI; I know it doesn't have support for OpenIB, but it's
> still relevant to some of the questions I will ask. Here is the setup I have:


Yes, keep in mind throughout that while Open MPI does support MVAPI,  
LAM/MPI will fall back to using IP over IB for communication.


> I have two machines, pe830-01 and pe830-02. Both have an ethernet interface
> and an HCA interface. The IP addresses are:
>
>                eth0          ib0
>   pe830-01     10.12.4.32    192.168.1.32
>   pe830-02     10.12.4.34    192.168.1.34

> So this has worked even though the lamhosts file is configured to use the
> ib0 interfaces. I further verified with tcpdump that none of this traffic
> went to eth0.
>
> Anyhow, if I change the lamhosts file to use the eth0 IPs, things work just
> the same with no issues, and in that case I see some traffic on eth0 with
> tcpdump.


Ok, so at least it sounds like your TCP network is sanely configured.


> Now, when I installed and used Open MPI, things didn't work as easily. Here
> is what happens. After recompiling the sources with the mpicc that comes
> with Open MPI:
>
> $ /usr/local/openmpi/bin/mpirun --prefix /usr/local/openmpi --mca
>   pls_rsh_agent ssh --mca btl tcp -np 2 --host 10.12.4.34,10.12.4.32
>   /path/to/hello_world
> Hello, world, I am 0 of 2 and this is on: pe830-02.
> Hello, world, I am 1 of 2 and this is on: pe830-01.
>
> So far so good; using the eth0 interfaces, hello_world works just fine.
> Now, when I try the broadcast program:


In reality, you always need to include two BTLs when specifying them
explicitly: the one you want to use (mvapi, openib, tcp, etc.) and
"self".  You can run into issues otherwise.

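For example, your TCP run above would look something like this (same prefix,
hosts, and program as in your command, just with "self" added to the BTL list):

$ /usr/local/openmpi/bin/mpirun --prefix /usr/local/openmpi --mca pls_rsh_agent ssh \
    --mca btl tcp,self -np 2 --host 10.12.4.34,10.12.4.32 /path/to/hello_world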


> $ /usr/local/openmpi/bin/mpirun --prefix /usr/local/openmpi --mca
>   pls_rsh_agent ssh --mca btl tcp -np 2 --host 10.12.4.34,10.12.4.32
>   /path/to/broadcast
>
> It just hangs there; it doesn't prompt me with the "Enter the vector
> length:" string. So I just enter a number anyway, since I know the behavior
> of the program:
>
> 10
> Enter the vector length: i am: 0 , and i have 5 vector elements
> i am: 1 , and i have 5 vector elements
> [0] 10.00
> [0] 10.00
> [0] 10.00
> [0] 10.00
> [0] 10.00
> [0] 10.00
> [0] 10.00
> [0] 10.00
> [0] 10.00
> [0] 10.00
>
> So, that's the first bump with Open MPI. Now, if I try to use the ib0
> interfaces instead of the eth0 ones, I get:


I'm actually surprised this worked in LAM/MPI, to be honest.  There  
should be an fflush() after the printf() to make sure that the output  
is actually sent out of the application.
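
Just as a sketch of what I mean (I'm guessing at the shape of your broadcast
program here, so treat this as illustrative only):

#include <stdio.h>
#include <mpi.h>

int main(int argc, char **argv)
{
    int rank, size, len = 0;

    MPI_Init(&argc, &argv);
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);
    MPI_Comm_size(MPI_COMM_WORLD, &size);

    if (rank == 0) {
        printf("Enter the vector length: ");
        fflush(stdout);        /* push the prompt out before blocking on input */
        scanf("%d", &len);
    }

    /* everyone learns the length from rank 0 */
    MPI_Bcast(&len, 1, MPI_INT, 0, MPI_COMM_WORLD);

    printf("i am: %d , and i have %d vector elements\n", rank, len / size);
    fflush(stdout);            /* make sure the output actually leaves the process */

    MPI_Finalize();
    return 0;
}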



> $ /usr/local/openmpi/bin/mpirun --prefix /usr/local/openmpi --mca
>   pls_rsh_agent ssh --mca btl openib -np 2 --host 192.168.1.34,192.168.1.32
>   /path/to/hello_world
>
> --------------------------------------------------------------------------
> No available btl components were found!
>
> This means that there are no components of this type installed on your
> system or all the components reported that they could not be used.
>
> This is a fatal error; your MPI process is likely to abort.  Check the
> output of the "ompi_info" command and ensure that components of this
> type are available on your system.  You may also wish to check the
> value of the "component_path" MCA parameter and ensure that it has at
> least one directory that contains valid MCA components.
> --------------------------------------------------------------------------
> [pe830-01.domain.com:05942]
>
> I know, it thinks that it doesn't have the openib component installed;
> however, ompi_info on both machines says otherwise:
>
> $ ompi_info | grep openib
>     MCA mpool: openib (MCA v1.0, API v1.0, Component v1.0.2)
>       MCA btl: openib (MCA v1.0, API v1.0, Component v1.0.2)


I don't think it will help, but can you try again with --mca btl openib,self?
For some reason, it appears that the openib component is saying that it
can't run.
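
In other words, something along these lines (your command from above, with
only the btl list changed):

$ /usr/local/openmpi/bin/mpirun --prefix /usr/local/openmpi --mca pls_rsh_agent ssh \
    --mca btl openib,self -np 2 --host 192.168.1.34,192.168.1.32 /path/to/hello_world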



> Now the questions are...
> 1 - In the case of using LAM/MPI over the ib0 interfaces, does LAM/MPI
>     automatically just use IPoIB?


Yes, LAM has no idea what that Open IB thing is -- it just uses the  
ethernet device.


> 2 - Is there a tcpdump-like utility to dump the traffic on InfiniBand
>     HCAs?


I'm not aware of any, but one may well exist.

> 3 - In the case of Open MPI, does the --mca btl arg option have to be
>     passed every time? For example,
>
> $ /usr/local/openmpi/bin/mpirun --prefix /usr/local/openmpi --mca
>   pls_rsh_agent ssh --mca btl tcp -np 2 --host 10.12.4.34,10.12.4.32
>   /path/to/hello_world
>
> works just fine, but the same command without the "--mca btl tcp" bit
> gives the following:
>
> --------------------------------------------------------------------------
> It looks like MPI_INIT failed for some re

Re: [OMPI users] Open MPI and OpenIB

2006-05-11 Thread Gurhan Ozen

Brian,
Thanks for the very clear answers.

I did change my code to include fflush() calls after printf() ...

And I did try with --mca btl ib,self. Interesting result: with --mca
btl ib,self, hello_world works fine, but broadcast hangs after I
enter the vector length.

At any rate, with --mca btl ib,self it looks like the traffic goes over
the ethernet device. I couldn't find any documentation on the "self"
argument of mca; does it mean to explore alternatives if the desired
btl (in this case ib) doesn't work?

Speaking of documentation, it looks like open-mpi didn't come with a
man page for mpirun. I thought I had seen in one of the slides of the
Open MPI developer's workshop that it did have mpirun.1. Do I need to
check it out from svn?

No, I don't have any application to run other than what I might run;
this is all for testing purposes.

Thanks,
Gurhan


Re: [OMPI users] Open MPI and OpenIB

2006-05-11 Thread Brian Barrett

On May 11, 2006, at 10:10 PM, Gurhan Ozen wrote:


> Brian,
> Thanks for the very clear answers.
>
> I did change my code to include fflush() calls after printf() ...
>
> And I did try with --mca btl ib,self. Interesting result: with --mca
> btl ib,self, hello_world works fine, but broadcast hangs after I
> enter the vector length.
>
> At any rate, with --mca btl ib,self it looks like the traffic goes over
> the ethernet device. I couldn't find any documentation on the "self"
> argument of mca; does it mean to explore alternatives if the desired
> btl (in this case ib) doesn't work?


No, self is the loopback device, for sending messages to self.  It is  
never used for message routing outside of the current process, but is  
required for almost all transports, as send to self can be a sticky  
issue.
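
For instance (a minimal sketch, not related to your test programs), even a
trivial exchange where a rank talks to its own rank has to go through the
"self" BTL rather than tcp/openib/mvapi:

#include <stdio.h>
#include <mpi.h>

int main(int argc, char **argv)
{
    int rank, in = -1, out;

    MPI_Init(&argc, &argv);
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);
    out = rank * 10;

    /* send-to-self: this traffic never leaves the process */
    MPI_Sendrecv(&out, 1, MPI_INT, rank, 0,
                 &in,  1, MPI_INT, rank, 0,
                 MPI_COMM_WORLD, MPI_STATUS_IGNORE);

    printf("rank %d received %d from itself\n", rank, in);
    fflush(stdout);
    MPI_Finalize();
    return 0;
}

Point-to-point and collective code paths can both end up doing this sort of
self-send internally, which is why the BTL list needs "self" even when every
visible message goes to another node.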


You are specifying openib, not ib, as the argument to mpirun,  
correct?  Either way, I'm not really sure how data could be going  
over TCP -- the TCP transport would definitely be disabled in that  
case.  At this point, I don't know enough about the Open IB driver to  
be of help -- one of the other developers is going to have to jump in  
and provide assistance.



> Speaking of documentation, it looks like open-mpi didn't come with a
> man page for mpirun. I thought I had seen in one of the slides of the
> Open MPI developer's workshop that it did have mpirun.1. Do I need to
> check it out from svn?


That's one option, or wait for us to release Open MPI 1.0.3 / 1.1.

Brian



Re: [OMPI users] Open MPI and OpenIB

2006-05-11 Thread Gurhan Ozen

Dagnabbit.. I was specifying ib, not openib. When I specified openib,
I got this error:

"
--------------------------------------------------------------------------
It looks like MPI_INIT failed for some reason; your parallel process is
likely to abort.  There are many reasons that a parallel process can
fail during MPI_INIT; some of which are due to configuration or environment
problems.  This failure appears to be an internal failure; here's some
additional information (which may only be relevant to an Open MPI
developer):

  PML add procs failed
  --> Returned value -2 instead of OMPI_SUCCESS
--------------------------------------------------------------------------
*** An error occurred in MPI_Init
*** before MPI was initialized
*** MPI_ERRORS_ARE_FATAL (goodbye)
"

I can run it with openib,self locally, even with multiple processes (-np
greater than one). But once the other node is in the picture, I get this
error. Hmm, does the error message help to troubleshoot?

Thanks,
gurhan

Re: [OMPI users] Open MPI and OpenIB

2006-05-11 Thread George Bosilca

This message indicates that one of the nodes is not able to set up a
route to the peer using the openib device. Did you run any openib
tests on your cluster? I mean any tests which do not involve MPI?


Otherwise, if you compiled in debug mode, there are 2 parameters you can
use to get more information out of the system. You should use "--mca
btl_base_debug 1" and "--mca btl_base_verbose 100". If you don't have
a debug-mode Open MPI, it may happen that nothing will be printed.


Personally I would do these 2 things before anything else:
1. make sure that all (or some) of the openib basic tests succeed on
   your cluster.

2. use these 2 mca parameters to get more information from the system,
   for example as shown below.
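
Something along these lines, reusing the mpirun command from earlier in the
thread (the prefix, hosts and program are just the ones you already used):

$ /usr/local/openmpi/bin/mpirun --prefix /usr/local/openmpi --mca pls_rsh_agent ssh \
    --mca btl openib,self --mca btl_base_debug 1 --mca btl_base_verbose 100 \
    -np 2 --host 192.168.1.34,192.168.1.32 /path/to/hello_world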

  Thanks,
george.


[OMPI users] ParaView runtime problem with openmpi 1.0.2

2006-05-11 Thread W. Bryan Smith

Hi,

I have compiled a program called ParaView (paraview.org) with MPI support
using Open MPI 1.0.2, and when I try to run the paraview executable using

mpiexec -n 4 paraview

or

mpirun -np 4 paraview

instead of having one paraview window open with parallel support, there are
4 paraview windows opened, none of which are running with parallel support.
Attached are the ompi_info and config.log files.  Below is the text of the
cmake call I used to configure paraview:

cmake -DVTK_USE_MPI:BOOL=ON
-DMPI_INCLUDE_PATH:PATH=/local2/openmpi1.0.2/include/
-DVTK_MPIRUN_EXE:FILEPATH=/local2/openmpi1.0.2/bin/mpirun
-DMPI_LIBRARY:FILEPATH=/local2/openmpi1.0.2/lib/libmpicxx.la
/local2/paraview-2.4.3/

I also edited the ParaView CMakeLists.txt file to contain:
SET(CMAKE_C_COMPILER mpicc)
SET(CMAKE_CXX_COMPILER mpicxx)

Both compiler wrappers are at the top of my PATH.  Also, as far as PATH
goes, yes, I am certain that the mpiexec and paraview binaries are the ones
I think they are (i.e. when I run "which mpiexec" it only shows the one I
compiled locally, etc).

Does anyone have any insight on this?  For the record, when I compile
paraview with MPI support using mpich2 (1.0.3), and then do mpiexec calling
that version of paraview, I get the expected behavior (i.e. one paraview
window running with parallel support).

Thanks in advance,
Bryan Smith

Open MPI: 1.0.2
   Open MPI SVN revision: r9571
Open RTE: 1.0.2
   Open RTE SVN revision: r9571
OPAL: 1.0.2
   OPAL SVN revision: r9571
  Prefix: /local2/openmpi1.0.2/
 Configured architecture: x86_64-unknown-linux-gnu
   Configured by: bryan
   Configured on: Thu May 11 10:57:02 PDT 2006
  Configure host: iridium
Built by: bryan
Built on: Thu May 11 11:08:44 PDT 2006
  Built host: iridium
  C bindings: yes
C++ bindings: yes
  Fortran77 bindings: no
  Fortran90 bindings: no
  C compiler: gcc
 C compiler absolute: /usr/bin/gcc
C++ compiler: g++
   C++ compiler absolute: /usr/bin/g++
  Fortran77 compiler: none
  Fortran77 compiler abs: none
  Fortran90 compiler: none
  Fortran90 compiler abs: none
 C profiling: yes
   C++ profiling: yes
 Fortran77 profiling: no
 Fortran90 profiling: no
  C++ exceptions: no
  Thread support: posix (mpi: no, progress: no)
  Internal debug support: no
 MPI parameter check: runtime
Memory profiling support: no
Memory debugging support: no
 libltdl support: 1
  MCA memory: malloc_hooks (MCA v1.0, API v1.0, Component v1.0.2)
   MCA paffinity: linux (MCA v1.0, API v1.0, Component v1.0.2)
   MCA maffinity: first_use (MCA v1.0, API v1.0, Component v1.0.2)
   MCA maffinity: libnuma (MCA v1.0, API v1.0, Component v1.0.2)
   MCA timer: linux (MCA v1.0, API v1.0, Component v1.0.2)
   MCA allocator: basic (MCA v1.0, API v1.0, Component v1.0)
   MCA allocator: bucket (MCA v1.0, API v1.0, Component v1.0)
MCA coll: basic (MCA v1.0, API v1.0, Component v1.0.2)
MCA coll: self (MCA v1.0, API v1.0, Component v1.0.2)
MCA coll: sm (MCA v1.0, API v1.0, Component v1.0.2)
  MCA io: romio (MCA v1.0, API v1.0, Component v1.0.2)
   MCA mpool: sm (MCA v1.0, API v1.0, Component v1.0.2)
 MCA pml: ob1 (MCA v1.0, API v1.0, Component v1.0.2)
 MCA pml: teg (MCA v1.0, API v1.0, Component v1.0.2)
 MCA bml: r2 (MCA v1.0, API v1.0, Component v1.0.2)
 MCA bml: r2 (MCA v1.0, API v1.0, Component v1.0.2)
 MCA ptl: self (MCA v1.0, API v1.0, Component v1.0.2)
 MCA ptl: sm (MCA v1.0, API v1.0, Component v1.0.2)
 MCA ptl: tcp (MCA v1.0, API v1.0, Component v1.0.2)
 MCA btl: self (MCA v1.0, API v1.0, Component v1.0.2)
 MCA btl: sm (MCA v1.0, API v1.0, Component v1.0.2)
 MCA btl: tcp (MCA v1.0, API v1.0, Component v1.0)
MCA topo: unity (MCA v1.0, API v1.0, Component v1.0.2)
 MCA gpr: null (MCA v1.0, API v1.0, Component v1.0.2)
 MCA gpr: proxy (MCA v1.0, API v1.0, Component v1.0.2)
 MCA gpr: replica (MCA v1.0, API v1.0, Component v1.0.2)
 MCA iof: proxy (MCA v1.0, API v1.0, Component v1.0.2)
 MCA iof: svc (MCA v1.0, API v1.0, Component v1.0.2)
  MCA ns: proxy (MCA v1.0, API v1.0, Component v1.0.2)
  MCA ns: replica (MCA v1.0, API v1.0, Component v1.0.2)
 MCA oob: tcp (MCA v1.0, API v1.0, Component v1.0)
 MCA ras: dash_host (MCA v1.0, API v1.0, Component v1.0.2)
 MCA ras: hostfile (MCA v1.0, API v1.0, Component v1.0.2)
 MCA ras: localhost (MCA v1.0, API v1.0, Compon

[OMPI users] 64-Bit MIPS support patch

2006-05-11 Thread Jonathan Day
Hi,

As I've said before, I've been working on MIPS support for Open MPI, as
the current implementation is Irix-specific in places. Well, it is finally
done, and I present to you fixes for Linux on MIPS, some fixes for atomic
operations bugs on the MIPS platform, and a fix for a GCC bug where it
doesn't handle macros unless the file extension is .S, rather than .s.
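
For anyone not familiar with these primitives: opal_atomic_cmpset_32() in the
patch below is an ll/sc-based compare-and-set that returns non-zero when the
swap succeeded. A typical usage pattern (purely an illustrative sketch, not
part of the patch; the helper name and the include path here are my own
assumptions) is a retry loop like:

#include <stdint.h>
#include "opal/sys/atomic.h"   /* assumed location of the cmpset prototypes */

static inline void atomic_add_32_sketch(volatile int32_t *addr, int32_t delta)
{
    int32_t old;
    do {
        old = *addr;   /* snapshot the current value */
        /* retry if another thread updated *addr between the read and the CAS */
    } while (!opal_atomic_cmpset_32(addr, old, old + delta));
}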

Enjoy!

Jonathan Day


--- ompi-original/opal/include/opal/sys/mips/atomic.h	2006-05-10 17:30:39.0 +
+++ patched/opal/include/opal/sys/mips/atomic.h	2006-05-11 02:12:28.0 +
@@ -47,14 +47,20 @@

 #define OPAL_HAVE_ATOMIC_CMPSET_32 1
 #define OPAL_HAVE_ATOMIC_CMPSET_64 1
+#define OPAL_HAVE_ATOMIC_MATH_32 0
+#define OPAL_HAVE_ATOMIC_ADD_32 1
+#define OPAL_HAVE_ATOMIC_SUB_32 1
+#define OPAL_HAVE_ATOMIC_ADD_64 1
+#define OPAL_HAVE_ATOMIC_SUB_64 1


+#if OMPI_GCC_INLINE_ASSEMBLY
+
 /**
  *
  * Memory Barriers
  *
  */
-#if OMPI_GCC_INLINE_ASSEMBLY

 static inline
 void opal_atomic_mb(void)
@@ -76,14 +82,11 @@
 WMB();
 }

-#endif
-
 /**
  *
  * Atomic math operations
  *
  */
-#if OMPI_GCC_INLINE_ASSEMBLY

 static inline int opal_atomic_cmpset_32(volatile int32_t *addr,
 int32_t oldval, int32_t newval)
@@ -92,19 +95,22 @@
 int32_t tmp;

__asm__ __volatile__ ("\t"
- ".set noreorder\n"
- "1:\n\t"
- "ll %0, %2 \n\t" /* load *addr into ret */
- "bne%0, %3, 2f   \n\t" /* done if oldval != ret */
- "or %5, %4, 0  \n\t" /* ret = newval */
- "sc %5, %2 \n\t" /* store ret in *addr */
- /* note: ret will be 0 if failed, 1 if succeeded */
-			 "bne%5, 1, 1b   \n\t"
- "2: \n\t"
- ".set reorder  \n"
- : "=&r"(ret), "=m"(*addr)
- : "m"(*addr), "r"(oldval), "r"(newval), "r"(tmp)
+ ".set noreorder\n"
+ "1:\n\t"
+ "ll  %0, 0(%2)\n\t"/* load-linked *addr into ret */
+ "bne %0, %3, 2f\n\t"   /* return 0 if oldval != ret */
+ "or  %1, $0, %4\n\t"   /* tmp = newval */
+ "sc  %1, 0(%2)\n\t"/* store-conditional tmp into *addr */
+ /* note: tmp will be 0 if store failed, 1 if succeeded */
+ "beq %1, $0, 1b\n\t"   /* repeat if tmp == 0 */
+ "nop\n\t"
+ "sync\n"
+ "2:\n\t"
+ ".set reorder\n"
+ : "=&r"(ret), "=&r"(tmp)
+ : "r"(addr), "r"(oldval), "r"(newval)
  : "cc", "memory");
+
return (ret == oldval);
 }

@@ -141,19 +147,20 @@
 int64_t tmp;

__asm__ __volatile__ ("\t"
- ".set noreorder\n"
- "1:\n\t"
- "lld%0, %2 \n\t" /* load *addr into ret */
- "bne%0, %3, 2f   \n\t" /* done if oldval != ret */
- "or %5, %4, 0  \n\t" /* tmp = newval */
- "scd%5, %2 \n\t" /* store tmp in *addr */
- /* note: ret will be 0 if failed, 1 if succeeded */
-			 "bne%5, 1, 1b   \n"
- "2: \n\t"
- ".set reorder  \n"
- : "=&r" (ret), "=m" (*addr)
- : "m" (*addr), "r" (oldval), "r" (newval),
-			   "r"(tmp)
+ ".set noreorder\n"
+ "1:\n\t"
+ "lld %0, 0(%2)\n\t"/* load-linked *addr into ret */
+ "bne %0, %3, 2f\n\t"   /* return 0 if oldval != ret */
+ "or  %1, $0, %4\n\t"   /* tmp = newval */
+ "scd %1, 0(%2)\n\t"/* store-conditional tmp into *addr */
+ /* note: tmp will be 0 if store failed, 1 if succeeded */
+ "beq %1, $0, 1b\n\t"   /* repeat if tmp == 0 */
+ "nop\n\t"
+ "sync\n"
+ "2:\n\t"
+ ".set reorder\n"
+   

Re: [OMPI users] 64-Bit MIPS support patch

2006-05-11 Thread Durga Choudhury

Thanks, Jonathan

This patch would be particularly useful for me.

Best regards

Durga



--
Devil wanted omnipresence;
He therefore created communists.