[OMPI users] Hybrid MPI/Pthreads program behaves differently on two different machines with same hardware

2011-10-24 Thread 吕慧伟
Dear List,

I have a hybrid MPI/Pthreads program named "my_hybrid_app". The program is
memory-intensive and takes advantage of multi-threading to improve memory
throughput. I run "my_hybrid_app" on two machines that have the same hardware
configuration but different OS and GCC versions. The problem is: when I run
"my_hybrid_app" with one process, the two machines behave the same: the more
threads, the better the performance. However, when I run "my_hybrid_app" with
two or more processes, the first machine still gains performance with more
threads, while the second machine loses performance with more threads.

Since running "my_hybrid_app" with one process behaves correctly, I suspect
there is a problem with my linking to the MPI library. Would somebody point me
in the right direction? Thanks in advance.

Attached are the command line used, my machine information, and link
information.
p.s. 1: Command line

single process: ./my_hybrid_app 
multiple processes: mpirun -np 2 ./my_hybrid_app 

p.s. 2: Machine Information

The first machine is CentOS 5.3 with GCC 4.1.2:

Target: x86_64-redhat-linux

Configured with: ../configure --prefix=/usr --mandir=/usr/share/man
--infodir=/usr/share/info --enable-shared --enable-threads=posix
--enable-checking=release --with-system-zlib --enable-__cxa_atexit
--disable-libunwind-exceptions --enable-libgcj-multifile
--enable-languages=c,c++,objc,obj-c++,java,fortran,ada --enable-java-awt=gtk
--disable-dssi --enable-plugin
--with-java-home=/usr/lib/jvm/java-1.4.2-gcj-1.4.2.0/jre --with-cpu=generic
--host=x86_64-redhat-linux

Thread model: posix

gcc version 4.1.2 20080704 (Red Hat 4.1.2-44)

The second machine is SUSE Linux Enterprise Server 11 with GCC 4.3.4:

Target: x86_64-suse-linux

Configured with: ../configure --prefix=/usr --infodir=/usr/share/info
--mandir=/usr/share/man --libdir=/usr/lib64 --libexecdir=/usr/lib64
--enable-languages=c,c++,objc,fortran,obj-c++,java,ada
--enable-checking=release --with-gxx-include-dir=/usr/include/c++/4.3
--enable-ssp --disable-libssp
--with-bugurl=http://bugs.opensuse.org/ --with-pkgversion='SUSE Linux'
--disable-libgcj --disable-libmudflap
--with-slibdir=/lib64 --with-system-zlib --enable-__cxa_atexit
--enable-libstdcxx-allocator=new --disable-libstdcxx-pch
--enable-version-specific-runtime-libs --program-suffix=-4.3
--enable-linux-futex --without-system-libunwind --with-cpu=generic
--build=x86_64-suse-linux

Thread model: posix

gcc version 4.3.4 [gcc-4_3-branch revision 152973] (SUSE Linux)


p.s. 3: ldd Information

The first machine:
$ ldd my_hybrid_app
libm.so.6 => /lib64/libm.so.6 (0x00358d40)
libmpi.so.0 => /usr/local/openmpi/lib/libmpi.so.0
(0x2af0d53a7000)
libopen-rte.so.0 => /usr/local/openmpi/lib/libopen-rte.so.0
(0x2af0d564a000)
libopen-pal.so.0 => /usr/local/openmpi/lib/libopen-pal.so.0
(0x2af0d5895000)
libdl.so.2 => /lib64/libdl.so.2 (0x00358d00)
libnsl.so.1 => /lib64/libnsl.so.1 (0x00358f00)
libutil.so.1 => /lib64/libutil.so.1 (0x00359a60)
libgomp.so.1 => /usr/lib64/libgomp.so.1 (0x2af0d5b07000)
libpthread.so.0 => /lib64/libpthread.so.0 (0x00358d80)
libc.so.6 => /lib64/libc.so.6 (0x00358cc0)
/lib64/ld-linux-x86-64.so.2 (0x00358c80)
librt.so.1 => /lib64/librt.so.1 (0x00358dc0)
The second machine:
$ ldd my_hybrid_app
linux-vdso.so.1 =>  (0x7fff3eb5f000)
libmpi.so.0 => /root/opt/openmpi/lib/libmpi.so.0
(0x7f68627a1000)
libm.so.6 => /lib64/libm.so.6 (0x7f686254b000)
libopen-rte.so.0 => /root/opt/openmpi/lib/libopen-rte.so.0
(0x7f68622fc000)
libopen-pal.so.0 => /root/opt/openmpi/lib/libopen-pal.so.0
(0x7f68620a5000)
libdl.so.2 => /lib64/libdl.so.2 (0x7f6861ea1000)
libnsl.so.1 => /lib64/libnsl.so.1 (0x7f6861c89000)
libutil.so.1 => /lib64/libutil.so.1 (0x7f6861a86000)
libgomp.so.1 => /usr/lib64/libgomp.so.1 (0x7f686187d000)
libpthread.so.0 => /lib64/libpthread.so.0 (0x7f686166)
libc.so.6 => /lib64/libc.so.6 (0x7f6861302000)
/lib64/ld-linux-x86-64.so.2 (0x7f6862a58000)
librt.so.1 => /lib64/librt.so.1 (0x7f68610f9000)
I installed openmpi-1.4.2 into a user directory, /root/opt/openmpi, and used
"-L/root/opt/openmpi -Wl,-rpath,/root/opt/openmpi" when linking.
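One thing worth noting: the ldd output resolves the libraries from /root/opt/openmpi/lib, while the quoted flags point -L and -rpath at the prefix itself, so the run-time path likely comes from LD_LIBRARY_PATH rather than the rpath. A sketch of flags that aim both at the lib/ subdirectory (the path is taken from the post; the diagnosis is an assumption):

```shell
MPI_HOME=/root/opt/openmpi
# Point both link-time search and the embedded rpath at lib/, where the
# .so files actually live, so compile-time and run-time resolution agree:
LDFLAGS="-L${MPI_HOME}/lib -Wl,-rpath,${MPI_HOME}/lib"
echo "$LDFLAGS"
# Typical link line (sketch):
#   mpicc -o my_hybrid_app my_hybrid_app.c -lpthread $LDFLAGS
```

Comparing `mpicc --showme` output on the two machines would also confirm whether both builds link the same Open MPI.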

-- 
Huiwei Lv
PhD. student at Institute of Computing Technology,
Beijing, China
http://asg.ict.ac.cn/lhw


Re: [OMPI users] Hybrid MPI/Pthreads program behaves differently on two different machines with same hardware

2011-10-24 Thread Ralph Castain
Does the difference persist if you run the single process using mpirun? In 
other words, does "mpirun -np 1 ./my_hybrid_app..." behave the same as "mpirun 
-np 2 ./..."?

There is a slight difference in the way procs start when run as singletons. It
shouldn't make a difference here, but it's worth testing.

On Oct 24, 2011, at 12:37 AM, 吕慧伟 wrote:

> Dear List,
> 
> I have a hybrid MPI/Pthreads program named "my_hybrid_app", this program is 
> memory-intensive and take advantage of multi-threading to improve memory 
> throughput. [...]

[OMPI users] Checkpoint from inside MPI program with OpenMPI 1.4.2 ?

2011-10-24 Thread Nguyen Toan
Dear all,

I want to automatically checkpoint an MPI program with Open MPI (I'm
currently using version 1.4.2 with BLCR 0.8.2), rather than manually typing
the ompi-checkpoint command from another terminal. So I would like to know
whether there is a way to call a checkpoint function from inside an MPI
program with Open MPI, and if so, how to do that.
Any ideas are much appreciated.

Regards,
Nguyen Toan


[OMPI users] Visual debugging on the cluster

2011-10-24 Thread devendra rai
Hello Community,

I have been struggling with visual debugging on cluster machines. So far, I
have tried to work around the problem, or to avoid it entirely, but no more.

I have three machines on the cluster: a.s1.s2, b.s1.s2 and c.s1.s2. I do not 
have admin privileges on any of these machines.

Now, I want to run a visual debugger on all of these machines, and have the 
windows come up. 


So far, from the FAQ (http://www.open-mpi.org/faq/?category=running):

13. Can I run GUI applications with Open MPI?
Yes, but it will depend on your local setup and may require
additional setup.
In short: you will need to have X forwarding enabled from the remote
processes to the display where you want output to appear.  In a secure
environment, you can simply allow all X requests to be shown on the
target display and set the DISPLAY environment variable in all MPI
processes' environments to the target display, perhaps something like
this:
shell$ hostname
my_desktop.secure-cluster.example.com
shell$ xhost +
shell$ mpirun -np 4 -x DISPLAY=my_desktop.secure-cluster.example.com a.out
However, this technique is not generally suitable for unsecure
environments (because it allows anyone to read and write to your
display).  A slightly more secure way is to only allow X connections
from the nodes where your application will be running:
shell$ hostname
my_desktop.secure-cluster.example.com
shell$ xhost +compute1 +compute2 +compute3 +compute4
compute1 being added to access control list
compute2 being added to access control list
compute3 being added to access control list
compute4 being added to access control list
shell$ mpirun -np 4 -x DISPLAY=my_desktop.secure-cluster.example.com a.out
(assuming that the four nodes you are running on are compute1 through
compute4).
Other methods are available, but they involve sophisticated X
forwarding through mpirun and are generally more complicated than
desirable.
This still gives me the "Error: Can't open display:" problem. 

My mpirun shell script contains:

mpirun-1.4.3 -hostfile hostfile -np 3 -v -nooversubscribe --rankfile 
rankfile.txt --report-bindings  -timestamp-output ./testdisplay-window.sh 


where rankfile and hostfile contain a.s1.s2, b.s1.s2 and c.s1.s2, and are 
proper.

The file ./testdisplay-window.sh:

#!/bin/bash
echo "Running xeyes on `hostname`"
export DISPLAY=a.s1.s2:11.0  # without export, xeyes never sees DISPLAY
xeyes
exit 0

I see that my xauth list output already contains entries like:

a.s1.s2/unix:12  MIT-MAGIC-COOKIE-1  aa16a9573f42224d760c7bb618b48a6f
a.s1.s2/unix:10  MIT-MAGIC-COOKIE-1  0fb6fe3c2e35676136c8642412fb5809
a.s1.s2/unix:11  MIT-MAGIC-COOKIE-1  a3a65970b5f545bc750e3520a4e3b872


I seem to have run out of ideas now.

However, this works perfectly on any of the machines a.s1.s2, b.s1.s2, or c.s1.s2:

(for example, running from a.s1.s2):

ssh b.s1.s2 xeyes
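Since the plain ssh test works, the X forwarding itself is fine; what the mpirun-launched shells are likely missing is a DISPLAY and xauth cookie for a forwarded display on each node. A dry-run sketch (the hostnames and the :11 display number are taken from the post; the approach itself is an assumption, and the echo prints the commands instead of running them):

```shell
# Propagate the local X cookie for display :11 to the other nodes, so a
# process started there by mpirun can authenticate to the forwarded display.
# Remove the echo to actually run the commands.
for host in b.s1.s2 c.s1.s2; do
  echo "xauth extract - \$(hostname)/unix:11 | ssh $host xauth merge -"
done
```

Each remote process would then also need DISPLAY pointed at the forwarded display, e.g. via mpirun's -x DISPLAY=... as in the FAQ excerpt above.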

Can someone help?


Best

Devendra Rai

From: Jeff Squyres 
To: devendra rai ; Open MPI Users 
Sent: Friday, 21 October 2011, 13:14
Subject: Re: [OMPI users] orte_grpcomm_modex failed

This usually means that you have an Open MPI version mismatch between some of 
your nodes.  Meaning: on some nodes, you're finding version X.Y.Z of Open MPI 
by default, but on other nodes, you're finding version A.B.C.


On Oct 21, 2011, at 7:00 AM, devendra rai wrote:

> Hello Community,
> 
> I have been struggling with this error for quite some time:
> 
> It looks like MPI_INIT failed for some reason; your parallel process is
> likely to abort.  There are many reasons that a parallel process can
> fail during MPI_INIT; some of which are due to configuration or environment
> problems.  This failure appears to be an internal failure; here's some
> additional information (which may only be relevant to an Open MPI
> developer):
> 
>   orte_grpcomm_modex failed
>   --> Returned "Data unpack would read past end of buffer" (-26) instead of
>   "Success" (0)
> --------------------------------------------------------------------------
> --------------------------------------------------------------------------
> mpirun has exited due to process rank 1 with PID 18945 on
> node tik35x.ethz.ch exiting without calling "finalize". This may
> have caused other processes in the application to be
> terminated by signals sent by mpirun (as reported here).
> 
> I am running this on a cluster and this has started happening only after a 
> recent rebuild of openmpi-1.4.3. Interestingly, I have the same version of 
> openmpi on my PC, and the same application works fine.
> 
> I have looked into this error on the web, but there is very little 
> discussion of the causes or how to correct it. I asked the admin to attempt 
> a re-install of openmpi, but I am not sure whether this will solve the 
> problem.
> 
> Can someone please help?
> 
> Thanks a lot.
> 
> Best,
> 
> Devendra Rai
> ___
> users mailing list
> us...@open-mpi.org
> http://www.open-mpi.org/mailman/listinfo.cgi/users


-- 
Jeff Squyres
jsquy...@cisc

Re: [OMPI users] Visual debugging on the cluster

2011-10-24 Thread Meredith Creekmore
Not a direct answer to your question, but have you tried using Eclipse with the 
Parallel Tools Platform installed?

http://eclipse.org/ptp/

From: users-boun...@open-mpi.org [mailto:users-boun...@open-mpi.org] On Behalf 
Of devendra rai
Sent: Monday, October 24, 2011 2:50 PM
To: us...@open-mpi.org
Subject: [OMPI users] Visual debugging on the cluster

Hello Community,

I have been struggling with visual debugging on cluster machines. [...]

Re: [OMPI users] Hybrid MPI/Pthreads program behaves differently on two different machines with same hardware

2011-10-24 Thread 吕慧伟
No. There is a difference between "mpirun -np 1 ./my_hybrid_app..." and
"mpirun -np 2 ./...".

Running "mpirun -np 1 ./my_hybrid_app..." increases performance with more
threads, but running "mpirun -np 2 ./..." decreases performance.
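One thing worth ruling out (a guess, not something established in the thread) is that the two machines bind the ranks differently: if both ranks land on the same socket or core on the second machine, their threads contend for the same memory controllers. A dry-run sketch with mpirun options from the 1.4 series (echo prints the commands; remove it to run them where Open MPI is installed):

```shell
# Show where each rank is bound, then try explicitly spreading ranks
# across sockets so each rank's threads get their own memory controller:
CMD1="mpirun -np 2 --report-bindings ./my_hybrid_app"
CMD2="mpirun -np 2 --bysocket --bind-to-socket ./my_hybrid_app"
echo "$CMD1"
echo "$CMD2"
```

Comparing the --report-bindings output on the two machines would show whether the degradation correlates with placement.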

--
Huiwei Lv

On Tue, Oct 25, 2011 at 12:00 AM,  wrote:

>
> Date: Mon, 24 Oct 2011 07:14:21 -0600
> From: Ralph Castain 
> Subject: Re: [OMPI users] Hybrid MPI/Pthreads program behaves
>differently on  two different machines with same hardware
> To: Open MPI Users 
> Message-ID: <42c53d0b-1586-4001-b9d2-d77af0033...@open-mpi.org>
> Content-Type: text/plain; charset="utf-8"
>
> Does the difference persist if you run the single process using mpirun? In
> other words, does "mpirun -np 1 ./my_hybrid_app..." behave the same as
> "mpirun -np 2 ./..."?
>
> There is a slight difference in the way procs start when run as singletons.
> It shouldn't make a difference here, but worth testing.
>
> On Oct 24, 2011, at 12:37 AM, 吕慧伟 wrote:
>
> > Dear List,
> >
> > I have a hybrid MPI/Pthreads program named "my_hybrid_app" [...]