I get this on Red Hat 9 ONLY if I leave out the -hostfile <file> option to
mpirun; otherwise it works fine.  (RH9 is an old Red Hat release.)
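
For reference, the working invocation described above would look roughly
like this (the hostfile name and slot count are only an illustration; the
hostfile is the usual one-host-per-line format, with an optional slots count):

$ cat myhosts
localhost slots=2
$ mpirun -np 2 -hostfile myhosts cpi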

Regards,
Mostyn

On Wed, 16 Nov 2005, Jeff Squyres wrote:

Clement --

Sorry for the delay in replying.  We're running around crazy here at
SC, which pretty much keeps us away from e-mail except early in the
morning and late at night.

We fixed a bunch of things in the sm btl as of r8136 (someone reported
similar issues as you, and we took the exchange off-list to fix).  The
problems could definitely affect correctness and cause segv's similar
to what you were seeing (see
http://www.open-mpi.org/community/lists/users/2005/11/0326.php for a
little more info).

I notice that you're running 8113 here -- could you try the latest
nightly snapshot or rc and see if the same problems occur?
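
(A quick way to double-check which build is actually being picked up after
installing a new snapshot -- assuming the new install's bin directory is
first in your PATH -- is something like:

$ ompi_info | grep revision

which should then report r8136 or later.)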

Thanks for your patience!


On Nov 14, 2005, at 4:51 AM, Clement Chu wrote:

Hi Jeff,

   I tried the rc6 and the trunk nightly r8150
(<http://www.open-mpi.org/nightly/trunk/openmpi-1.1a1r8150.tar.gz>) and got
the same problem.  I have copied the output from the terminal below.

[clement@localhost testmpi]$ ompi_info
               Open MPI: 1.1a1r8113
  Open MPI SVN revision: r8113
               Open RTE: 1.1a1r8113
  Open RTE SVN revision: r8113
                   OPAL: 1.1a1r8113
      OPAL SVN revision: r8113
                 Prefix: /home/clement/openmpi/
Configured architecture: i686-pc-linux-gnu
          Configured by: clement
          Configured on: Mon Nov 14 10:12:12 EST 2005
         Configure host: localhost
               Built by: clement
               Built on: Mon Nov 14 10:28:21 EST 2005
             Built host: localhost
             C bindings: yes
           C++ bindings: yes
     Fortran77 bindings: yes (all)
     Fortran90 bindings: yes
             C compiler: gcc
    C compiler absolute: /usr/bin/gcc
           C++ compiler: g++
  C++ compiler absolute: /usr/bin/g++
     Fortran77 compiler: gfortran
 Fortran77 compiler abs: /usr/bin/gfortran
     Fortran90 compiler: gfortran
 Fortran90 compiler abs: /usr/bin/gfortran
            C profiling: yes
          C++ profiling: yes
    Fortran77 profiling: yes
    Fortran90 profiling: yes
         C++ exceptions: no
         Thread support: posix (mpi: no, progress: no)
 Internal debug support: no
    MPI parameter check: runtime
Memory profiling support: no
Memory debugging support: no
        libltdl support: 1
              MCA memory: malloc_hooks (MCA v1.0, API v1.0, Component v1.1)
          MCA paffinity: linux (MCA v1.0, API v1.0, Component v1.1)
          MCA maffinity: first_use (MCA v1.0, API v1.0, Component v1.1)
              MCA timer: linux (MCA v1.0, API v1.0, Component v1.1)
          MCA allocator: basic (MCA v1.0, API v1.0, Component v1.0)
          MCA allocator: bucket (MCA v1.0, API v1.0, Component v1.0)
               MCA coll: basic (MCA v1.0, API v1.0, Component v1.1)
               MCA coll: hierarch (MCA v1.0, API v1.0, Component v1.1)
               MCA coll: self (MCA v1.0, API v1.0, Component v1.1)
               MCA coll: sm (MCA v1.0, API v1.0, Component v1.1)
                 MCA io: romio (MCA v1.0, API v1.0, Component v1.1)
              MCA mpool: sm (MCA v1.0, API v1.0, Component v1.1)
                MCA pml: ob1 (MCA v1.0, API v1.0, Component v1.1)
                MCA pml: teg (MCA v1.0, API v1.0, Component v1.1)
                MCA pml: uniq (MCA v1.0, API v1.0, Component v1.1)
                MCA ptl: self (MCA v1.0, API v1.0, Component v1.1)
                MCA ptl: sm (MCA v1.0, API v1.0, Component v1.1)
                MCA ptl: tcp (MCA v1.0, API v1.0, Component v1.1)
                MCA btl: self (MCA v1.0, API v1.0, Component v1.1)
                MCA btl: sm (MCA v1.0, API v1.0, Component v1.1)
                MCA btl: tcp (MCA v1.0, API v1.0, Component v1.0)
               MCA topo: unity (MCA v1.0, API v1.0, Component v1.1)
                MCA gpr: null (MCA v1.0, API v1.0, Component v1.1)
                MCA gpr: proxy (MCA v1.0, API v1.0, Component v1.1)
                MCA gpr: replica (MCA v1.0, API v1.0, Component v1.1)
                MCA iof: proxy (MCA v1.0, API v1.0, Component v1.1)
                MCA iof: svc (MCA v1.0, API v1.0, Component v1.1)
                 MCA ns: proxy (MCA v1.0, API v1.0, Component v1.1)
                 MCA ns: replica (MCA v1.0, API v1.0, Component v1.1)
                MCA oob: tcp (MCA v1.0, API v1.0, Component v1.0)
                MCA ras: dash_host (MCA v1.0, API v1.0, Component v1.1)
                MCA ras: hostfile (MCA v1.0, API v1.0, Component v1.1)
                MCA ras: localhost (MCA v1.0, API v1.0, Component v1.1)
                MCA ras: slurm (MCA v1.0, API v1.0, Component v1.1)
                MCA rds: hostfile (MCA v1.0, API v1.0, Component v1.1)
                MCA rds: resfile (MCA v1.0, API v1.0, Component v1.1)
               MCA rmaps: round_robin (MCA v1.0, API v1.0, Component v1.1)
               MCA rmgr: proxy (MCA v1.0, API v1.0, Component v1.1)
               MCA rmgr: urm (MCA v1.0, API v1.0, Component v1.1)
                MCA rml: oob (MCA v1.0, API v1.0, Component v1.1)
                MCA pls: fork (MCA v1.0, API v1.0, Component v1.1)
                MCA pls: proxy (MCA v1.0, API v1.0, Component v1.1)
                MCA pls: rsh (MCA v1.0, API v1.0, Component v1.1)
                MCA pls: slurm (MCA v1.0, API v1.0, Component v1.1)
                MCA sds: env (MCA v1.0, API v1.0, Component v1.1)
                MCA sds: pipe (MCA v1.0, API v1.0, Component v1.1)
                MCA sds: seed (MCA v1.0, API v1.0, Component v1.1)
                MCA sds: singleton (MCA v1.0, API v1.0, Component v1.1)
                MCA sds: slurm (MCA v1.0, API v1.0, Component v1.1)
[clement@localhost testmpi]$

It works if there is only 1 process.

[clement@localhost testmpi]$ mpirun -np 1 cpi
Process 0 on localhost
pi is approximately 3.1416009869231254, Error is 0.0000083333333323
wall clock time = 0.000469
[clement@localhost testmpi]$


When I tried two processes, I got the following problem.

[clement@localhost testmpi]$ mpirun -np 2 cpi
mpirun noticed that job rank 1 with PID 3299 on node "localhost"
exited on signal 11.
1 process killed (possibly by Open MPI)


I ran gdb on the core file to get a backtrace.

[clement@localhost testmpi]$ ls
core.3299  cpi  cpi.c

[clement@localhost testmpi]$ file core.3299
core.3299: ELF 32-bit LSB core file Intel 80386, version 1 (SYSV),
SVR4-style, SVR4-style, from 'cpi'

[clement@localhost testmpi]$ gdb cpi core.3299
GNU gdb Red Hat Linux (6.3.0.0-1.21rh)
Copyright 2004 Free Software Foundation, Inc.
GDB is free software, covered by the GNU General Public License, and you are
welcome to change it and/or distribute copies of it under certain conditions.
Type "show copying" to see the conditions.
There is absolutely no warranty for GDB.  Type "show warranty" for details.
This GDB was configured as "i386-redhat-linux-gnu"...Using host
libthread_db library "/lib/libthread_db.so.1".

Reading symbols from shared object read from target memory...done.
Loaded system supplied DSO at 0xf3f000
Core was generated by `cpi'.
Program terminated with signal 11, Segmentation fault.

warning: svr4_current_sos: Can't read pathname for load map:
Input/output error

Reading symbols from /home/clement/openmpi/lib/libmpi.so.0...done.
Loaded symbols for /home/clement/openmpi/lib/libmpi.so.0
Reading symbols from /home/clement/openmpi/lib/liborte.so.0...done.
Loaded symbols for /home/clement/openmpi/lib/liborte.so.0
Reading symbols from /home/clement/openmpi/lib/libopal.so.0...done.
Loaded symbols for /home/clement/openmpi/lib/libopal.so.0
Reading symbols from /lib/libutil.so.1...done.
Loaded symbols for /lib/libutil.so.1
Reading symbols from /lib/libnsl.so.1...done.
Loaded symbols for /lib/libnsl.so.1
Reading symbols from /lib/libdl.so.2...done.
Loaded symbols for /lib/libdl.so.2
Reading symbols from /lib/libm.so.6...done.
Loaded symbols for /lib/libm.so.6
Reading symbols from /lib/libpthread.so.0...done.
Loaded symbols for /lib/libpthread.so.0
Reading symbols from /lib/libc.so.6...done.
Loaded symbols for /lib/libc.so.6
Reading symbols from /lib/ld-linux.so.2...done.
Loaded symbols for /lib/ld-linux.so.2
Reading symbols from
/home/clement/openmpi/lib/openmpi/mca_paffinity_linux.so...done.
Loaded symbols for
/home/clement/openmpi//lib/openmpi/mca_paffinity_linux.so
Reading symbols from /lib/libnss_files.so.2...done.
Loaded symbols for /lib/libnss_files.so.2
Reading symbols from
/home/clement/openmpi/lib/openmpi/mca_ns_proxy.so...done.
Loaded symbols for /home/clement/openmpi//lib/openmpi/mca_ns_proxy.so
Reading symbols from
/home/clement/openmpi/lib/openmpi/mca_ns_replica.so...done.
Loaded symbols for /home/clement/openmpi//lib/openmpi/mca_ns_replica.so
Reading symbols from
/home/clement/openmpi/lib/openmpi/mca_rml_oob.so...done.
Loaded symbols for /home/clement/openmpi//lib/openmpi/mca_rml_oob.so
Reading symbols from
/home/clement/openmpi/lib/openmpi/mca_oob_tcp.so...done.
Loaded symbols for /home/clement/openmpi//lib/openmpi/mca_oob_tcp.so
Reading symbols from
/home/clement/openmpi/lib/openmpi/mca_gpr_null.so...done.
Loaded symbols for /home/clement/openmpi//lib/openmpi/mca_gpr_null.so
Reading symbols from
/home/clement/openmpi/lib/openmpi/mca_gpr_proxy.so...done.
Loaded symbols for /home/clement/openmpi//lib/openmpi/mca_gpr_proxy.so
Reading symbols from
/home/clement/openmpi/lib/openmpi/mca_gpr_replica.so...done.
Loaded symbols for
/home/clement/openmpi//lib/openmpi/mca_gpr_replica.so
Reading symbols from
/home/clement/openmpi/lib/openmpi/mca_rmgr_proxy.so...done.
Loaded symbols for /home/clement/openmpi//lib/openmpi/mca_rmgr_proxy.so
Reading symbols from
/home/clement/openmpi/lib/openmpi/mca_rmgr_urm.so...done.
Loaded symbols for /home/clement/openmpi//lib/openmpi/mca_rmgr_urm.so
Reading symbols from
/home/clement/openmpi/lib/openmpi/mca_rds_hostfile.so...done.
Loaded symbols for
/home/clement/openmpi//lib/openmpi/mca_rds_hostfile.so
Reading symbols from
/home/clement/openmpi/lib/openmpi/mca_rds_resfile.so...done.
Loaded symbols for
/home/clement/openmpi//lib/openmpi/mca_rds_resfile.so
Reading symbols from
/home/clement/openmpi/lib/openmpi/mca_ras_dash_host.so...done.
Loaded symbols for
/home/clement/openmpi//lib/openmpi/mca_ras_dash_host.so
Reading symbols from
/home/clement/openmpi/lib/openmpi/mca_ras_hostfile.so...done.
Loaded symbols for
/home/clement/openmpi//lib/openmpi/mca_ras_hostfile.so
Reading symbols from
/home/clement/openmpi/lib/openmpi/mca_ras_localhost.so...done.
Loaded symbols for
/home/clement/openmpi//lib/openmpi/mca_ras_localhost.so
Reading symbols from
/home/clement/openmpi/lib/openmpi/mca_ras_slurm.so...done.
Loaded symbols for /home/clement/openmpi//lib/openmpi/mca_ras_slurm.so
Reading symbols from
/home/clement/openmpi/lib/openmpi/mca_rmaps_round_robin.so...done.
Loaded symbols for
/home/clement/openmpi//lib/openmpi/mca_rmaps_round_robin.so
Reading symbols from
/home/clement/openmpi/lib/openmpi/mca_pls_fork.so...done.
Loaded symbols for /home/clement/openmpi//lib/openmpi/mca_pls_fork.so
Reading symbols from
/home/clement/openmpi/lib/openmpi/mca_pls_proxy.so...done.
Loaded symbols for /home/clement/openmpi//lib/openmpi/mca_pls_proxy.so
Reading symbols from
/home/clement/openmpi/lib/openmpi/mca_pls_rsh.so...done.
Loaded symbols for /home/clement/openmpi//lib/openmpi/mca_pls_rsh.so
Reading symbols from
/home/clement/openmpi/lib/openmpi/mca_pls_slurm.so...done.
Loaded symbols for /home/clement/openmpi//lib/openmpi/mca_pls_slurm.so
Reading symbols from
/home/clement/openmpi/lib/openmpi/mca_iof_proxy.so...done.
Loaded symbols for /home/clement/openmpi//lib/openmpi/mca_iof_proxy.so
Reading symbols from
/home/clement/openmpi/lib/openmpi/mca_allocator_basic.so...done.
Loaded symbols for
/home/clement/openmpi//lib/openmpi/mca_allocator_basic.so
Reading symbols from
/home/clement/openmpi/lib/openmpi/mca_allocator_bucket.so...done.
Loaded symbols for
/home/clement/openmpi//lib/openmpi/mca_allocator_bucket.so
Reading symbols from
/home/clement/openmpi/lib/openmpi/mca_rcache_rb.so...done.
Loaded symbols for /home/clement/openmpi//lib/openmpi/mca_rcache_rb.so
Reading symbols from
/home/clement/openmpi/lib/openmpi/mca_mpool_sm.so...done.
Loaded symbols for /home/clement/openmpi//lib/openmpi/mca_mpool_sm.so
Reading symbols from
/home/clement/openmpi/lib/libmca_common_sm.so.0...done.
Loaded symbols for /home/clement/openmpi//lib/libmca_common_sm.so.0
Reading symbols from
/home/clement/openmpi/lib/openmpi/mca_pml_ob1.so...done.
Loaded symbols for /home/clement/openmpi//lib/openmpi/mca_pml_ob1.so
Reading symbols from
/home/clement/openmpi/lib/openmpi/mca_bml_r2.so...done.
Loaded symbols for /home/clement/openmpi//lib/openmpi/mca_bml_r2.so
Reading symbols from
/home/clement/openmpi/lib/openmpi/mca_btl_self.so...done.
Loaded symbols for /home/clement/openmpi//lib/openmpi/mca_btl_self.so
Reading symbols from
/home/clement/openmpi/lib/openmpi/mca_btl_sm.so...done.
Loaded symbols for /home/clement/openmpi//lib/openmpi/mca_btl_sm.so
Reading symbols from
/home/clement/openmpi/lib/openmpi/mca_btl_tcp.so...done.
Loaded symbols for /home/clement/openmpi//lib/openmpi/mca_btl_tcp.so
Reading symbols from
/home/clement/openmpi/lib/openmpi/mca_ptl_self.so...done.
Loaded symbols for /home/clement/openmpi//lib/openmpi/mca_ptl_self.so
Reading symbols from
/home/clement/openmpi/lib/openmpi/mca_ptl_sm.so...done.
Loaded symbols for /home/clement/openmpi//lib/openmpi/mca_ptl_sm.so
Reading symbols from
/home/clement/openmpi/lib/openmpi/mca_ptl_tcp.so...done.
Loaded symbols for /home/clement/openmpi//lib/openmpi/mca_ptl_tcp.so
Reading symbols from
/home/clement/openmpi/lib/openmpi/mca_coll_basic.so...done.
Loaded symbols for /home/clement/openmpi//lib/openmpi/mca_coll_basic.so
Reading symbols from
/home/clement/openmpi/lib/openmpi/mca_coll_hierarch.so...done.
Loaded symbols for
/home/clement/openmpi//lib/openmpi/mca_coll_hierarch.so
Reading symbols from
/home/clement/openmpi/lib/openmpi/mca_coll_self.so...done.
Loaded symbols for /home/clement/openmpi//lib/openmpi/mca_coll_self.so
Reading symbols from
/home/clement/openmpi/lib/openmpi/mca_coll_sm.so...done.
Loaded symbols for /home/clement/openmpi//lib/openmpi/mca_coll_sm.so
#0  0x00324a60 in mca_btl_sm_add_procs_same_base_addr (btl=0x328200,
nprocs=2,
   procs=0x95443a8, peers=0x95443d8, reachability=0xbf993994) at
btl_sm.c:412
412             mca_btl_sm_component.sm_ctl_header->segment_header.
(gdb) where
#0  0x00324a60 in mca_btl_sm_add_procs_same_base_addr (btl=0x328200,
nprocs=2,
   procs=0x95443a8, peers=0x95443d8, reachability=0xbf993994) at
btl_sm.c:412
#1  0x00365fad in mca_bml_r2_add_procs (nprocs=2, procs=0x95443a8,
   bml_endpoints=0x9544388, reachable=0xbf993994) at bml_r2.c:220
#2  0x007ba346 in mca_pml_ob1_add_procs (procs=0x9544378, nprocs=2)
   at pml_ob1.c:131
#3  0x00d3df0b in ompi_mpi_init (argc=1, argv=0xbf993c74, requested=0,
   provided=0xbf993a44) at runtime/ompi_mpi_init.c:396
#4  0x00d59ab8 in PMPI_Init (argc=0xbf993bf0, argv=0xbf993bf4) at
pinit.c:71
#5  0x08048904 in main (argc=1, argv=0xbf993c74) at cpi.c:20
(gdb)
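
Since the backtrace above points into the shared-memory BTL
(mca_btl_sm_add_procs_same_base_addr in btl_sm.c), one quick sanity check --
assuming the usual MCA component-selection syntax -- is to rerun the job with
the sm BTL left out, for example:

$ mpirun -np 2 --mca btl self,tcp cpi

If that runs cleanly, it is further evidence that the problem lies in the sm
transport that was being reworked around r8136.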


I have attached the MPI program.  I hope you can help me.  Many thanks.


Clement



Jeff Squyres wrote:

One minor thing that I notice in your ompi_info output -- your build
and run machines are different (kfc vs. clement).

Are these both FC4 machines, or are they different OS's/distros?


On Nov 10, 2005, at 10:01 AM, Clement Chu wrote:


[clement@kfc TestMPI]$ mpirun -d -np 2 test
[kfc:29199] procdir: (null)
[kfc:29199] jobdir: (null)
[kfc:29199] unidir:
/tmp/openmpi-sessions-clement@kfc_0/default-universe
[kfc:29199] top: openmpi-sessions-clement@kfc_0
[kfc:29199] tmp: /tmp
[kfc:29199] [0,0,0] setting up session dir with
[kfc:29199]     tmpdir /tmp
[kfc:29199]     universe default-universe-29199
[kfc:29199]     user clement
[kfc:29199]     host kfc
[kfc:29199]     jobid 0
[kfc:29199]     procid 0
[kfc:29199] procdir:
/tmp/openmpi-sessions-clement@kfc_0/default-universe-29199/0/0
[kfc:29199] jobdir:
/tmp/openmpi-sessions-clement@kfc_0/default-universe-29199/0
[kfc:29199] unidir:
/tmp/openmpi-sessions-clement@kfc_0/default-universe-29199
[kfc:29199] top: openmpi-sessions-clement@kfc_0
[kfc:29199] tmp: /tmp
[kfc:29199] [0,0,0] contact_file
/tmp/openmpi-sessions-clement@kfc_0/default-universe-29199/universe-
setup.txt
[kfc:29199] [0,0,0] wrote setup file
[kfc:29199] pls:rsh: local csh: 0, local bash: 1
[kfc:29199] pls:rsh: assuming same remote shell as local shell
[kfc:29199] pls:rsh: remote csh: 0, remote bash: 1
[kfc:29199] pls:rsh: final template argv:
[kfc:29199] pls:rsh:     ssh <template> orted --debug --bootproxy 1
--name <template> --num_procs 2 --vpid_start 0 --nodename <template>
--universe clement@kfc:default-universe-29199 --nsreplica
"0.0.0;tcp://192.168.11.101:32784" --gprreplica
"0.0.0;tcp://192.168.11.101:32784" --mpi-call-yield 0
[kfc:29199] pls:rsh: launching on node localhost
[kfc:29199] pls:rsh: oversubscribed -- setting mpi_yield_when_idle
to 1
(1 2)
[kfc:29199] sess_dir_finalize: proc session dir not empty - leaving
[kfc:29199] spawn: in job_state_callback(jobid = 1, state = 0xa)
mpirun noticed that job rank 1 with PID 0 on node "localhost" exited on
signal 11.
[kfc:29199] sess_dir_finalize: proc session dir not empty - leaving
[kfc:29199] spawn: in job_state_callback(jobid = 1, state = 0x9)
[kfc:29199] ERROR: A daemon on node localhost failed to start as
expected.
[kfc:29199] ERROR: There may be more information available from
[kfc:29199] ERROR: the remote shell (see above).
[kfc:29199] The daemon received a signal 11.
1 additional process aborted (not shown)
[kfc:29199] sess_dir_finalize: found proc session dir empty -
deleting
[kfc:29199] sess_dir_finalize: found job session dir empty - deleting
[kfc:29199] sess_dir_finalize: found univ session dir empty -
deleting
[kfc:29199] sess_dir_finalize: top session dir not empty - leaving


ompi_info output:

[clement@kfc TestMPI]$ ompi_info
               Open MPI: 1.0rc5r8053
  Open MPI SVN revision: r8053
               Open RTE: 1.0rc5r8053
  Open RTE SVN revision: r8053
                   OPAL: 1.0rc5r8053
      OPAL SVN revision: r8053
                 Prefix: /home/clement/openmpi
Configured architecture: i686-pc-linux-gnu
          Configured by: clement
          Configured on: Fri Nov 11 00:37:23 EST 2005
         Configure host: kfc
               Built by: clement
               Built on: Fri Nov 11 00:59:26 EST 2005
             Built host: kfc
             C bindings: yes
           C++ bindings: yes
     Fortran77 bindings: yes (all)
     Fortran90 bindings: yes
             C compiler: gcc
    C compiler absolute: /usr/bin/gcc
           C++ compiler: g++
  C++ compiler absolute: /usr/bin/g++
     Fortran77 compiler: gfortran
 Fortran77 compiler abs: /usr/bin/gfortran
     Fortran90 compiler: gfortran
 Fortran90 compiler abs: /usr/bin/gfortran
            C profiling: yes
          C++ profiling: yes
    Fortran77 profiling: yes
    Fortran90 profiling: yes
         C++ exceptions: no
         Thread support: posix (mpi: no, progress: no)
 Internal debug support: no
    MPI parameter check: runtime
Memory profiling support: no
Memory debugging support: no
        libltdl support: 1
              MCA memory: malloc_hooks (MCA v1.0, API v1.0, Component v1.0)
           MCA paffinity: linux (MCA v1.0, API v1.0, Component v1.0)
           MCA maffinity: first_use (MCA v1.0, API v1.0, Component v1.0)
              MCA timer: linux (MCA v1.0, API v1.0, Component v1.0)
          MCA allocator: basic (MCA v1.0, API v1.0, Component v1.0)
          MCA allocator: bucket (MCA v1.0, API v1.0, Component v1.0)
               MCA coll: basic (MCA v1.0, API v1.0, Component v1.0)
               MCA coll: self (MCA v1.0, API v1.0, Component v1.0)
               MCA coll: sm (MCA v1.0, API v1.0, Component v1.0)
                 MCA io: romio (MCA v1.0, API v1.0, Component v1.0)
              MCA mpool: sm (MCA v1.0, API v1.0, Component v1.0)
                MCA pml: ob1 (MCA v1.0, API v1.0, Component v1.0)
                MCA pml: teg (MCA v1.0, API v1.0, Component v1.0)
                MCA pml: uniq (MCA v1.0, API v1.0, Component v1.0)
                MCA ptl: self (MCA v1.0, API v1.0, Component v1.0)
                MCA ptl: sm (MCA v1.0, API v1.0, Component v1.0)
                MCA ptl: tcp (MCA v1.0, API v1.0, Component v1.0)
                MCA btl: self (MCA v1.0, API v1.0, Component v1.0)
                MCA btl: sm (MCA v1.0, API v1.0, Component v1.0)
                MCA btl: tcp (MCA v1.0, API v1.0, Component v1.0)
               MCA topo: unity (MCA v1.0, API v1.0, Component v1.0)
                MCA gpr: null (MCA v1.0, API v1.0, Component v1.0)
                MCA gpr: proxy (MCA v1.0, API v1.0, Component v1.0)
                MCA gpr: replica (MCA v1.0, API v1.0, Component v1.0)
                MCA iof: proxy (MCA v1.0, API v1.0, Component v1.0)
                MCA iof: svc (MCA v1.0, API v1.0, Component v1.0)
                 MCA ns: proxy (MCA v1.0, API v1.0, Component v1.0)
                 MCA ns: replica (MCA v1.0, API v1.0, Component v1.0)
                MCA oob: tcp (MCA v1.0, API v1.0, Component v1.0)
                 MCA ras: dash_host (MCA v1.0, API v1.0, Component v1.0)
                 MCA ras: hostfile (MCA v1.0, API v1.0, Component v1.0)
                 MCA ras: localhost (MCA v1.0, API v1.0, Component v1.0)
                 MCA ras: slurm (MCA v1.0, API v1.0, Component v1.0)
                 MCA rds: hostfile (MCA v1.0, API v1.0, Component v1.0)
                 MCA rds: resfile (MCA v1.0, API v1.0, Component v1.0)
               MCA rmaps: round_robin (MCA v1.0, API v1.0, Component v1.0)
               MCA rmgr: proxy (MCA v1.0, API v1.0, Component v1.0)
               MCA rmgr: urm (MCA v1.0, API v1.0, Component v1.0)
                MCA rml: oob (MCA v1.0, API v1.0, Component v1.0)
                MCA pls: fork (MCA v1.0, API v1.0, Component v1.0)
                MCA pls: proxy (MCA v1.0, API v1.0, Component v1.0)
                MCA pls: rsh (MCA v1.0, API v1.0, Component v1.0)
                MCA pls: slurm (MCA v1.0, API v1.0, Component v1.0)
                MCA sds: env (MCA v1.0, API v1.0, Component v1.0)
                MCA sds: pipe (MCA v1.0, API v1.0, Component v1.0)
                MCA sds: seed (MCA v1.0, API v1.0, Component v1.0)
                 MCA sds: singleton (MCA v1.0, API v1.0, Component v1.0)
                MCA sds: slurm (MCA v1.0, API v1.0, Component v1.0)
[clement@kfc TestMPI]$





--
Clement Kam Man Chu
Research Assistant
School of Computer Science & Software Engineering
Monash University, Caulfield Campus
Ph: 61 3 9903 1964

#include "mpi.h"
#include <stdio.h>
#include <math.h>

double f( double );
double f( double a )
{
    return (4.0 / (1.0 + a*a));
}

int main( int argc, char *argv[])
{
    int done = 0, n, myid, numprocs, i;
    double PI25DT = 3.141592653589793238462643;
    double mypi, pi, h, sum, x;
    double startwtime = 0.0, endwtime;
    int  namelen;
    char processor_name[MPI_MAX_PROCESSOR_NAME];

    MPI_Init(&argc,&argv);
    MPI_Comm_size(MPI_COMM_WORLD,&numprocs);
    MPI_Comm_rank(MPI_COMM_WORLD,&myid);
    MPI_Get_processor_name(processor_name,&namelen);

    fprintf(stderr,"Process %d on %s\n", myid, processor_name);

    n = 0;
    while (!done)
    {
        if (myid == 0)
        {
/*
            printf("Enter the number of intervals: (0 quits) ");
            scanf("%d",&n);
*/
            if (n==0) n=100; else n=0;

            startwtime = MPI_Wtime();
        }

        MPI_Bcast(&n, 1, MPI_INT, 0, MPI_COMM_WORLD);
        if (n == 0) {
            done = 1;
        }
        else
        {
            h   = 1.0 / (double) n;
            sum = 0.0;
            for (i = myid + 1; i <= n; i += numprocs)
            {
                x = h * ((double)i - 0.5);
                sum += f(x);
            }
            mypi = h * sum;

            MPI_Reduce(&mypi, &pi, 1, MPI_DOUBLE, MPI_SUM, 0,
                       MPI_COMM_WORLD);

            if (myid == 0)
            {
                printf("pi is approximately %.16f, Error is %.16f\n",
pi, fabs(pi - PI25DT));
                endwtime = MPI_Wtime();
                printf("wall clock time = %f\n", endwtime-startwtime);
            }
        }
    }
    MPI_Finalize();

    return 0;
}
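
For completeness, the attached program would typically be built with the
Open MPI wrapper compiler and launched with mpirun, along these lines (the
output name and process count are only an example):

$ mpicc -o cpi cpi.c
$ mpirun -np 2 cpi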


--
{+} Jeff Squyres
{+} The Open MPI Project
{+} http://www.open-mpi.org/

