I have a code that builds and runs with both the Portland Group and Intel compilers on x86, AMD64, and Intel EM64T systems running various flavors of Linux on clusters. I am trying to port it to a 2-CPU Itanium2 (ia64) machine running Red Hat Enterprise Linux 4.0; it has gcc 3.4.6-8 and the Intel Fortran compiler 10.0.026 installed. I have built Open MPI 1.2.4 using these compilers.
When I built Open MPI, I didn't do anything special; I enabled debug, but that was really all. You can see this in the attached config.log. This system is not part of a cluster: the two onboard CPUs (it is an HP zx6000) are the only processors the job runs on. The code must run under MPI because the source calls it. I compiled the target software with the Fortran 90 compiler wrapper (mpif90), and I have been running the code in the foreground so that I could keep an eye on its behavior.

When I run the compiled and linked code [mpirun -np # {executable file}], it behaves as follows:

(1) With the source compiled at optimization -O0 and -np 1, the job runs very slowly (6 days of wall-clock time) to the correct answer on the benchmark.
(2) With the source compiled at optimization -O0 and -np 2, the benchmark job fails with a segmentation violation.
(3) With the source compiled at any other optimization level (-O1, -O2, -O3) and either processor count (-np 1 or -np 2), it fails in what I would call a "quiescent" manner, by which I mean it produces no error messages: when I submit the job, it prints a little standard output and quits after 2-3 seconds.

In an attempt to find the problem, the technical support agent at Intel has had me run some simple "Hello" programs. The first is an MPI hello code, the attached hello_mpi.f. It ran as expected, echoing one "Hello" for each of the two processors. The second is a non-MPI hello, the attached hello.f90. Since it is a non-MPI source, I was told that running it on a workstation with a properly configured MPI should echo only one "Hello"; the Intel agent told me that two such echoes indicate a problem with Open MPI. It echoed twice, so now I have come to you for help.

The other three attached files are the output requested on the "Getting Help" page: (1) the output of /sbin/ifconfig, (2) the output of ompi_info --all, and (3) the config.log file. The installation of Open MPI itself was as easy as could be. I am really ignorant of how it works beyond what I've read in the FAQs and learned in a little digging, so I hope it's a simple solution. I don't recognize the network types, so I assume TCP-based is what I have. I look forward to your help.
hello.f90
Description: hello.f90
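(For context, the non-MPI hello is just a plain serial Fortran 90 program that prints a single line. A minimal sketch of that kind of program follows; the actual attached hello.f90 may differ in detail.)

program hello
  implicit none
  ! Plain serial program: no MPI calls at all, so it knows nothing
  ! about ranks or communicators. Launched under "mpirun -np 2" it
  ! simply gets started twice.
  write(*,*) 'Hello'
end program hello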
[tedb@Checkers ~]$ /sbin/ifconfig
eth0      Link encap:Ethernet  HWaddr 00:30:6E:39:76:D9
          inet addr:10.1.50.4  Bcast:10.1.255.255  Mask:255.255.0.0
          inet6 addr: fe80::230:6eff:fe39:76d9/64 Scope:Link
          UP BROADCAST RUNNING MULTICAST  MTU:1500  Metric:1
          RX packets:165224 errors:0 dropped:0 overruns:0 frame:0
          TX packets:23461 errors:0 dropped:0 overruns:0 carrier:0
          collisions:0 txqueuelen:1000
          RX bytes:34744040 (33.1 MiB)  TX bytes:4991738 (4.7 MiB)
          Interrupt:56

eth1      Link encap:Ethernet  HWaddr 00:30:6E:39:87:DA
          inet addr:10.1.50.6  Bcast:10.1.255.255  Mask:255.255.0.0
          inet6 addr: fe80::230:6eff:fe39:87da/64 Scope:Link
          UP BROADCAST RUNNING MULTICAST  MTU:1500  Metric:1
          RX packets:125036 errors:0 dropped:0 overruns:0 frame:0
          TX packets:48 errors:0 dropped:0 overruns:0 carrier:0
          collisions:0 txqueuelen:1000
          RX bytes:11073091 (10.5 MiB)  TX bytes:2184 (2.1 KiB)

lo        Link encap:Local Loopback
          inet addr:127.0.0.1  Mask:255.0.0.0
          inet6 addr: ::1/128 Scope:Host
          UP LOOPBACK RUNNING  MTU:16436  Metric:1
          RX packets:1787 errors:0 dropped:0 overruns:0 frame:0
          TX packets:1787 errors:0 dropped:0 overruns:0 carrier:0
          collisions:0 txqueuelen:0
          RX bytes:2774903 (2.6 MiB)  TX bytes:2774903 (2.6 MiB)
ompi_info_Output.txt.gz
Description: ompi_info_Output.txt.gz
config.log.gz
Description: config.log.gz
hello_mpi.f
Description: hello_mpi.f
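(The MPI hello is the usual minimal test that initializes MPI, reports its rank, and finalizes, so one line is printed per launched process. A sketch of that kind of test follows; again, the real attached hello_mpi.f may differ in detail.)

      program hello_mpi
      implicit none
      include 'mpif.h'
      integer ierr, rank, nprocs
c     Start MPI, find out which rank this process is and how many
c     ranks were launched, print one line per rank, then shut down.
      call MPI_INIT(ierr)
      call MPI_COMM_RANK(MPI_COMM_WORLD, rank, ierr)
      call MPI_COMM_SIZE(MPI_COMM_WORLD, nprocs, ierr)
      write(*,*) 'Hello from rank ', rank, ' of ', nprocs
      call MPI_FINALIZE(ierr)
      end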