Hi Gus:

I used yum install to install openmpi 1.4.7 and it went through. Then I tested a small hello-world code and it worked. Now I have two questions for you:

1. Do we have an openmpi 1.6.5 rpm package, so I can use rpm or yum to install it?
2. Do you know how to specify the directory in which to install openmpi when using yum install?

Thank you.

Best regards,
Yuping
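(For what it's worth: as far as I know, yum always installs an RPM into the paths that were built into the package, so there is no yum option to choose the install directory. If Open MPI is needed under a specific prefix such as the /usr/local/openmpi1.6.5 used in this thread, the usual route is a source build with --prefix. A minimal sketch; the tarball URL and the prefix are illustrative, not a statement about what packages exist:

***
# download and unpack the 1.6.5 source tarball (URL for illustration; see the Open MPI download page)
wget http://www.open-mpi.org/software/ompi/v1.6/downloads/openmpi-1.6.5.tar.gz
tar xzf openmpi-1.6.5.tar.gz
cd openmpi-1.6.5

# configure with the prefix you want, then build and install
./configure --prefix=/usr/local/openmpi1.6.5
make -j4 all
sudo make install
***
)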
-----Original Message-----
From: users-boun...@open-mpi.org [mailto:users-boun...@open-mpi.org] On Behalf Of Gus Correa
Sent: Thursday, July 25, 2013 7:22 PM
To: Open MPI Users
Subject: Re: [OMPI users] errors testing openmpi1.6.5 ----

Hi Yuping

Something seems to be broken in the way you set your environment variables.
We use .bashrc/.tcshrc for this.

For what it's worth, the bash man page says:

***
"When bash is invoked as an interactive login shell, or as a non-interactive shell with the --login option, it first reads and executes commands from the file /etc/profile, if that file exists. After reading that file, it looks for ~/.bash_profile, ~/.bash_login, and ~/.profile, in that order, and reads and executes commands from the first one that exists and is readable. The --noprofile option may be used when the shell is started to inhibit this behavior.

When a login shell exits, bash reads and executes commands from the file ~/.bash_logout, if it exists.

When an interactive shell that is not a login shell is started, bash reads and executes commands from ~/.bashrc, if that file exists. This may be inhibited by using the --norc option. The --rcfile file option will force bash to read and execute commands from file instead of ~/.bashrc."
***

Notice the difference between an interactive login shell and an interactive shell that is not a login shell.
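One common way to avoid tripping over that difference is to keep the settings in ~/.bashrc and have ~/.bash_profile source it, so login and non-login shells end up with the same environment. A minimal sketch; the Open MPI paths below assume the /usr/local/openmpi1.6.5 install discussed in this thread:

***
# ~/.bash_profile: read by login shells; pull in ~/.bashrc so both kinds of shell agree
if [ -f ~/.bashrc ]; then
    . ~/.bashrc
fi

# ~/.bashrc: read by interactive non-login shells; put the Open MPI paths here
export PATH=/usr/local/openmpi1.6.5/bin:${PATH}
export LD_LIBRARY_PATH=/usr/local/openmpi1.6.5/lib:${LD_LIBRARY_PATH}
***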
However, it is hard to guess what is going on.
We (the OMPI list) don't even know if you are running the job on a single machine, on a cluster, or on a few networked workstations.
It would help if you told us more.

Anyway, you could try to pass the environment variables along on the mpiexec command line.
Something like this:

***
export PATH=...                (whatever you use)
export LD_LIBRARY_PATH=...     (whatever you use)
mpiexec -x PATH -x LD_LIBRARY_PATH -np 4 ./my_program
***

See 'man mpiexec' for details.

In any case, in my humble opinion, the best approach is to fix the problems with your environment for good. They will haunt you sooner or later.

I hope this helps,
Gus Correa

On 07/25/2013 05:58 PM, Yuping Sun wrote:
>
> Hi Gus:
>
> Thank you. I did this, using .bash_profile to add the PATH and LD_LIBRARY_PATH, but it did not help.
> Is there anything else you can think of?
>
> Best regards,
>
> Yuping
>
> -----Original Message-----
> From: users-boun...@open-mpi.org [mailto:users-boun...@open-mpi.org]
> On Behalf Of Gus Correa
> Sent: Thursday, July 25, 2013 4:51 PM
> To: Open MPI Users
> Subject: Re: [OMPI users] errors testing openmpi1.6.5 ----
>
> Hi Yuping
>
> A simple way to do it is to put the settings in your initialization files, which are the hidden "dot files" in your home directory.
>
> It depends on the shell you use (do 'echo $SHELL' to see).
>
> If bash, in .bashrc:
>
> export PATH=/usr/local/openmpi1.6.5/bin:${PATH}
> export LD_LIBRARY_PATH=/usr/local/openmpi1.6.5/lib:${LD_LIBRARY_PATH}
>
> If csh/tcsh, in .cshrc/.tcshrc:
>
> setenv PATH /usr/local/openmpi1.6.5/bin:${PATH}
> setenv LD_LIBRARY_PATH /usr/local/openmpi1.6.5/lib:${LD_LIBRARY_PATH}
>
> Are you running the programs on a single machine, or is it a cluster or some farm of networked machines?
> In the second case, you need to make sure your home directory is shared across the machines, or, if it is not shared, make the modifications above on each machine.
>
> I hope this helps.
>
> Gus Correa
>
> On 07/25/2013 03:11 PM, Yuping Sun wrote:
>> Hi Gus:
>>
>> I went back and set the PATH and LD_LIBRARY_PATH following the FAQ answer.
>> However, it did not change anything.
>>
>> What else can you suggest?
>>
>> Thank you.
>>
>> Yuping
>>
>> -----Original Message-----
>> From: users-boun...@open-mpi.org [mailto:users-boun...@open-mpi.org]
>> On Behalf Of Gus Correa
>> Sent: Wednesday, July 24, 2013 8:28 PM
>> To: Open MPI Users
>> Subject: Re: [OMPI users] errors testing openmpi1.6.5 ----
>>
>> Hi Yuping
>>
>> Did you set your PATH and LD_LIBRARY_PATH?
>> Please see these FAQ entries:
>> http://www.open-mpi.org/faq/?category=running#run-prereqs
>> http://www.open-mpi.org/faq/?category=running#adding-ompi-to-path
>>
>> I hope this helps,
>> Gus Correa
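Those FAQ entries boil down to making sure the shell that launches mpiexec really sees the new install first. A quick way to check, assuming the /usr/local/openmpi1.6.5 install from this thread:

***
which mpiexec            # expect /usr/local/openmpi1.6.5/bin/mpiexec
mpiexec --version        # expect it to report 1.6.5
echo $PATH               # /usr/local/openmpi1.6.5/bin should come before any other MPI
echo $LD_LIBRARY_PATH    # /usr/local/openmpi1.6.5/lib should be listed
***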
>> On 07/24/2013 08:09 PM, Yuping Sun wrote:
>>> Dear All:
>>>
>>> I downloaded openmpi 1.6.5 and installed it on my Linux workstation with the help of a NASA engineer. Then I tried to test the openmpi installation, but I get the following error messages:
>>>
>>> [ysun@ysunrh mpi]$ which mpiexec
>>> /usr/local/openmpi1.6.5/bin/mpiexec
>>>
>>> [ysun@ysunrh mpi]$ mpiexec utils/MPIcheck/mpicheck
>>> [ysunrh.fdwt.com:24905] mca: base: component_find: unable to open /usr/local/openmpi1.6.5/lib/openmpi/mca_paffinity_hwloc: lt_dlerror() returned NULL! (ignored)
>>> [ysunrh.fdwt.com:24905] mca: base: component_find: unable to open /usr/local/openmpi1.6.5/lib/openmpi/mca_carto_auto_detect: lt_dlerror() returned NULL! (ignored)
>>> [ysunrh.fdwt.com:24905] mca: base: component_find: unable to open /usr/local/openmpi1.6.5/lib/openmpi/mca_carto_file: lt_dlerror() returned NULL! (ignored)
>>> [ysunrh.fdwt.com:24905] mca: base: component_find: unable to open /usr/local/openmpi1.6.5/lib/openmpi/mca_shmem_mmap: lt_dlerror() returned NULL! (ignored)
>>> [ysunrh.fdwt.com:24905] mca: base: component_find: unable to open /usr/local/openmpi1.6.5/lib/openmpi/mca_shmem_posix: lt_dlerror() returned NULL! (ignored)
>>> [ysunrh.fdwt.com:24905] mca: base: component_find: unable to open /usr/local/openmpi1.6.5/lib/openmpi/mca_shmem_sysv: lt_dlerror() returned NULL! (ignored)
>>> --------------------------------------------------------------------------
>>> It looks like opal_init failed for some reason; your parallel process is
>>> likely to abort. There are many reasons that a parallel process can
>>> fail during opal_init; some of which are due to configuration or
>>> environment problems. This failure appears to be an internal failure;
>>> here's some additional information (which may only be relevant to an
>>> Open MPI developer):
>>>
>>>   opal_shmem_base_select failed
>>>   --> Returned value -1 instead of OPAL_SUCCESS
>>> --------------------------------------------------------------------------
>>> [ysunrh.fdwt.com:24905] [[INVALID],INVALID] ORTE_ERROR_LOG: Error in file runtime/orte_init.c at line 79
>>> [ysunrh.fdwt.com:24905] [[INVALID],INVALID] ORTE_ERROR_LOG: Error in file orterun.c at line 694
>>>
>>> I also tried to use ompi_info to get more information; it gave me a lot of error messages, and I list some below:
>>>
>>> [ysunrh.fdwt.com:24920] mca: base: component_find: unable to open /usr/local/openmpi1.6.5/lib/openmpi/mca_osc_pt2pt: lt_dlerror() returned NULL! (ignored)
>>> [ysunrh.fdwt.com:24920] mca: base: component_find: unable to open /usr/local/openmpi1.6.5/lib/openmpi/mca_osc_rdma: lt_dlerror() returned NULL! (ignored)
>>> [ysunrh.fdwt.com:24920] mca: base: component_find: unable to open /usr/local/openmpi1.6.5/lib/openmpi/mca_btl_self: lt_dlerror() returned NULL! (ignored)
>>> [ysunrh.fdwt.com:24920] mca: base: component_find: unable to open /usr/local/openmpi1.6.5/lib/openmpi/mca_btl_sm: lt_dlerror() returned NULL! (ignored)
>>> [ysunrh.fdwt.com:24920] mca: base: component_find: unable to open /usr/local/openmpi1.6.5/lib/openmpi/mca_btl_tcp: lt_dlerror() returned NULL! (ignored)
>>> [ysunrh.fdwt.com:24920] mca: base: component_find: unable to open /usr/local/openmpi1.6.5/lib/openmpi/mca_topo_unity: lt_dlerror() returned NULL! (ignored)
>>> [ysunrh.fdwt.com:24920] mca: base: component_find: unable to open /usr/local/openmpi1.6.5/lib/openmpi/mca_pubsub_orte: lt_dlerror() returned NULL! (ignored)
>>> [ysunrh.fdwt.com:24920] mca: base: component_find: unable to open /usr/local/openmpi1.6.5/lib/openmpi/mca_dpm_orte: lt_dlerror() returned NULL! (ignored)
>>>
>>>        MCA backtrace: execinfo (MCA v2.0, API v2.0, Component v1.6.5)
>>>           MCA memory: linux (MCA v2.0, API v2.0, Component v1.6.5)
>>>            MCA timer: linux (MCA v2.0, API v2.0, Component v1.6.5)
>>>      MCA installdirs: env (MCA v2.0, API v2.0, Component v1.6.5)
>>>      MCA installdirs: config (MCA v2.0, API v2.0, Component v1.6.5)
>>>            MCA hwloc: hwloc132 (MCA v2.0, API v2.0, Component v1.6.5)
>>>
>>> Could any of you give me some help and tell me what I need to do to install openmpi correctly, or give me some instructions to make it work? Thank you.
>>>
>>> Yuping Sun
>>>
>>> FloDesign Wind Turbine Corp
>>> 242 Sturbridge Road
>>> Charlton, MA 01507
>>> Direct: 508-434-1507
>>> Cell: 713-456-9420
>>> y...@fdwt.com
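For completeness: the repeated "mca: base: component_find: unable to open ... lt_dlerror() returned NULL!" lines mean the plugin .so files under /usr/local/openmpi1.6.5/lib/openmpi could not be dlopen'd. One common cause (an assumption here, not a confirmed diagnosis of this case) is that a shared library those plugins depend on cannot be found at runtime, for example because LD_LIBRARY_PATH does not include /usr/local/openmpi1.6.5/lib in the shell that runs mpiexec or ompi_info. A minimal way to check:

***
# pick one of the components that failed to load and see whether its dependencies resolve
ldd /usr/local/openmpi1.6.5/lib/openmpi/mca_shmem_mmap.so | grep "not found"

# confirm the Open MPI library directory is visible to the dynamic linker
echo $LD_LIBRARY_PATH | tr ':' '\n' | grep openmpi1.6.5
***

If ldd reports missing libraries, exporting LD_LIBRARY_PATH=/usr/local/openmpi1.6.5/lib:$LD_LIBRARY_PATH before running will usually clear these messages; if the .so files themselves are missing or truncated, reinstalling (running 'make install' again) may be needed.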