Thank you for your reply.
As a conclusion, I have summarised these things here for future users.
In Open MPI 1.8.4, MPI-IO is supported by both the ROMIO and OMPIO modules, and
both modules support OrangeFS 2.8.8 (as PVFS2).
Open MPI should be compiled with:
./configure --prefix=/opt/modules/openm
Thanks very much for your reply, which cleared up all my problems.
For your information, I have Open MPI 1.8.4 with OrangeFS 2.8.8 support through
both OMPIO and ROMIO.
Now I'm sure it works correctly, but I haven't made any deep tests.
I'm not sure if it is useful to use pvfs2 without mounting, but I
Okay, it sounds like the problem is that some nodes have numactl installed, and
thus can perform binding, and some don’t. It also sounds like you’d prefer to
not bind your procs at all as they are multi-threaded, and you want to have as
many procs on a node as you do slots. You clearly recognize
--bind-to none worked and ran just fine. Additionally, --hetero-nodes also worked
without error. However, --hetero-nodes didn't allow threading properly while
--bind-to none did.
Is this the best way forward, adding that to all mpirun command lines or
setting some system variables? Or alternatively
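For reference, the two options I have in mind look roughly like this (the application
name, process count, and install prefix are only placeholders, and
hwloc_base_binding_policy is the parameter name the 1.8 documentation lists):

# Option 1: add the flag to every mpirun invocation
mpirun --bind-to none -np 16 ./my_threaded_app

# Option 2: set it once, system-wide, in the MCA parameter file under the install prefix
echo "hwloc_base_binding_policy = none" >> /opt/openmpi-1.8.4/etc/openmpi-mca-params.conf

# Option 2b: or per user / per job script, via an environment variable
export OMPI_MCA_hwloc_base_binding_policy=none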
On 02/25/2015 02:01 AM, vithanousek wrote:
Do you know how to use OMPIO without mounting pvfs2? If I tried the same filename
format as in ROMIO, I got "MPI_ERR_FILE: invalid file".
If I use the normal filename format ("/mountpoint/filename") and force the use of
pvfs2 by using --mca io ompio --mca f
Two separate comments.
1. I do not know the precise status of the PVFS2 support in the 1.8 series
of Open MPI for ROMIO; I haven't tested it in a while. On master, I know
that there is a compilation problem with PVFS2 and ROMIO in Open MPI, and
I am about to submit a report/question to ROMIO about
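On the filename question above, a rough sketch of the two conventions as I understand
them (the program name and paths are placeholders; the io component names are as they
appear in 1.8.4):

# ROMIO: the filesystem prefix in the path selects its PVFS2 driver directly,
# so the volume does not need to be mounted
mpirun -np 4 --mca io romio ./io_test pvfs2:/pvfs2-volume/testfile

# OMPIO: select it explicitly and pass an ordinary path on the mounted volume
mpirun -np 4 --mca io ompio ./io_test /mnt/orangefs/testfile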
Hi Jeff, this was very helpful. Thank you very much indeed. The only tweak is that
I had to type
sudo rm -rf /usr/local/lib/openmpi
I reinstalled openmpi and the code is now running seamlessly. Thanks again.
Javier
On 25 Feb 2015, at 14:29, Jeff Squyres (jsquyres) wrote:
> If you had an older Op
If you had an older Open MPI installed into /usr/local before you installed
Open MPI 1.8.4 into /usr/local, it's quite possible that some of the older
plugins are still there (and will not play nicely with the 1.8.4 install).
Specifically: installing a new Open MPI does not uninstall an older Op
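If it helps, a quick way to check for leftovers from an older install looks something
like this (assuming the default /usr/local prefix):

ls -l /usr/local/lib/openmpi   # the plugin directory; look for files older than the 1.8.4 install
mpirun --version               # confirm which Open MPI the shell actually picks up
ompi_info | grep "MCA io"      # stale or mismatched components tend to show up (or warn) here

If old plugins are there, removing that directory and then reinstalling 1.8.4 should
clear it up.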
I have a fresh install of openmpi-1.8.4 on a Mac with OSX 10.9.5. It compiled
and installed fine.
I have a Fortran code that runs perfectly on another similar machine with
openmpi-1.6.5. It compiled
without error on the new Mac. When I try to mpirun it, it gives the message below.
Thanks for your reply!
I checked my configuration parameters and it seems that everything is correct:
./configure --prefix=/opt/modules/openmpi-1.8.4 --with-sge --with-psm
--with-pvfs2=/opt/orangefs
--with-io-romio-flags='--with-file-system=pvfs2+ufs+nfs
--with-pvfs2=/opt/orangefs'
I have adde
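One way to double-check what actually got built into that tree (component names as
they show up in 1.8.4's ompi_info output):

ompi_info | grep "MCA fs"    # OMPIO's filesystem plugins; a pvfs2 entry should be listed
ompi_info | grep "MCA io"    # should list both ompio and romio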