Thank you for your reply.

In conclusion, I have summarized these points here for future users.

In OpenMPI 1.8.4, MPIIO is supported by both the ROMIO and OMPIO modules, and 
both modules support OrangeFS 2.8.8 (as PVFS2).
OpenMPI should be compiled with:
./configure --prefix=/opt/modules/openmpi-1.8.4 --with-sge --with-psm 
--with-pvfs2=/opt/orangefs 
--with-io-romio-flags='--with-file-system=pvfs2+ufs+nfs 
--with-pvfs2=/opt/orangefs'
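
To verify that both modules were built, "ompi_info | grep ' io:'" should list 
both the romio and ompio components (the exact output format may differ 
between versions).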

When you are using the ROMIO module (--mca io romio), the PVFS2 filesystem does 
not need to be mounted, but you need to use the prefix "pvfs2:" in the filename 
(i.e. "pvfs2:/path_to_data/filename"). If PVFS2 is mounted, the prefix is not 
needed.

When you are using the OMPIO module (--mca io ompio), the PVFS2 filesystem must 
be mounted. OMPIO will still use PVFS2 directly; the mounted filesystem is only 
used to decide which filesystem the file is placed on. The prefix in the 
filename is not supported.
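
With OMPIO the same program works unchanged; only the filename changes to the 
mounted path (e.g. "/mnt/pvfs2/filename", assuming that is your mount point), 
and you select the module at run time with something like (mpiio_hello is just 
a placeholder name for the compiled program):

mpirun --mca io ompio -np 4 ./mpiio_hello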

Thanks
Hanousek Vít

---------- Original message ----------
From: Rob Latham 
To: us...@open-mpi.org
Date: 25. 2. 2015 16:54:12
Subject: Re: [OMPI users] MPIIO and OrangeFS

On 02/25/2015 02:01 AM, vithanousek wrote:

> Do you know how to use OMPIO without mounting pvfs2? If I tried the same 
> filename format as in ROMIO, I got "MPI_ERR_FILE: invalid file".
> If I use the normal filename format ("/mountpoint/filename") and force the 
> use of pvfs2 by using --mca io ompio --mca fs pvfs2, then my app fails with
> mca_fs_base_file_select() failed (and a backtrace).

Sorry, I forgot to mention the importance (to ROMIO) of the file system 
prefix.  ROMIO can detect file systems two ways:
- either by using stat
- or by consulting a "file system prefix"

For PVFS2 or OrangeFS, prefixing the file name with 'pvfs2:' will tell 
ROMIO "treat this file like a PVFS2 file", and ROMIO will use the 
"system interface" to PVFS2/OrangeFS.

This file system prefix is described in the MPI standard and has proven 
useful in many situations.

Edgar has drawn the build error of OMPI-master to my attention.  I'll 
get that fixed straightaway.
==rob

>
> The OrangeFS documentation (http://docs.orangefs.com/v_2_8_8/index.htm) has a 
> chapter about using ROMIO, and it says that I should compile apps with 
> -lpvfs2. I have tried it, but nothing changed (ROMIO works with the special 
> filename format, OMPIO doesn't work).
>
> Thanks for your help. If you point me to some useful documentation, I will 
> be happy.
> Hanousek Vít
>
>
> ---------- Original message ----------
> From: Rob Latham
> To: us...@open-mpi.org, vithanou...@seznam.cz
> Date: 24. 2. 2015 22:10:08
> Subject: Re: [OMPI users] MPIIO and OrangeFS
>
> On 02/24/2015 02:00 PM, vithanousek wrote:
>> Hello,
>>
>> I'm not sure if I have my OrangeFS (2.8.8) and OpenMPI (1.8.4) set up 
>> correctly. One short question:
>>
>> Is it necessary to have OrangeFS mounted through the kernel module if I 
>> want to use MPIIO?
>
> nope!
>
>> My simple MPIIO hello world program doesn't work if I haven't mounted 
>> OrangeFS. When I mount OrangeFS, it works. So I'm not sure if OMPIO (or 
>> ROMIO) is using the pvfs2 servers directly or if it is using the kernel 
>> module.
>>
>> Sorry for the stupid question, but I didn't find any documentation about it.
>
> http://www.pvfs.org/cvs/pvfs-2-8-branch-docs/doc/pvfs2-quickstart/pvfs2-quickstart.php#sec:romio
>
> It sounds like you have not configured your MPI implementation with
> PVFS2 support (OrangeFS is a re-branding of PVFS2, but as far as MPI-IO
> is concerned, they are the same).
>
> OpenMPI passes flags to romio like this at configure time:
>
>    --with-io-romio-flags="--with-file-system=pvfs2+ufs+nfs"
>
> I'm not sure how OMPIO takes flags.
>
> If pvfs2-ping and pvfs2-cp and pvfs2-ls work, then you can bypass the
> kernel.
>
> also, please check return codes:
>
> http://stackoverflow.com/questions/22859269/what-do-mpi-io-error-codes-mean/26373193#26373193
>
> ==rob
>
>
>> Thanks for the replies
>> Hanousek Vít
>

-- 
Rob Latham
Mathematics and Computer Science Division
Argonne National Lab, IL USA