[OMPI users] Status of pNFS, CephFS and MPI I/O

2021-09-23 Thread Eric Chamberland via users
Hi, I am looking around for information about parallel file systems supported for MPI I/O. Clearly, GPFS and Lustre are fully supported, but what about others? - CephFS - pNFS - Other? When I "grep" for "pnfs\|cephfs" in the ompi source code, I find nothing... Otherwise I found this in ompi/

Re: [OMPI users] Status of pNFS, CephFS and MPI I/O

2021-09-23 Thread Gabriel, Edgar via users
Eric, generally speaking, ompio should be able to operate correctly on all file systems that have support for POSIX functions. The generic ufs component is, for example, being used on BeeGFS parallel file systems without problems; we are using that on a daily basis. For GPFS, the only reason we
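As a rough illustration of the kind of access pattern that needs nothing more than POSIX semantics from the underlying file system, here is a minimal, self-contained MPI I/O sketch. The file name, chunk size, and 4-rank run line are made up for this example, not taken from the thread:

/* Minimal sketch: independent MPI I/O write in which every rank touches
 * its own disjoint region of a shared file, so only POSIX semantics are
 * required of the underlying file system.
 * Typical build/run:  mpicc demo_write.c -o demo_write && mpirun -n 4 ./demo_write
 */
#include <mpi.h>

int main(int argc, char **argv)
{
    MPI_Init(&argc, &argv);

    int rank;
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);

    /* Each rank fills a 1 KiB buffer with its own marker byte. */
    const MPI_Offset chunk = 1024;
    char buf[1024];
    for (int i = 0; i < 1024; i++)
        buf[i] = (char)('A' + rank % 26);

    MPI_File fh;
    MPI_File_open(MPI_COMM_WORLD, "demo_output.dat",
                  MPI_MODE_CREATE | MPI_MODE_WRONLY, MPI_INFO_NULL, &fh);

    /* Disjoint offsets: rank r writes bytes [r*1024, (r+1)*1024). */
    MPI_File_write_at(fh, rank * chunk, buf, (int)chunk, MPI_CHAR,
                      MPI_STATUS_IGNORE);

    MPI_File_close(&fh);
    MPI_Finalize();
    return 0;
}

Because the regions never overlap, the sketch makes no assumptions about locking beyond what a POSIX-compliant file system already provides.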

Re: [OMPI users] Status of pNFS, CephFS and MPI I/O

2021-09-23 Thread Eric Chamberland via users
Thanks for your answer, Edgar! In fact, we are able to use NFS, and certainly any POSIX file system, on a single-node basis. I should have asked: what are the supported file systems for *multiple-node* read/write access to files? For NFS, MPI I/O is known to *not* work when

Re: [OMPI users] Status of pNFS, CephFS and MPI I/O

2021-09-23 Thread Gabriel, Edgar via users
-Original Message- From: Eric Chamberland Thanks for your answer, Edgar! In fact, we are able to use NFS, and certainly any POSIX file system, on a single-node basis. I should have asked: what are the supported file systems for *multiple-node* read/write access to files? ->

Re: [OMPI users] Status of pNFS, CephFS and MPI I/O

2021-09-23 Thread Gabriel, Edgar via users
Let me amend my last email by making clear that I do not recommend using NFS for parallel I/O. But if you have to, make sure your code does not do things like read-after-write, or have multiple processes writing data that ends up in the same file-system block (this can often be avoided by using collective
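The advice above about collective calls and keeping ranks out of each other's file-system blocks might look roughly like the following sketch. The 1 MiB block size and the payload size are assumptions chosen for illustration, not values given in the thread:

/* Sketch of the suggested pattern: each rank's file offset is rounded up
 * to a whole number of assumed file-system blocks (FS_BLOCK is a guess,
 * not a value from the thread), so no two ranks ever write into the same
 * block, and the write is collective so the MPI library can aggregate
 * the data instead of letting ranks collide. The code only writes, so it
 * also avoids the read-after-write pattern mentioned above. */
#include <mpi.h>
#include <stdlib.h>

#define FS_BLOCK (1 << 20)   /* assumed 1 MiB block/stripe size */

int main(int argc, char **argv)
{
    MPI_Init(&argc, &argv);

    int rank;
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);

    /* Example payload per rank, padded up to whole blocks for the offset. */
    const MPI_Offset my_bytes  = 300000;
    const MPI_Offset my_region = ((my_bytes + FS_BLOCK - 1) / FS_BLOCK) * FS_BLOCK;

    char *buf = calloc((size_t)my_region, 1);

    MPI_File fh;
    MPI_File_open(MPI_COMM_WORLD, "aligned_output.dat",
                  MPI_MODE_CREATE | MPI_MODE_WRONLY, MPI_INFO_NULL, &fh);

    /* Collective write: all ranks call it, offsets are disjoint and block-aligned. */
    MPI_File_write_at_all(fh, rank * my_region, buf, (int)my_bytes, MPI_CHAR,
                          MPI_STATUS_IGNORE);

    MPI_File_close(&fh);
    free(buf);
    MPI_Finalize();
    return 0;
}

Because MPI_File_write_at_all is collective, the library is free to aggregate the data onto a few writer ranks (two-phase I/O), which typically reduces the number of clients touching any given block and helps on file systems with weak consistency between clients.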