collective I/O, for example).
-----Original Message-----
From: users On Behalf Of Gabriel, Edgar via users
Sent: Thursday, September 23, 2021 5:31 PM
To: Eric Chamberland; Open MPI Users
Cc: Gabriel, Edgar; Louis Poirel; Vivien Clauzon
Subject: Re: [OMPI users] Status of pNFS, CephFS and MPI I/O
> -----Original Message-----
> From: users On Behalf Of Eric Chamberland via users
> Sent: Thursday, September 23, 2021 9:28 AM
> To: Open MPI Users
> Cc: Eric Chamberland; Vivien Clauzon
> Subject: [OMPI users] Status of pNFS, CephFS and MPI I/O
>
> Hi,
>
> I am looking around for information about parallel filesystems supported
> for MPI I/O.
>
> Clearly, GPFS and Lustre are fully supported, but what about others?
>
> - CephFS
> - pNFS
> - Other?
>
> When I "grep" for "pnfs\|cephfs" in the ompi source code, I find nothing...
> Otherwise, I found this in ompi/
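The search described in the message can be sketched as a short shell snippet. The directory built below is a hypothetical placeholder, not a real Open MPI checkout; to run the actual check, point `SRC` at a real clone of the Open MPI source tree instead.

```shell
# Hypothetical stand-in tree; replace SRC with a real Open MPI checkout
# (e.g. a clone of the ompi repository) to run the actual check.
SRC=$(mktemp -d)
mkdir -p "$SRC/ompi/mca/fs"
printf 'lustre gpfs ufs\n' > "$SRC/ompi/mca/fs/components.txt"

# Case-insensitive recursive search for pNFS/CephFS mentions, as described
# in the message above; grep exits non-zero when nothing matches, so the
# fallback echo makes the "no hits" case explicit.
grep -ri "pnfs\|cephfs" "$SRC" || echo "no matches found"
```

With GNU grep, `\|` is alternation in a basic regular expression, so this matches either string anywhere under the tree.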