Hi John,
Thanks for looking at the code. You are right about the loop, but that clearly will not affect the I/O performance. About concurrent write jobs: I am experiencing a very low rate even with only 1 MPI process (so no concurrency at all). About the array size not being a multiple of the stripe size: that would affect performance only when many MPI processes are writing, right?

Best,
Denis

________________________________
From: lustre-discuss <[email protected]> on behalf of John Bauer <[email protected]>
Sent: Thursday, February 3, 2022 5:16:52 PM
To: [email protected]
Subject: Re: [lustre-discuss] lustre-discuss Digest, Vol 191, Issue 2

The following loop in wdfile.f90 is pointless, as the write happens only once for each rank: each rank writes out the array once and then closes the file. If the size of array 'data' is not a multiple of the Lustre stripe size, there is going to be a lot of read-modify-write going on.

   do ii = 0, size
      if ( rank == ii ) then
         !start = MPI_Wtime()
         write(unit=iounit) data(1:nx, 1:ny, 1:nz)
         close(iounit)
         !finish = MPI_Wtime()
         !write(6,'(i5,f7.4)') rank, finish - start
      else
      end if
   end do
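(For reference: since the loop body only executes on the iteration where ii equals rank, the whole construct reduces to one unconditional write per rank. A minimal sketch of the equivalent code, reusing the names from the snippet above and uncommenting the timing; start and finish are assumed to be declared double precision and iounit to be an already-opened unit:

   ! Equivalent to the loop above: each rank takes exactly one
   ! iteration, so write directly and time it with MPI_Wtime().
   start  = MPI_Wtime()
   write(unit=iounit) data(1:nx, 1:ny, 1:nz)
   close(iounit)
   finish = MPI_Wtime()
   write(6,'(i5,f7.4)') rank, finish - start

Note that this does not change the I/O pattern itself; if the size of data is not stripe-aligned, the read-modify-write John describes can still occur at the unaligned boundaries.)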
On 2/3/2022 9:39 AM, [email protected] wrote:

Today's Topics:

   1. RE-Fortran IO problem (Bertini, Denis Dr.)
   2. Re: RE-Fortran IO problem (Patrick Farrell)
   3. Re: RE-Fortran IO problem (Bertini, Denis Dr.)

----------------------------------------------------------------------

Message: 1
Date: Thu, 3 Feb 2022 12:43:21 +0000
From: "Bertini, Denis Dr." <[email protected]>
To: "[email protected]" <[email protected]>
Subject: [lustre-discuss] RE-Fortran IO problem

Hi,

Just as an add-on to my previous mail: the problem also shows up with Intel Fortran, so it is not specific to the GNU Fortran compiler. It therefore seems to be linked to how Fortran I/O is handled, which appears to be sub-optimal in the case of a Lustre filesystem. I would be grateful if someone could confirm or disconfirm that.

Here again is the code I used for my benchmarks:

https://git.gsi.de/hpc/cluster/ci_ompi/-/tree/main/f/src

Best,
Denis

---------
Denis Bertini
Abteilung: CIT
Ort: SB3 2.265a
Tel: +49 6159 71 2240
Fax: +49 6159 71 2986
E-Mail: [email protected]

GSI Helmholtzzentrum für Schwerionenforschung GmbH
Planckstraße 1, 64291 Darmstadt, Germany, www.gsi.de

Commercial Register / Handelsregister: Amtsgericht Darmstadt, HRB 1528
Managing Directors / Geschäftsführung: Professor Dr. Paolo Giubellino, Dr. Ulrich Breuer, Jörg Blaurock
Chairman of the GSI Supervisory Board / Vorsitzender des GSI-Aufsichtsrats: Ministerialdirigent Dr. Volkmar Dietz

------------------------------

Message: 2
Date: Thu, 3 Feb 2022 15:15:16 +0000
From: Patrick Farrell <[email protected]>
To: "Bertini, Denis Dr." <[email protected]>, "[email protected]" <[email protected]>
Subject: Re: [lustre-discuss] RE-Fortran IO problem

Denis,

FYI, the git link you provided seems to be non-public - it asks for a GSI login.

Fortran is widely used for applications on Lustre, so it's unlikely to be a Fortran-specific issue. If you're seeing I/O rates drop suddenly during activity, rather than being reliably low for some particular operation, I would look to the broader Lustre system. It may be suddenly extremely busy, or there could be, e.g., a temporary network issue. Assuming this is a system belonging to your institution, I'd check with your admins.

Regards,
Patrick
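(A practical starting point for that kind of check, assuming shell access to a Lustre client: lfs getstripe <file> shows a file's stripe count and stripe size, lfs df -h shows the fill level of the individual OSTs, and lctl get_param llite.*.stats dumps client-side I/O statistics. Exact parameter names vary between Lustre versions, so treat these as pointers rather than a recipe.)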
------------------------------

Message: 3
Date: Thu, 3 Feb 2022 15:38:58 +0000
From: "Bertini, Denis Dr." <[email protected]>
To: Patrick Farrell <[email protected]>, "[email protected]" <[email protected]>
Subject: Re: [lustre-discuss] RE-Fortran IO problem

Dear Patrick,

Thanks for the quick answer. Sorry for the broken link; I sent the code as a tarball in my previous mail anyway.

I use Fortran together with Lustre myself, but with a different application I/O layer than pure Fortran, i.e. MPI-IO or HDF5, and it works just fine. I had never used pure Fortran I/O with Lustre and was surprised by the low performance on our filesystem that users reported. In the tarball I adapted the pure Fortran code to use a different application I/O layer (HDF5), and in that case I see no performance problem.

Could it be that in both cases there are data-contention problems and/or temporary network issues, but the HDF5 I/O is just more resilient to them than Fortran I/O? Anyway, how does one properly check for these Lustre system problems (data contention/network)?

Regards,
Denis
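(For readers following along: below is a minimal sketch of the MPI-IO style of write that Denis reports working well. The file name, buffer size, and variable names are illustrative, not taken from his benchmark, and error handling is omitted. The collective write gives the MPI-IO layer a chance to aggregate requests and align them to the Lustre stripes:

   program mpiio_sketch
      use mpi
      implicit none
      integer, parameter :: nlocal = 131072   ! 1 MiB of doubles per rank (illustrative)
      integer :: rank, nprocs, fh, ierr
      integer(kind=MPI_OFFSET_KIND) :: offset
      double precision :: buf(nlocal)

      call MPI_Init(ierr)
      call MPI_Comm_rank(MPI_COMM_WORLD, rank, ierr)
      call MPI_Comm_size(MPI_COMM_WORLD, nprocs, ierr)

      buf = rank   ! fill the buffer with something recognizable per rank

      call MPI_File_open(MPI_COMM_WORLD, 'out.dat', &
           MPI_MODE_CREATE + MPI_MODE_WRONLY, MPI_INFO_NULL, fh, ierr)

      ! One contiguous, non-overlapping block per rank; the collective
      ! call lets the MPI-IO layer aggregate and stripe-align the writes.
      offset = int(rank, MPI_OFFSET_KIND) * int(nlocal, MPI_OFFSET_KIND) * 8_MPI_OFFSET_KIND
      call MPI_File_write_at_all(fh, offset, buf, nlocal, &
           MPI_DOUBLE_PRECISION, MPI_STATUS_IGNORE, ierr)

      call MPI_File_close(fh, ierr)
      call MPI_Finalize(ierr)
   end program mpiio_sketch

Compiled and launched in the usual way (e.g. mpif90 followed by mpirun), each rank writes its own block of a single shared file, which appears to be the pattern the wdfile.f90 benchmark is aiming for.)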
------------------------------

End of lustre-discuss Digest, Vol 191, Issue 2
**********************************************
_______________________________________________
lustre-discuss mailing list
[email protected]
http://lists.lustre.org/listinfo.cgi/lustre-discuss-lustre.org
