On Tue, Mar 29, 2016 at 12:07:59PM -0600, Eric Blake wrote:
> On 03/29/2016 12:03 PM, Wouter Verhelst wrote:
> > On Tue, Mar 29, 2016 at 11:45:45AM -0600, Eric Blake wrote:
> >> Supporting DF merely transfers the burden of collection between server
> >> and client. I suspect that there are cases where the server does NOT
> >> want to support DF (because it would require the server to allocate
> >> memory to collect the data before sending a single structured read
> >> reply),
> >
> > There are other ways to handle that; e.g., the server could have a
> > "request too large for non-fragmented read" error message. The spec
> > should give a minimum size that the server MUST support (which should be
> > reasonably large), and should state that a server MAY reply to any
> > request with DF set for a block larger than that minimum, with that
> > error.
>
> How does 64k sound?
Dunno. It might make sense to base this number on some "standard" minimum
request size from ATA or SCSI, if such a number exists there, but I don't
know either standard well enough to answer that myself. If no such number
exists (or nobody who knows speaks up soon enough), 64k is certainly good
enough, I suppose.

-- 
< ron> I mean, the main *practical* problem with C++, is there's like a
       dozen people in the world who think they really understand all of
       its rules, and pretty much all of them are just lying to
       themselves too.
 -- #debian-devel, OFTC, 2016-02-12
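
For concreteness, a minimal sketch of the server-side check being discussed
follows. It is illustrative only: the constant names and values below
(NBD_CMD_FLAG_DF, NBD_EOVERFLOW, NBD_MAX_DF_PAYLOAD) are assumptions standing
in for whatever the structured-reply extension ends up defining, not taken
from any existing implementation.

/* Sketch only; the three constants below are placeholders, not spec text. */
#include <stdint.h>

#define NBD_CMD_FLAG_DF      (1 << 2)    /* client asks for an unfragmented reply */
#define NBD_EOVERFLOW        75          /* "request too large for non-fragmented read" */
#define NBD_MAX_DF_PAYLOAD   (64 * 1024) /* minimum the server MUST honour, per this thread */

/* Decide whether a read request with these flags and length can be served.
 * Returns 0 if the server should proceed, or an NBD error code otherwise. */
static uint32_t check_df_read(uint16_t flags, uint32_t length)
{
    if ((flags & NBD_CMD_FLAG_DF) && length > NBD_MAX_DF_PAYLOAD) {
        /* The server MAY refuse rather than buffer the whole reply in memory. */
        return NBD_EOVERFLOW;
    }
    return 0;
}

A client that gets this error back can simply retry the same read without DF
set and reassemble the fragments itself, which is the burden-shifting trade-off
described above.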