On 10/05/2017 06:30 AM, Vladimir Sementsov-Ogievskiy wrote:
> 21.09.2017 15:18, Vladimir Sementsov-Ogievskiy wrote:
>> Hi all!
>>
>> I'm asking about this:
>>
>> "A server SHOULD try to minimize the number of chunks sent in a reply,
>> but MUST NOT mark a chunk as final if there is still a possibility of
>> detecting an error before transmission of that chunk completes"
>>
>> What do we mean by "possibility"? Formally, such a possibility always
>> exists, so we'll never mark a chunk as final.
>>
>
> One more question:
>
> For NBD_REPLY_TYPE_ERROR and NBD_REPLY_TYPE_ERROR_OFFSET, why do we
> need the message_length field? Why not calculate it as chunk.length - 4
> for NBD_REPLY_TYPE_ERROR and chunk.length - 12 for
> NBD_REPLY_TYPE_ERROR_OFFSET?
For consistency. If _all_ NBD_REPLY_TYPE_ERROR* messages have a
message_length field, then it is easier to write a generic handler that
knows how to deal with an unknown error, no matter which command the
error is sent in response to. Ideally, a server should never send an
error message that the client is not expecting, but having a robust
protocol that lets clients deal with bad servers is worth the redundancy
caused by being consistent, and we are more likely to add additional
error modes to existing commands than we are to add more success modes.

> For example, with NBD_REPLY_TYPE_OFFSET_DATA the variable data length
> is calculated, not specified separately.

That's because non-error types don't have the same consistency concerns;
if we want to introduce a new success response, we'll probably introduce
it via a new command, rather than as a reply to an existing command.
Furthermore, while error replies are likely to carry a free-form text
error description, success replies tend not to need one. The layout of
the error types is designed to make it easy to grab the free-form error
message from a known location for display to the user, even if the
client has no idea what the rest of the error means, as that may be a
useful debugging aid.

> What is the reason for a server to send NBD_REPLY_TYPE_ERROR with
> message_length < chunk.length - 4?

In all likelihood, well-written servers will never send garbage bytes
(possible only when setting chunk.length larger than message_length +
sizeof(documented fields)). But we wrote the spec to be conservative, in
case we want to add a later-defined field that earlier clients will
still gracefully ignore, rather than strict: allowing inequality,
instead of requiring exact lengths, lets a client skip over what it
considers garbage bytes, rather than dropping the connection because a
too-new server tried to send useful information in those bytes.

-- 
Eric Blake, Principal Software Engineer
Red Hat, Inc.
+1-919-301-3266 Virtualization: qemu.org | libvirt.org