> -----Original Message-----
> From: Paul Eggert [mailto:[EMAIL PROTECTED]
>
> "Schwarz, Konrad" <[EMAIL PROTECTED]> writes:
>
> > The argument that since ptrdiff_t is intended to represent the
> > difference of pointers makes it somehow unsuitable to represent the
> > result of read()/write() is missing the point: on any useful ABI,
> > ptrdiff_t can represent the size of any object, can represent 0,
> > and can represent -1, which is what read() and cousins require.
>
> That's true nowadays, but in the old days there may have been
> a point to distinguishing between ptrdiff_t and ssize_t.  I
> wasn't there, but I imagine that the original point of
> ptrdiff_t was that it could be wider than size_t.  Consider
> the case of a machine with 16-bit addresses, 32-bit
> registers, and where objects can be larger than 2**15 bytes.
> (The Motorola 68000 in 16-bit address mode comes to mind.)
> On such a machine size_t (and therefore ssize_t) is 16 bits,
> but it's useful to make ptrdiff_t 32 bits.
>
> These days the distinction isn't all that relevant, because
> it's not so useful to have objects that are between 2**31 and
> 2**32 bytes in size.  So it would make sense now to require
> that ssize_t and ptrdiff_t be the same width.  I don't know
> of any modern counterexamples.
Sorry, you seem to have missed my point: even in the "old days", i.e., in
cases where sizeof (ptrdiff_t) > sizeof (size_t), it makes sense to define
the return value of read() to have type ptrdiff_t, since this ensures that
every object (for which sizeof is valid) can be read in a single operation.
Otherwise, technically, a check that the object is no larger than SSIZE_MAX
bytes is required (sketched in the P.S. below).

What is the justification for making the return value of read() have type
ssize_t?

Regards,
Konrad
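
P.S.  To make that SSIZE_MAX check concrete, here is a minimal sketch of a
wrapper that reads an arbitrarily large object with the ssize_t interface as
it stands.  It is only an illustration of the current POSIX rule that a count
greater than SSIZE_MAX gives an implementation-defined result; read_fully is
a made-up helper name, not an existing API:

    #include <limits.h>     /* SSIZE_MAX */
    #include <sys/types.h>  /* size_t, ssize_t */
    #include <unistd.h>     /* read() */

    /*
     * Fill 'buf' with up to 'len' bytes from 'fd'.  Because read()'s
     * behaviour is implementation-defined for counts above SSIZE_MAX,
     * each request is clamped to SSIZE_MAX and the call is looped --
     * the extra step that a ptrdiff_t-sized interface would avoid for
     * any object sizeof can describe.  Returns 0 on success, -1 on
     * error; *nread receives the byte count (a size_t, since an
     * ssize_t could not represent it for very large objects).
     */
    static int read_fully(int fd, void *buf, size_t len, size_t *nread)
    {
        char *p = buf;
        *nread = 0;

        while (*nread < len) {
            size_t chunk = len - *nread;
            if (chunk > SSIZE_MAX)
                chunk = SSIZE_MAX;      /* keep the count representable */

            ssize_t n = read(fd, p + *nread, chunk);
            if (n < 0)
                return -1;              /* error: caller inspects errno */
            if (n == 0)
                break;                  /* end of file */
            *nread += (size_t)n;
        }
        return 0;
    }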