(combined responses)

> No, of course no. In this context we are talking of kernel/system
> implementation of select()/read() and you mix this with SSL.

        Because it demonstrates precisely the problem. The 'select' function
has no way to know what type of read function will follow, and there are
several with different blocking semantics. A plain 'read' does not block the
same way as 'recvmsg(MSG_WAITALL)'.
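
        To make that concrete, here is a rough sketch (illustrative only;
'fd' is assumed to be a connected, blocking-mode TCP socket):

    /* Sketch: select() cannot know which of these two calls comes next,
     * and they have very different blocking semantics. */
    #include <sys/select.h>
    #include <sys/socket.h>
    #include <unistd.h>

    void demo(int fd)
    {
        fd_set rfds;
        char buf[4096];
        ssize_t n;

        FD_ZERO(&rfds);
        FD_SET(fd, &rfds);
        select(fd + 1, &rfds, NULL, NULL, NULL);   /* reports "readable" */

        /* A plain read() returns whatever happens to be available. */
        n = read(fd, buf, sizeof(buf));

        /* MSG_WAITALL asks for the full sizeof(buf) bytes, so this call
         * can still block even though select() just said "readable". */
        n = recv(fd, buf, sizeof(buf), MSG_WAITALL);
        (void)n;
    }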

> Can you focus and give precision answer to this question
> without involving "hypothetical operation/situation", without mixing
> with some "protocol data", unnamed sophisticated systems and ZONE51 ?

        When you code to standards, you do not make assumptions based on what
you see in current implementations or on what you think will continue to
happen. You code based on the guarantees you actually have. The standard
provides a perfect way to get the behavior you need.
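
        Concretely, the way I have in mind is ordinary non-blocking mode. A
minimal sketch, assuming the usual fcntl()/O_NONBLOCK route (the function
name is mine, purely illustrative):

    /* Put a descriptor into non-blocking mode using the standard
     * fcntl()/O_NONBLOCK interface.  Once this is set, a read-style call
     * that finds nothing to deliver fails with EAGAIN/EWOULDBLOCK instead
     * of blocking. */
    #include <fcntl.h>

    int make_nonblocking(int fd)
    {
        int flags = fcntl(fd, F_GETFL, 0);

        if (flags == -1)
            return -1;
        return fcntl(fd, F_SETFL, flags | O_NONBLOCK);
    }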

        The current case shows the problem. The 'select' function has no way to
know what type of read function you have in mind.

        This has bitten real code many times now. There was the Linux UDP
recvmsg issue. There was the Solaris listening-socket accept issue. Now there
is this "got protocol data, not application data" issue.

        This faulty assumption has bitten real code in at least three different
ways now. It's time to put it to rest.

--

>>      Do you know of any implementation where this is the case? For example,
>> I call 'read'. There is no data, but a TCP window advertisement is sent.
>> Work has been done, should 'read' return? If so, *what* should it return?
>> EWOULDBLOCK?!

>Euh.  Now you are confusing Kernel level background processing with the
>application processing.  You do not need to call read() to make the
>kernel process an ACK packet which reduces TCP window advertisement.
>This is done concurrently inside the kernel without any application
>assistance.

        What standard says an implementation cannot check to see if it was
about to send an ACK when you called 'read' and decide to send it a few
microseconds early, since it's looking at the connection anyway?

        You keep stating things that are not guaranteed as if you can
synthesize a guarantee from them. No combination of unguaranteed assertions
that are true of some particular platform will give a guarantee.

>In short you have yet to find a case where the application layer could
>not forsee or incited the select/poll/read/write event model to stop
>working in the way you claim it can.

        Then how did this conversation start in the first place? How did the
Linux recvmsg UDP problems happen?

        Real code breaks because people make assumptions based on how one
kernel works, or because they've never seen a system do otherwise. That's
fine when you have no other choice, say due to missing standards or
inadequate documentation. It sucks, but if there's no other choice, you live
with it.

        But here, you have pearls. The standard gives you a simple,
guaranteed way to get precisely the behavior you want. You spit on it and
instead synthesize a method that will work if and only if a large
combination of platform-specific things remains true. You persist even
though this exact same type of assumption has broken applications before and
is breaking one now.
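
        The whole pattern amounts to nothing more than this (again, just a
sketch; it assumes the descriptor was already put into non-blocking mode as
above):

    /* Because the descriptor is non-blocking, a readiness indication that
     * turns out to be spurious (bad checksum, protocol-only data, another
     * thread got there first) costs one EAGAIN and a trip back to select(),
     * never a hang. */
    #include <errno.h>
    #include <unistd.h>

    void read_loop(int fd)
    {
        char buf[4096];

        for (;;) {
            /* ... select()/poll() on fd here ... */

            ssize_t n = read(fd, buf, sizeof(buf));
            if (n > 0) {
                /* ... process n bytes of application data ... */
                continue;
            }
            if (n == 0)
                break;                  /* peer closed the connection */
            if (errno == EAGAIN || errno == EWOULDBLOCK || errno == EINTR)
                continue;               /* spurious wakeup: poll again */
            break;                      /* real error */
        }
    }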

        I don't get it.

        DS

