On Sat, Nov 7, 2009 at 12:47 PM, David Schwartz <dav...@webmaster.com> wrote:

> Your logic is backwards here. You are trying to decide whether or not to
> read data on the decrypted output link, so why are you 'select'ing on the
> encrypted input link?
>
> SSL is a state machine, not a filter. The implementation of SSL_read is
> *NOT*:
>
> 1) Read some data from the socket.
> 2) If we got any data, decrypt it.
> 3) Return the data we read.
>
> It is:
>
> 1) Try to make forward progress, doing any reads and writes as necessary.
> 2) If this resulted in any decrypted data, return it.
> 3) If not, tell the caller why.
>
> As a result, you can only 'select' *after* calling SSL_read, never before.
> And you cannot assume that you will be selecting in the read direction,
> because either can be necessary.
>
> DS
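[A minimal sketch of the pattern described above, assuming an SSL* already
attached to a non-blocking socket `fd` with the handshake completed; the
function name, the lack of a timeout and the abbreviated error handling are
illustrative, not from the original post:]

#include <sys/select.h>
#include <openssl/ssl.h>

/* Sketch: read application data over a non-blocking SSL connection,
 * calling SSL_read() first and select()ing only in whichever direction
 * SSL_read() says it needs before retrying. */
static int ssl_read_wait(SSL *ssl, int fd, void *buf, int len)
{
    for (;;) {
        int n = SSL_read(ssl, buf, len);
        if (n > 0)
            return n;                       /* got decrypted data */

        switch (SSL_get_error(ssl, n)) {
        case SSL_ERROR_WANT_READ: {         /* needs more ciphertext first */
            fd_set rfds;
            FD_ZERO(&rfds);
            FD_SET(fd, &rfds);
            if (select(fd + 1, &rfds, NULL, NULL, NULL) < 0)
                return -1;
            break;                          /* now retry SSL_read() */
        }
        case SSL_ERROR_WANT_WRITE: {        /* e.g. renegotiation must send */
            fd_set wfds;
            FD_ZERO(&wfds);
            FD_SET(fd, &wfds);
            if (select(fd + 1, NULL, &wfds, NULL, NULL) < 0)
                return -1;
            break;                          /* now retry SSL_read() */
        }
        case SSL_ERROR_ZERO_RETURN:         /* peer closed the TLS session */
            return 0;
        default:                            /* transport or protocol error */
            return -1;
        }
    }
}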
Hi David,

The main idea was to avoid polling in an infinite loop and consuming CPU
for nothing. I wrote that code with this reasoning: "if the particular
client socket is calling for our thread's attention, then fetch the data."
I took that approach because I don't know of another way to do
non-blocking I/O without a polling cycle. If I loop forever on SSL_read(),
the CPU stays busy with that job, so I looked for a way to avoid it:
something should "inform" the thread that there is data ready to be read
on that socket. Mmmh... I can't see how to do that without select().

The important point here is that each thread is attending only one client.
Maybe that looks confusing: "why use select() if you are always polling
the same socket?" Answer: I don't know of another system call that blocks
until a file descriptor is ready to be read.

That part of the code is threaded, and although you are right to ask why
a server should have 1,000 threads when it has 1,000 connections, this
particular application will be a very connection-limited server. For
example, 20 clients is already a huge number of connections, and the
number of threads is bounded by the number of connections.

So, if I just call SSL_read() first on non-blocking I/O, the server burns
CPU cycles whenever the client isn't writing or sending anything. Without
the select() approach and with a maximum of 32 clients, my CPU usage went
to 200% (100% per core). With the select() approach, CPU usage is
proportional to the clients' read/write activity.

I believe you are a more experienced developer than I am (in fact, I'm not
what you could call A developer), so if it's not too much to ask: how do
you solve this kind of problem without tearing up the roots of the
multithreaded server design? I mean, how can you block execution, waiting
for some "noise" on the file descriptor before taking action, without
using select()?

I really appreciate your taking the time to point out my mistakes, and
sorry if this derails the main topic of the thread.

Regards,

--
If you want freedom, compile the source. Get gentoo.

Sebastián Treu
http://labombiya.com.ar
______________________________________________________________________
OpenSSL Project                                 http://www.openssl.org
User Support Mailing List                    openssl-users@openssl.org
Automated List Manager                           majord...@openssl.org
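[For a thread-per-client design with only a handful of connections, one
common answer to the question above -- how to block until the peer sends
something without calling select() at all -- is to leave the socket in
blocking mode and let SSL_read() itself sleep. A rough sketch; the struct,
thread function name and buffer size are illustrative and not from the
original post:]

#include <unistd.h>
#include <pthread.h>
#include <openssl/ssl.h>

struct client {
    int fd;          /* connected TCP socket, left in blocking mode */
    SSL_CTX *ctx;    /* shared server SSL context */
};

/* Intended to be started with pthread_create() for each accepted
 * connection; freeing the struct client is left to the caller. */
static void *client_thread(void *arg)
{
    struct client *c = arg;
    char buf[4096];
    SSL *ssl = SSL_new(c->ctx);

    SSL_set_fd(ssl, c->fd);
    if (SSL_accept(ssl) == 1) {          /* blocking handshake */
        int n;
        /* On a blocking socket, SSL_read() sleeps in the kernel until
         * data arrives, the peer shuts down, or an error occurs, so the
         * thread does not spin. */
        while ((n = SSL_read(ssl, buf, sizeof(buf))) > 0) {
            /* ... handle n decrypted bytes ... */
        }
    }
    SSL_shutdown(ssl);
    SSL_free(ssl);
    close(c->fd);
    return NULL;
}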