David S. Miller wrote:
> From: Chase Douglas <[EMAIL PROTECTED]>
> Date: Mon, 30 Jan 2006 13:55:21 -0500
>> I have a question about the implementation of sendfile. In my current
>> server configuration, a new thread is spawned for each request and
>> sendfile is called to complete it (we realize that spawning a thread
>> per request is a poor design, but that is our current situation). If I
>> am serving many requests concurrently and try to send about 3 MB of
>> each request at a time through sendfile, and assuming each connection
>> is fast enough, would each sendfile call send all 3 MB at once, or
>> would it send perhaps 512 KB at a time, then another 512 KB, and so
>> on? If the latter, might each thread be switched out after only
>> 512 KB of a file has been sent?
> We'll queue enough to fill the socket send buffer, then block the
> thread. As ACKs come back and free up space, the thread wakes up
> and queues more sendfile() pages until the socket send buffer limit
> is reached again.
> -
> To unsubscribe from this list: send the line "unsubscribe netdev" in
> the body of a message to [EMAIL PROTECTED]
> More majordomo info at http://vger.kernel.org/majordomo-info.html
If I call sendfile on a 3 MB file, sendfile will fill the socket buffer
with part of that data. Is there any caching so that when the socket
buffer drains and needs to be refilled, the data does not have to be
read from the hard disk again, incurring a seek delay?
Thank you