> I've thought about this problem before, in the context of a TCP
> sender, where the best solution is both (a) hard and (b) significantly
> different. I had not thought about it in the case of UDP, but yes,
> that could be a significant issue, particularly since UDP packets on
> the receive queue ...
< said:
> on a similar subject (UDP sockets), i notice that
> socket buffers do not have a pointer to the end of
> the queued mbufs, so sbappend*() routines have to scan the
> list of queued bufs. As you can imagine this is
> causing some nasty effect when a receiver is slow.
I've thought about ...
< said:
> A starting point, increment, and ceiling
NMBCLUSTERS *is* the ceiling. No memory is actually allocated
(although virtual address space is) until those clusters are actually
requested.
> based on the memory size of the system
That would be an improvement, but recall that many of these ...
Garrett,
on a similar subject (UDP sockets), i notice that
socket buffers do not have a pointer to the end of
the queued mbufs, so sbappend*() routines have to scan the
list of queued bufs. As you can imagine this is
causing some nasty effect when a receiver is slow.
Is it worthwhile to fix this?
< said:
> ENOBUFS == ESYSADMINNEEDSTORAISENMBCLUSTERS
BT! Wrong, but thanks for playing.
ENOBUFS is returned in many more circumstances than simply ``out of
mbufs''.
-GAWollman
To Unsubscribe: send mail to [EMAIL PROTECTED]
with "unsubscribe freebsd-net" in the body of the message
> Since this is UDP, I'm not sure much should be done, perhaps
> just document the return value, but honestly since it's _U_DP
exactly -- documenting is the only thing we can do.
There are far too many apps that might break if we
change this behaviour.
Ideally one could add a setsockopt to implement ...
* Luigi Rizzo <[EMAIL PROTECTED]> [010207 09:57] wrote:
>
> not really. The problem is not running out of mbufs; it is that the
> interface queue (usually limited to net.inet.ip.intr_queue_maxlen)
> fills up, and this has nothing to do with NMBCLUSTERS. This used
> not to be a problem in the past ...
Luigi Rizzo wrote:
>
> Hi,
>
> just occurred to me that there exists the following feature of
> send/sendmsg and probably also write on UDP sockets, and it would
> be worth documenting.
>
> When you attempt to send() to an udp socket, the socket buffer
> (which has no function other than bounding the max message size
> for UDP sockets) ...
> > ENOBUFS == ESYSADMINNEEDSTORAISENMBCLUSTERS
>
> Or perhaps ENOBUFS == E_SYSTEM_NEEDS_TO_RAISE_NMBCLUSTERS_ALL_ON_ITS_OWN?
it is not an NMBCLUSTERS problem, it is just the device queue
which is filling up, and this is a perfectly normal and desired
behaviour. One would just want that to be handled ...
Alfred Perlstein wrote:
>
> * Luigi Rizzo <[EMAIL PROTECTED]> [010207 09:14] wrote:
> > Hi,
> >
> > just occurred to me that there exists the following feature of
> > send/sendmsg and probably also write on UDP sockets, and it would
> > be worth documenting.
>
> Yes it is.
>
> [snip]
> > When you ...
> > When you attempt to send() to an udp socket, the socket buffer
> > (which has no function other than bounding the max message size
> > for UDP sockets) is just bypassed, and the low-level routine gets
> > called. The latter (typically ip_output() or ether_output()) can
> > return an ENOBUFS ...
* Luigi Rizzo <[EMAIL PROTECTED]> [010207 09:14] wrote:
> Hi,
>
> just occurred to me that there exists the following feature of
> send/sendmsg and probably also write on UDP sockets, and it would
> be worth documenting.
Yes it is.
[snip]
> When you attempt to send() to an udp socket, the socket buffer ...
Hi,
just occurred to me that there exists the following feature of
send/sendmsg and probably also write on UDP sockets, and it would
be worth documenting.
When you attempt to send() to an udp socket, the socket buffer
(which has no function other than bounding the max message size
for UDP sockets) is just bypassed ...