"Roberto Reale" <[EMAIL PROTECTED]> writes:

> E.g., some servers very often panic or report ENOMEM when carrying
> out a request would cause them to exceed their allocable storage.
> Much better solutions can be devised: a server, for example, might
> keep a small reserve of memory (as Emacs does),
What emacs does is appropriate for interactive applications. But I
don't think it's very appropriate for servers.

> A still better idea would be, to design some sort of ``cooperative
> protocol'', whereby to allow trusted servers to borrow and lend each
> other memory pages, according to the amount of work they are charged
> with.

I don't think that solves the problem. Either you reserve memory for
the "trusted servers" so that other processes can't cause them to run
out of memory, or you don't. In either case, the cooperative protocol
won't make a significant difference. (Most times when you run out of
virtual memory, there's some process running amok and allocating all
the memory it can, and it will quickly allocate anything that a
cooperating process gives back to the system.)

I think a better approach is to realize that most servers should never
allocate memory spontaneously, only as part of the processing of some
client request. When a server runs out of memory, it should abort that
particular request and return an error to the client, and perhaps
write a log message, but otherwise just go on as if nothing happened.

Ideally, the *client* should allocate the needed memory and then lend
it to the server for the duration of the request (or, for requests
like open, which create a new server object and return a handle to it,
for the lifetime of the handle). But that's not going to happen soon,
at least not before the L4 port is running.

/Niels

_______________________________________________
Bug-hurd mailing list
[EMAIL PROTECTED]
http://mail.gnu.org/mailman/listinfo/bug-hurd
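The "abort only the failing request" discipline described above can be
sketched roughly as follows. This is not Hurd code; the request handler
and its string-duplicating job are made up purely for illustration. The
point is just that an allocation failure turns into an error return for
one client, not a server panic:

```c
#include <errno.h>
#include <stdio.h>
#include <stdlib.h>
#include <string.h>

/* Hypothetical handler for a single client request: duplicate a
   string on the client's behalf.  On allocation failure, fail this
   one request with ENOMEM; the server itself keeps running.  */
int
handle_dup_request (const char *input, char **output)
{
  size_t len = strlen (input) + 1;
  char *copy = malloc (len);
  if (copy == NULL)
    {
      /* Out of memory: log it, abort this request only, and let the
         server go on as if nothing happened.  */
      fprintf (stderr, "request aborted: out of memory\n");
      return ENOMEM;
    }
  memcpy (copy, input, len);
  *output = copy;
  return 0;
}
```

Note that all allocation here happens while processing a request, so
the error has an obvious place to go: back to the client that asked.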