:> > Quite true. In the embedded world we preallocate memory and shape
:> > the programs to what is available in the system. But if we run out
:> > of memory we usually panic and reboot - because the code is designed
:> > to NOT run out of memory and thus running out of memory is a catastrophic
:> > situation.
:
:*ACK* This is unacceptable in many 'embedded' systems.
Don't confuse a watchdog panic with other conditions. If the embedded system software is supposed to deal with a low-memory condition and can't, the failsafe is all that's left between it and infinity.

The statement that the kernel's overcommit methodology somehow prevents one from being able to build embedded systems on top of it is just plain incorrect. The embedded system is perfectly capable of implementing its own memory management to avoid the failsafe provided by the kernel.

Most of the embedded work I've done -- mainly remote telemetry units running with flash and a megabyte or so of RAM -- panics and reboots if it runs out of memory. I have several dozen units in the field, each keeping track of several thousand data points on 2-minute intervals, and they have never crashed. The only time we reboot them is when we need to upgrade the OS core; the last time was 4 years ago. *These* units will panic and reboot if they run out of memory precisely because the software is designed never to run out of memory. It is as simple as that.

						-Matt
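[A minimal sketch of the preallocate-and-panic discipline described above, assuming a fixed static arena sized at build time and a hypothetical panic_reboot() hook standing in for whatever actually trips the watchdog or reset vector on the real hardware. The pool size and the "several thousand data points" figure are illustrative, not taken from the poster's units.]

#include <stddef.h>
#include <stdint.h>
#include <stdio.h>
#include <stdlib.h>

/*
 * All memory the application will ever use is carved out of a fixed
 * static arena shaped to what the hardware provides.  A request that
 * cannot be satisfied is a design error, so the unit takes the
 * failsafe path rather than trying to limp along.
 */

#define POOL_SIZE (64 * 1024)           /* sized to the target board */

static uint8_t pool[POOL_SIZE];
static size_t  pool_used;

static void panic_reboot(const char *why)
{
    /* Hypothetical failsafe: on real hardware this would log the cause
     * and let the watchdog expire or hit the reset vector; here it is
     * stubbed out with abort(). */
    fprintf(stderr, "panic: %s -- rebooting\n", why);
    abort();
}

/* Bump allocator with no free(): the working set is fixed for the life
 * of the unit and everything is allocated once at startup. */
static void *pool_alloc(size_t n)
{
    /* keep allocations aligned for the strictest common type */
    n = (n + sizeof(long) - 1) & ~(sizeof(long) - 1);

    if (n > POOL_SIZE - pool_used)
        panic_reboot("out of preallocated memory");

    void *p = &pool[pool_used];
    pool_used += n;
    return p;
}

int main(void)
{
    /* Allocate the unit's entire working set up front, e.g. buffers
     * for a few thousand data points sampled on a fixed interval. */
    double *samples = pool_alloc(4000 * sizeof(double));
    samples[0] = 0.0;

    /* ... telemetry loop runs here and never allocates again ... */
    return 0;
}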