On Fri, 31 Aug 2012, Michael C Tiernan wrote:

> That's a very good explanation (I think I actually understand it better than I did before.) Thank you for that.

> However, now I'm going to switch back to our primary discussion and ask: how do I, the system administrator, who is charged with the operation and performance of the system(s), utilize this new knowledge when I have *no control* or design/advisory input over the applications being executed on my systems (nor the ability to educate every person who writes a bit of code that gets run on my systems) to control the need for swap?

Hi Michael. I remember reading many years ago that there is a mathematical proof that you cannot guarantee you will never hit OOM unless you fully understand the memory usage of every process running on a system under all conditions. I just did a quick Google search to find it (I've been meaning to for ages, anyway), but in any case, the claim seems to make sense.

There are things you can do to minimise problems though.

(1) Limit swap

The ratio of RAM to swap is not what it was. 15 or 20 years ago, 1:2 was often a reasonable RAM:swap ratio even on systems that didn't specifically require it. Now it would generally be a bad idea. This is because improvements in memory and storage capacity have, in general, outpaced improvements in transfer rates, so swapping 50% of system memory back in today takes much longer than it used to.
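To put rough numbers on that (the RAM sizes and disk throughputs below are illustrative assumptions, not measurements), here is a quick back-of-the-envelope sketch in Python of how long it takes to read half of RAM back in from swap, then versus now:

    # Back-of-the-envelope: seconds to read 50% of RAM back in from swap.
    # The RAM sizes and disk throughputs are illustrative guesses, and
    # real swap-in is mostly random I/O, so actual times are worse.

    def swap_in_time(ram_bytes, disk_bytes_per_sec):
        return (ram_bytes / 2.0) / disk_bytes_per_sec

    MB = 1024 ** 2
    GB = 1024 ** 3

    # mid-1990s box: 64 MB RAM, disk streaming at roughly 5 MB/s
    print("then: ~%.0f seconds" % swap_in_time(64 * MB, 5 * MB))

    # 2012 server: 64 GB RAM, disk streaming at roughly 100 MB/s
    print("now:  ~%.0f minutes" % (swap_in_time(64 * GB, 100 * MB) / 60))

So even with optimistic sequential figures, digging a heavily swapped system out of the hole takes minutes rather than seconds, which is one reason a 1:2 RAM:swap ratio no longer makes sense on most systems.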

I've seen a lot of good comments in this thread but I'll drop in my standard comment on the use of swap space: http://www.practicalsysadmin.com/wiki/index.php/Swap

I wrote it up so that I wouldn't have to keep rewriting it every time the topic came up on a list :)

(2) Use the hard resource limit mechanisms available within the OS

Within Linux this is cgroups. You can use cgroups to put hard resource limits on individual components of the system.
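As a minimal sketch of the mechanism (assuming a cgroup v1 memory controller mounted at /sys/fs/cgroup/memory, and using a made-up group name and a 2 GB limit purely for illustration; this needs root), in Python:

    # Create a memory cgroup, cap it at 2 GB, and put this process in it.
    # Children started afterwards inherit the group and the hard limit.
    # Group name and limit are hypothetical examples.
    import os

    cg = "/sys/fs/cgroup/memory/hungry_app"
    if not os.path.isdir(cg):
        os.makedirs(cg)

    with open(os.path.join(cg, "memory.limit_in_bytes"), "w") as f:
        f.write(str(2 * 1024 ** 3))           # 2 GB hard limit

    with open(os.path.join(cg, "tasks"), "w") as f:
        f.write(str(os.getpid()))             # move this PID into the group

In practice you would more likely let libcgroup (cgcreate/cgexec) or your init system manage this rather than poking the files by hand, but the files above are the underlying mechanism.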

This won't solve all problems, of course. For example, what if the app gobbling all the memory is the primary application on the system (such as the database on a database server)? You generally wouldn't want it hitting arbitrary resource limits significantly lower than what the system could offer.

> I realize it sounds like I'm being dense or difficult and I apologize for it. While your explanation is very concise about how we *should* (http://tinyurl.com/4xr3574) write code to properly utilize/maximize the system's resources, the idea of using "memory mapping" does nothing

I'd say that is only true if you include developer/sysadmin time as a system resource. I would argue human time should always be considered but I think it is often forgotten.

Spending a lot of extra human time to slightly optimise code may be an example of 'misdirected optimisation'. Or it may not; it just depends.

I could keep waffling on these topics for some time but won't :)

Digressing...

I often find the discussions in LOPSA, LISA (SAGE), and SAGE-AU informed and interesting, and I'm happy I continue to engage with the community. I've been reflecting lately on the exploding number of people in our profession and on how many sysadmins/architects don't seem to engage with the community at all these days. I can only conclude that they:

(a) Know everything

or

(b) Are doomed to repeat the mistakes of the past

Cheers,

Rob

--
Email: rob...@timetraveller.org         Linux counter ID #16440
IRC: Solver (OFTC & Freenode)
Web: http://www.practicalsysadmin.com
Director, Software in the Public Interest (http://spi-inc.org/)
"Information is a gas"
