I've been out for a bit, so I'll bundle multiple replies into a single mail. Before I start: thank you to everybody taking the time to respond in this thread :)
How are you determining what you call "consumed memory"?
Memory which isn't available to the system, i.e. "used" minus "buffers/cache".
Keep in mind that the kernel will by default use almost all free memory (not actually used by processes and libraries) as cache space, because it makes no sense to leave memory just lying around. However, once it's really needed, the caches will be dropped. Thus "free" memory is usually reported as low. Compare with "available" memory as reported by free.
Yup. I'm aware of this and it's not the issue I'm trying to solve. Disk cache is good.
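For reference, this is roughly how I arrive at those numbers; a quick sketch reading /proc/meminfo directly (not the exact script we use, and field names are the ones on recent kernels):

# Rough sketch: "used minus buffers/cache" vs. the kernel's own
# MemAvailable estimate, straight from /proc/meminfo (values in kB).
def meminfo():
    info = {}
    with open("/proc/meminfo") as f:
        for line in f:
            key, value = line.split(":", 1)
            info[key] = int(value.split()[0])  # drop the trailing "kB"
    return info

m = meminfo()
used = m["MemTotal"] - m["MemFree"]
consumed = used - m["Buffers"] - m["Cached"]   # what I call "consumed"
print("consumed (used - buffers/cache): %d kB" % consumed)
print("MemAvailable (kernel estimate):  %d kB" % m["MemAvailable"])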
free's 'available' value is computed without taking the "SUnreclaim" (unreclaimable kernel slabs) value from /proc/meminfo into account. The difference is usually not that great, but for an NFS server it can lead to funny things like "I have plenty of free swap and the OOM killer was invoked, despite 'available' telling me there's plenty of free RAM". Can happen with Java too, as the OP's e-mail shows us.
Unfortunately, the system I was most recently seeing the issue on has apparently been rebooted by the client, so right now I don't have a system to verify on, but it sounds like this is what I'm on about. I'll verify as soon as another system with issues pops up.
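For the record, the check I plan to run once another affected box shows up is just pulling the slab fields out of /proc/meminfo next to MemAvailable; a rough sketch:

# Fields worth watching on the affected box (values in kB);
# SUnreclaim is the unreclaimable kernel slab mentioned above.
fields = ("MemAvailable", "Slab", "SReclaimable", "SUnreclaim")
with open("/proc/meminfo") as f:
    for line in f:
        key, value = line.split(":", 1)
        if key in fields:
            print("%-13s %10d kB" % (key + ":", int(value.split()[0])))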
Happened to me with kernel 4.9.0-5, continued with kernel 4.9.0-6, seems to be solved by upgrading to backported kernel version 4.14.
Hm. This could bite me in the ass. Thanks for your feedback.
The last line from smem sticks out with high usage figures (smem's default columns, values in kB):

  PID  User   Command                        Swap     USS     PSS     RSS
  566  jetty  /usr/lib/jvm/java-8-openjdk  493896  958124  958381  959804
Java is actually consuming the expected amount of RAM for the settings we start it with. Also, the high memory usage persists after shutting down Jetty (and pretty much any other service), which is why I was hinting at possible kernel issues (not much else was running).
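Next time this reproduces I'll also grab /proc/meminfo before and after stopping Jetty and diff the two snapshots, to see which fields the "missing" memory actually sits in. Rough sketch; the stop step is whatever your init system uses (the "systemctl stop jetty" unit name below is just an example):

# Snapshot /proc/meminfo before and after stopping the service and
# print which fields changed, to see where the memory actually sits.
def snapshot():
    info = {}
    with open("/proc/meminfo") as f:
        for line in f:
            key, value = line.split(":", 1)
            info[key] = int(value.split()[0])  # kB
    return info

before = snapshot()
input("Stop Jetty now (e.g. systemctl stop jetty), then press Enter...")
after = snapshot()

for key in sorted(before):
    delta = after.get(key, 0) - before[key]
    if delta:
        print("%-16s %+10d kB" % (key + ":", delta))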