Hi there,

To my understanding, when a user process runs away and asks for too much
memory, paging/swapping kicks in, so either system performance takes a hit
(because of the paging and the I/O it causes), or /tmp (being swap-backed
tmpfs) gets squeezed so small that new processes can't be started, which
renders the system unusable.
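The only partial workaround I can think of for the /tmp side is to bound the
tmpfs size in /etc/vfstab, something along these lines (the 512m is just a
number I picked, not a recommendation):

    # /etc/vfstab -- cap /tmp so a full tmpfs can't eat all of swap
    swap    -    /tmp    tmpfs    -    yes    size=512m

But that only protects /tmp; it does nothing about the process's memory use
itself.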

So to me this is both a performance issue and a security issue (a single
runaway process can effectively deny service to everyone else on the box).

I understand that with ulimit, plimit and prctl, the virtual memory size,
stack size and data segment size can be limited. But I am not aware of a way
to set a limit system-wide: ulimit depends on the user's own shell settings,
and plimit/prctl are applied after the fact, to a process that is already
running. So how do we set a cap at the system level?
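For reference, the per-process knobs I mean look roughly like this (the values
are placeholders, and I'd double-check units and syntax against the ulimit,
plimit(1) and prctl(1) man pages):

    # shell limits -- only affect processes started from this shell
    ulimit -v 1048576     # address space, in KB here (~1 GB)
    ulimit -d 524288      # data segment
    ulimit -s 16384       # stack

    # after the fact, on a process that is already running
    plimit -v 1048576,1048576 12345       # soft,hard, I believe in KB
    prctl -n process.max-address-space -v 1gb -r -i process 12345

None of this stops the next runaway process that gets started outside those
settings.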

Also, with shared memory segments in the picture, how do they interact with
the limits set above?
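(For what it's worth, the shared memory controls I know of are separate from
the rlimits above; e.g. the old /etc/system tunable versus the Solaris 10
project resource control. The values, and the project name, below are only
illustrations:

    # Solaris 9 and earlier: /etc/system, takes a reboot
    set shmsys:shminfo_shmmax = 0x40000000

    # Solaris 10: per-project resource control
    prctl -n project.max-shm-memory -v 2gb -r -i project user.oracle

and neither of those caps overall memory consumption either.)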

So what would be the proper way of dealing with this kind of issue? It would
be great if the approach also works on Solaris 9.

Thanks,

Sean
 
 