Many apologies if this has been discussed at length in a place that I've missed...
I'm a bit baffled by the continuing strong focus on the memory utilization of scripts rather than on the CPU load they impose on the host servers. If the goal is to keep a resident or a scripted item from causing performance problems on a region (maybe I'm missing an important issue here), wouldn't the relative CPU load imposed by that script be a critical item?

I understand that if the total active memory on a server exceeds its physically available RAM, paging increases and can potentially create issues. Is there some objective analysis of servers running the Second Life simulator code showing that they actually go into continuous swapping in that case, or is it occasional "blips" of performance degradation at longer intervals? It seems to me that continuing excessive CPU load would produce an ongoing low simulator frame rate, which would be more frustrating than occasional hits from swapping.

This line of thinking makes me wonder whether a better metric for managing the user's perception of performance would be script CPU load rather than memory size.

Thanks in advance, and again, if this has already been addressed please feel free to point me at the thread so that I can read up.

Best regards,
Joel
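
P.S. As a minimal sketch of the kind of measurement I have in mind, and assuming the OBJECT_SCRIPT_TIME and OBJECT_SCRIPT_MEMORY flags of llGetObjectDetails() are available on the region (I'm not certain they are everywhere, so please treat this as an illustration rather than a proposal):

    default
    {
        touch_start(integer num_detected)
        {
            // Query this object's own script statistics from the simulator.
            list details = llGetObjectDetails(llGetKey(),
                [OBJECT_SCRIPT_TIME, OBJECT_SCRIPT_MEMORY, OBJECT_TOTAL_SCRIPT_COUNT]);

            float cpu_seconds = llList2Float(details, 0);    // average script CPU time per frame, in seconds
            integer mem_bytes = llList2Integer(details, 1);  // script memory reserved, in bytes
            integer scripts   = llList2Integer(details, 2);  // number of scripts in the object

            llOwnerSay("Scripts: " + (string)scripts
                + ", CPU: " + (string)(cpu_seconds * 1000000.0) + " us/frame"
                + ", memory: " + (string)mem_bytes + " bytes");
        }
    }

Comparing that per-frame CPU figure against the region's frame budget seems like a more direct proxy for perceived lag than the raw memory number.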