On Sat, Mar 06, 2010 at 09:51:49PM +0100, Marine Kelley wrote:
> This is exactly how I had interpreted it, and this means that a script has to
> explicitly request less memory than the default 64k if the scripter wants to
> use less memory. And I don't think there will be any other way to do that than
> by calling an LSL function to request memory. Which means modifying existing
> scripts. This is unacceptable for all well-established business owners who
> made many different scripts that are now spread across SL. To me, a script
> should take as many bytes as it needs, not more, and that amount of memory
> should vary with time. Otherwise it is not practicable, and will break content
> once the limits are in place.

Watch them do it; you don't really think there is a Linden that can write
a malloc library (for scripts), right?

Willing to donate his librmalloc code, which is EXTREMELY efficient with memory,
Carlo Wood


PS Here is an old post that I dug up, about a test that I did with rmalloc:

  Here is the result of a stress test program which allocates 1000000
  random-sized blocks, freeing and allocating at random so that on average
  about 5000 blocks are allocated at the same moment (a sketch of that kind
  of test loop is appended below the numbers).

  gnu malloc:

  program output:
  max_heap_size = 8499200
  average heap size = 8372077; average allocated 5143466
  time 37.715371 s

  my malloc (called 'rmalloc'):

  program output:
  max_heap_size = 6220752
  average heap size = 6135204; average allocated 5143466
  time 35.703490 s

Thus, gnu malloc had on average 8372077 - 5143466 = 3228611 bytes of overhead
(about 63%), while rmalloc had on average 6135204 - 5143466 = 991738 bytes of
overhead (about 19%).
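
In case anyone wants to try something similar, here is a minimal sketch of
that kind of test loop (my own reconstruction, not the original program; the
slot count and the block-size range of 1..2048 bytes are assumptions):

  /* Sketch of the stress test loop: keep a pool of slots, and on every
   * iteration pick a random slot; if it holds a block, free it, otherwise
   * allocate a new block of random size into it.  In steady state roughly
   * half of the slots are occupied, so 10000 slots give about 5000 live
   * blocks on average. */

  #include <stdio.h>
  #include <stdlib.h>
  #include <time.h>

  #define SLOTS       10000
  #define ITERATIONS  1000000
  #define MAX_BLOCK   2048           /* assumed block sizes: 1 .. 2048 bytes */

  int main(void)
  {
    static void  *slot[SLOTS];
    static size_t size[SLOTS];
    size_t allocated = 0;            /* bytes currently allocated */
    unsigned long long sum = 0;      /* running sum, for the average */
    clock_t start = clock();

    srand(12345);                    /* fixed seed: repeatable runs */

    for (long i = 0; i < ITERATIONS; ++i)
    {
      int s = rand() % SLOTS;
      if (slot[s])                   /* occupied slot: free the block */
      {
        allocated -= size[s];
        free(slot[s]);
        slot[s] = NULL;
      }
      else                           /* empty slot: allocate a block */
      {
        size[s] = 1 + rand() % MAX_BLOCK;
        slot[s] = malloc(size[s]);
        if (!slot[s])
          return 1;
        allocated += size[s];
      }
      sum += allocated;
    }

    printf("average allocated = %llu\n", sum / ITERATIONS);
    printf("time %f s\n", (double)(clock() - start) / CLOCKS_PER_SEC);
    return 0;
  }

Measuring the heap size itself (max_heap_size above) is allocator-specific;
with glibc one would look at mallinfo() or the program break, so that part is
left out of the sketch.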
