I have an application where I will be allocating millions of data structures, all of the same size. My program will need to run continuously and stay pretty responsive to its network peers.

The data is fairly static: once allocated, it will rarely need to be modified or deleted.

To minimize GC scanning overhead, I was thinking of allocating large fixed-size blocks on the heap, each holding 20K or so elements, and then writing a simple allocator to hand out pieces of those blocks as needed. Instead of having to scan millions of items on the heap, the GC would only be scanning 100 or so blocks.
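
Roughly, this minimal sketch is the kind of thing I have in mind (Element and its fields are placeholders for my real struct, and blockSize stands in for the 20K figure):

package main

import "fmt"

// Element stands in for the real fixed-size struct. Keeping it
// pointer-free matters: the GC does not scan heap spans whose
// element type contains no pointers.
type Element struct {
	ID    uint64
	Value [4]uint64
}

const blockSize = 20000 // ~20K elements per block

// Allocator carves elements out of large blocks so the GC sees a
// handful of big allocations instead of millions of tiny ones.
// No free list, since the data is rarely deleted.
type Allocator struct {
	blocks [][]Element // each entry is one large heap allocation
	used   int         // elements handed out from the newest block
}

// New returns a pointer into the newest block, allocating a fresh
// block when the current one is exhausted.
func (a *Allocator) New() *Element {
	if len(a.blocks) == 0 || a.used == blockSize {
		a.blocks = append(a.blocks, make([]Element, blockSize))
		a.used = 0
	}
	block := a.blocks[len(a.blocks)-1]
	e := &block[a.used]
	a.used++
	return e
}

func main() {
	var a Allocator
	for i := 0; i < 50000; i++ {
		e := a.New()
		e.ID = uint64(i)
	}
	fmt.Printf("blocks allocated: %d\n", len(a.blocks))
}

As I understand it, if the element type contains no pointers the GC never has to look inside the blocks at all, only at the 100 or so slice headers, which is the saving I'm hoping for.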

Sound reasonable?  Or does this 'go' against the golang way of doing things?

F
