My feeling is that a per-P bump-pointer allocation space could fit into 
Go's current GMP model and give faster allocation than the current path, 
which has to compute the span and search for a free slot.

But in any case, fragmentation is a big pain. I'm not sure whether the 
non-moving property of Go's GC was a design goal in itself or just a 
consequence of not doing generations and compaction. Read barriers and 
object moving are still expensive in Java, e.g. in Shenandoah, where the 
throughput numbers are still fairly low.

On Tuesday, May 16, 2017 at 9:58:13 PM UTC-7, Ian Lance Taylor wrote:
>
> On Tue, May 16, 2017 at 8:27 PM,  <leven...@gmail.com> wrote: 
> > 
> > It's not clear why when you use "a set of per-thread caches" you "lose 
> advantages of bump allocator". At any point of time, a single goroutine is 
> executed on a thread. The points when a goroutine gains and loses the 
> execution context of a thread, and when it is transferred from one thread 
> to another are known to runtime. At those points a goroutine could cache 
> (eg in a register) the current thread's bump allocation address and use it 
> for very fast bump allocation during execution. 
>
> Fair enough, although it's considerably more complicated, as you have 
> to allocate a chunk of address space for each thread, you have to 
> replenish those chunks, you go back to worrying about fragmentation, 
> etc. 
>
> Ian 
>
