As I pointed out, you can add code similar to the id project to also get the 
P id and use that as a sync.Map key.
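
Something along these lines, as a rough sketch (procID here is a hypothetical 
stand-in for whatever that package actually exposes, and the sketch assumes 
the goroutine stays on the same P for the duration of the append):

    package main

    import (
    	"fmt"
    	"sync"
    )

    // procID is a hypothetical stand-in for a real P-id helper (such as
    // the one in the id project mentioned above); it always returns 0
    // here so the sketch compiles and runs.
    func procID() int { return 0 }

    // batch holds the items accumulated on one P before they are flushed
    // to the global LRU.
    type batch struct{ items []string }

    // batches maps P id -> *batch. Because only one goroutine runs on a P
    // at a time, appends to a P's own batch need no extra lock, provided
    // the goroutine is pinned to that P while it appends.
    var batches sync.Map

    func add(item string) {
    	v, _ := batches.LoadOrStore(procID(), &batch{})
    	b := v.(*batch)
    	b.items = append(b.items, item)
    }

    func main() {
    	add("a")
    	add("b")
    	batches.Range(func(k, v interface{}) bool {
    		fmt.Printf("P %v: %v\n", k, v.(*batch).items)
    		return true
    	})
    }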

> On Jul 24, 2019, at 12:36 PM, Tamás Gulácsi <tgulacs...@gmail.com> wrote:
> 
> On Wednesday, July 24, 2019 at 18:05:56 UTC+2, Zihan Yang wrote:
>> 
>> I should have said that my evaluation is just a self-written cycle 
>> measurement, which is very rough and lacks repeated experiments, so the 
>> factor might differ in your case. But the contention on the single 
>> sync.Mutex really hinders performance in my case.
>> 
>> > "what evidence is there a thread-local cache would?"
>> 
>> Strictly speaking, I'm not looking for thread-local storage, but P-local 
>> storage (similar to per-CPU data in Linux). Since a P can run only one 
>> goroutine at any time, P-local storage needs no locking. Exclusive locking 
>> is needed only when I flush the batched items into the global LRU cache.
>> 
>> Since P-local storage does not exist, I'm thinking about reducing the 
>> operations on the list. Not every operation needs to delete and reinsert 
>> an element in the list; some only change attributes of the item itself. 
>> That is a compromise of the LRU strategy, but this way we need only a 
>> read lock when we operate on the item itself rather than on the list.
>> 
>> I am not an expert in caching and golang, so please correct me if I 
>> misunderstand anything.
>> 
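
That read-lock-on-the-item idea might look roughly like this sketch (the 
names entry, cache, touch, and insert are illustrative, not from any real 
code in this thread):

    package main

    import "sync"

    // entry is an illustrative cache item; hits stands in for whatever
    // attribute gets updated instead of reordering the list.
    type entry struct {
    	mu   sync.Mutex
    	key  string
    	hits int
    }

    // cache guards its structure with an RWMutex: structural changes
    // (insert, evict, reorder) take the write lock, while updating an
    // item's own fields takes only the read lock plus the item's mutex.
    type cache struct {
    	mu    sync.RWMutex
    	items map[string]*entry
    }

    // touch records a hit without restructuring anything, so concurrent
    // touches contend only on the read lock and the per-item mutex.
    func (c *cache) touch(key string) {
    	c.mu.RLock()
    	e := c.items[key]
    	c.mu.RUnlock()
    	if e == nil {
    		return
    	}
    	e.mu.Lock()
    	e.hits++
    	e.mu.Unlock()
    }

    // insert changes the structure, so it takes the write lock.
    func (c *cache) insert(key string) {
    	c.mu.Lock()
    	c.items[key] = &entry{key: key}
    	c.mu.Unlock()
    }

    func main() {
    	c := &cache{items: make(map[string]*entry)}
    	c.insert("k")
    	c.touch("k")
    }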
>> Jesper Louis Andersen <jesper.lo...@gmail.com> wrote on Wed, Jul 24, 2019 at 10:44 PM:
>>>> On Wed, Jul 24, 2019 at 7:16 AM Zihan Yang <whois.z...@gmail.com> wrote:
>>> 
>>>> 
>>>> I tried to evaluate the performance with and without the LRU: the 
>>>> performance with a global LRU is 10x slower than without any LRU.
>>>> 
>>> 
>>> This would make me step back a bit and revisit my initial plan. If a global 
>>> cache isn't helping, what evidence is there that a thread-local cache 
>>> would? Said another way, the work you are caching must have a certain 
>>> computational overhead to it, which warrants a cache in the first place. 
>>> 
> 
> 
> @dgryski has several different cache eviction policies implemented; see 
> https://github.com/dgryski/go-tinylfu and 
> https://godoc.org/github.com/dgryski/go-clockpro for example.
> https://github.com/golang/groupcache has an lru subpackage, too, and 
> groupcache may also help as a cache server.
> 
> What is the lifecycle of the client application?
> One goroutine per app? Then you could create a cache for that goroutine, 
> mmaping a common seed, for example.
> One goroutine per syscall? Then have NumCPU caching goroutines and 
> communicate with them through channels (a sketch of this follows below).
> Is contention making you slower than just using the host OS's block cache? 
> Then that cache is doing its job, and you don't need your own.
> 
> Tamás Gulácsi
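
For the "NumCPU caching goroutines" option above, a rough sketch of what 
Tamás describes might look like this (req, startShards, and shardFor are 
illustrative names; each shard goroutine owns a private map, and requests 
are routed to a shard by hashing the key):

    package main

    import (
    	"fmt"
    	"hash/fnv"
    	"runtime"
    )

    // req is a request to a caching goroutine: a get when value is nil,
    // a set otherwise; reply, when non-nil, receives the lookup result.
    type req struct {
    	key   string
    	value interface{}
    	reply chan interface{}
    }

    // startShards launches one caching goroutine per CPU. Each goroutine
    // owns its map outright, so no locking is needed inside the loop.
    func startShards() []chan req {
    	shards := make([]chan req, runtime.NumCPU())
    	for i := range shards {
    		ch := make(chan req)
    		shards[i] = ch
    		go func() {
    			cache := make(map[string]interface{})
    			for r := range ch {
    				if r.value != nil {
    					cache[r.key] = r.value
    				}
    				if r.reply != nil {
    					r.reply <- cache[r.key]
    				}
    			}
    		}()
    	}
    	return shards
    }

    // shardFor routes a key to its owning goroutine by hashing the key.
    func shardFor(shards []chan req, key string) chan req {
    	h := fnv.New32a()
    	h.Write([]byte(key))
    	return shards[h.Sum32()%uint32(len(shards))]
    }

    func main() {
    	shards := startShards()
    	shardFor(shards, "k") <- req{key: "k", value: 42}
    	reply := make(chan interface{})
    	shardFor(shards, "k") <- req{key: "k", reply: reply}
    	fmt.Println(<-reply) // 42
    }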
