But if the user apps are arbitrary and random, how will a cache be of any 
benefit? It only helps if the programs are reading the same files, and in 
that case you could use a logical file manager that keeps common files in 
memory. 

Or, as another poster pointed out, use a service worker model: partition the 
workers based on the files accessed (if known ahead of time) and give each 
worker a local cache. 
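
A minimal Go sketch of that partitioning idea (all names here are 
illustrative, not from any existing package): hash each file path to one of a 
fixed set of worker goroutines, so every request for a given file lands on the 
same worker and hits that worker's lock-free local cache.

package main

import (
    "fmt"
    "hash/fnv"
)

// request carries the file a piece of user code wants and a channel for
// the answer.
type request struct {
    path  string
    reply chan []byte
}

const nWorkers = 8

// worker owns its map outright, so lookups and inserts need no locking.
func worker(in <-chan request) {
    local := map[string][]byte{}
    for req := range in {
        data, ok := local[req.path]
        if !ok {
            data = readFromBackend(req.path) // stand-in for the real IO
            local[req.path] = data
        }
        req.reply <- data
    }
}

func readFromBackend(path string) []byte { return []byte("data for " + path) }

func main() {
    queues := make([]chan request, nWorkers)
    for i := range queues {
        queues[i] = make(chan request, 64)
        go worker(queues[i])
    }
    // Route every request for the same file to the same worker, so that
    // worker's local cache is the only place the file is ever cached.
    route := func(path string) chan request {
        h := fnv.New32a()
        h.Write([]byte(path))
        return queues[h.Sum32()%nWorkers]
    }
    reply := make(chan []byte, 1)
    route("/etc/hosts") <- request{path: "/etc/hosts", reply: reply}
    fmt.Printf("%s\n", <-reply)
}

The only coordination cost on the fast path is the channel send; whether that 
beats a shared lock is something only profiling the real workload can answer.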

Lastly, use cgo or assembly. You can get access to the P struct and the 
goroutine ID easily enough. 
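
If cgo or assembly is off the table, the goroutine ID can also be scraped in 
pure Go from runtime.Stack output; this is the slow, unsupported hack that 
packages like gls build on, shown here only as a sketch.

package main

import (
    "bytes"
    "fmt"
    "runtime"
    "strconv"
)

// goid parses the current goroutine's ID from the first line of its stack
// trace, which looks like "goroutine 42 [running]:".
func goid() uint64 {
    buf := make([]byte, 64)
    buf = buf[:runtime.Stack(buf, false)]
    buf = bytes.TrimPrefix(buf, []byte("goroutine "))
    buf = buf[:bytes.IndexByte(buf, ' ')]
    id, _ := strconv.ParseUint(string(buf), 10, 64)
    return id
}

func main() {
    fmt.Println("running on goroutine", goid())
}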

> On Jul 24, 2019, at 7:41 AM, Zihan Yang <whois.zihan.y...@gmail.com> wrote:
> 
> As I said before, my use case is a little complicated, so let me explain it.
> 
> I am trying to add more caching inside a project called [gVisor][1], an 
> open-source user-space kernel from Google. Simply put, it is a secure 
> container sandbox runtime that leverages hardware-assisted virtualization 
> (VT-x on Intel). Sentry, the core part of gVisor, can run in guest ring 0 
> (GR0) or host ring 3 (HR3); it usually runs in GR0, unless it encounters 
> privileged instructions that cannot be executed in non-root mode.
> 
> For IO operations, most of the time Sentry will trap down to HR3 and let 
> the host kernel (e.g., Linux) deal with the OS cache, exactly as you said. 
> However, due to the complex architecture and the many layers of abstraction 
> in gVisor, the performance is bad. Therefore, we would like to keep more 
> page cache inside gVisor to avoid trapping to HR3 on every IO operation.
> 
> Since we are adding caches in Sentry, there must be a way to manage and 
> reclaim them. LRU is the most intuitive strategy, but I find that the 
> contention overhead completely cancels out the performance improvement 
> brought by the cache. So I would like a way to avoid contention on the same 
> lru list.
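>
> A minimal sketch of one way to do that, splitting the single lru into 
> independently locked shards (sizes, names and the hash choice are 
> illustrative only):
>
> package main
>
> import (
>     "container/list"
>     "fmt"
>     "hash/fnv"
>     "sync"
> )
>
> const numShards = 16     // illustrative
> const perShardCap = 1024 // illustrative
>
> type entry struct {
>     key string
>     val []byte
> }
>
> // shard is an ordinary mutex-protected LRU; contention is confined to
> // goroutines whose keys happen to hash to the same shard.
> type shard struct {
>     mu    sync.Mutex
>     order *list.List // front = most recently used
>     items map[string]*list.Element
> }
>
> type shardedLRU struct {
>     shards [numShards]shard
> }
>
> func newShardedLRU() *shardedLRU {
>     c := &shardedLRU{}
>     for i := range c.shards {
>         c.shards[i].order = list.New()
>         c.shards[i].items = make(map[string]*list.Element)
>     }
>     return c
> }
>
> func (c *shardedLRU) shardFor(key string) *shard {
>     h := fnv.New32a()
>     h.Write([]byte(key))
>     return &c.shards[h.Sum32()%numShards]
> }
>
> func (c *shardedLRU) Put(key string, val []byte) {
>     s := c.shardFor(key)
>     s.mu.Lock()
>     defer s.mu.Unlock()
>     if el, ok := s.items[key]; ok {
>         el.Value.(*entry).val = val
>         s.order.MoveToFront(el)
>         return
>     }
>     s.items[key] = s.order.PushFront(&entry{key: key, val: val})
>     if s.order.Len() > perShardCap { // evict this shard's least recently used
>         old := s.order.Back()
>         s.order.Remove(old)
>         delete(s.items, old.Value.(*entry).key)
>     }
> }
>
> func (c *shardedLRU) Get(key string) ([]byte, bool) {
>     s := c.shardFor(key)
>     s.mu.Lock()
>     defer s.mu.Unlock()
>     el, ok := s.items[key]
>     if !ok {
>         return nil, false
>     }
>     s.order.MoveToFront(el)
>     return el.Value.(*entry).val, true
> }
>
> func main() {
>     c := newShardedLRU()
>     c.Put("page:42", []byte("cached page"))
>     if v, ok := c.Get("page:42"); ok {
>         fmt.Printf("%s\n", v)
>     }
> }
>
> With 16 shards, two goroutines only contend when their keys hash to the same 
> shard; the trade-off is that eviction is approximate (per-shard LRU rather 
> than one global recency order).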
> 
> [1]: https://github.com/google/gvisor
> 
> 
> Robert Engels <reng...@ix.netcom.com> 于2019年7月24日周三 下午7:52写道:
>> The OS file cache is almost certainly going to be better than rolling your 
>> own. You can probably still find the studies that the Lucene team did in 
>> this regard. 
>> 
>>> On Jul 24, 2019, at 12:16 AM, Zihan Yang <whois.zihan.y...@gmail.com> wrote:
>>> 
>>> My use case is a little bit complicated. The goroutines are running 
>>> user-defined applications, and they might concurrently access the same OS 
>>> file. Also, I cannot limit the number of goroutines because I cannot 
>>> control the user code.
>>> 
>>> I tried to evaluate the performance with and without the lru; the version 
>>> with a global lru is 10x slower than the version with no lru at all.
>>> 
>>> So contention on the global lru list is indeed a problem, at least for me.
>>> 
>>> On Wednesday, July 24, 2019 at 11:20:38 AM UTC+8, Bakul Shah wrote:
>>>> 
>>>> Instead of starting new goroutines, send requests to existing goroutines 
>>>> via a channel.
>>>> In effect you are simulating an N-core processor with per-core local 
>>>> caches and one shared cache, so you can look at how processors manage 
>>>> caches. Just create N goroutines at the start and profile to see what 
>>>> happens. If necessary you can adjust the number of goroutines based on 
>>>> load.
>>>> 
>>>> Note the cost of managing the LRU. If there is no clear pattern of 
>>>> accessing similar items, you may be better off using random replacement. 
>>>> For example, you could use LRU for the smaller local caches and random 
>>>> replacement for the global cache.
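>>>>
>>>> A sketch of the random-replacement option for the shared cache 
>>>> (hypothetical type, not from any library): when full, evict an arbitrary 
>>>> entry by relying on Go's unspecified map iteration order, so no recency 
>>>> list or per-access bookkeeping is needed.
>>>>
>>>> package main
>>>>
>>>> import (
>>>>     "fmt"
>>>>     "sync"
>>>> )
>>>>
>>>> // randomCache evicts an arbitrary entry when full; map iteration order
>>>> // is unspecified in Go, which is close enough to random for a sketch.
>>>> type randomCache struct {
>>>>     mu    sync.Mutex
>>>>     cap   int
>>>>     items map[string][]byte
>>>> }
>>>>
>>>> func newRandomCache(capacity int) *randomCache {
>>>>     return &randomCache{cap: capacity, items: make(map[string][]byte)}
>>>> }
>>>>
>>>> func (c *randomCache) Put(key string, val []byte) {
>>>>     c.mu.Lock()
>>>>     defer c.mu.Unlock()
>>>>     if len(c.items) >= c.cap {
>>>>         for k := range c.items { // pick an arbitrary victim
>>>>             delete(c.items, k)
>>>>             break
>>>>         }
>>>>     }
>>>>     c.items[key] = val
>>>> }
>>>>
>>>> func (c *randomCache) Get(key string) ([]byte, bool) {
>>>>     c.mu.Lock()
>>>>     defer c.mu.Unlock()
>>>>     v, ok := c.items[key]
>>>>     return v, ok
>>>> }
>>>>
>>>> func main() {
>>>>     c := newRandomCache(2)
>>>>     c.Put("a", []byte("1"))
>>>>     c.Put("b", []byte("2"))
>>>>     c.Put("c", []byte("3")) // evicts either "a" or "b"
>>>>     _, ok := c.Get("a")
>>>>     fmt.Println("a still cached:", ok)
>>>> }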
>>>> 
>>>> But if I were you I'd spend time on understanding the access patterns 
>>>> before anything else. If I had no idea of the pattern, I'd pick the 
>>>> simplest implementation and evolve it based on profiling. In other words, 
>>>> do not assume that "greatly reduce the contention for the global lru 
>>>> list" is the right thing to do *unless* profiling tells you so.
>>>> 
>>>>> On Jul 23, 2019, at 7:49 PM, Zihan Yang <whois.z...@gmail.com> wrote:
>>>>> 
>>>>> Thanks for the example code; this is a possible solution. But my 
>>>>> goroutines are not long-running. More specifically, each goroutine 
>>>>> performs some IO, then returns. The next time, a different goroutine 
>>>>> might access the same file.
>>>>> 
>>>>> One workaround is to return the local storage back to the CacheManager, 
>>>>> but that involves contention on the CacheManager and adds to its 
>>>>> complexity.
>>>>> 
>>>>> On Wednesday, July 24, 2019 at 1:48:40 AM UTC+8, Michael Jones wrote:
>>>>>> 
>>>>>> The simple, common way, if I understand your need correctly, is to 
>>>>>> launch a method goroutine.
>>>>>> 
>>>>>> type CacheManager struct {
>>>>>>     // things a worker needs to know, such as the global cache,
>>>>>>     // the specific worker's local cache, etc.
>>>>>> }
>>>>>> 
>>>>>> func master() {
>>>>>>     for i := 0; i < workers; i++ {
>>>>>>         m := new(CacheManager)
>>>>>>         m.x = y // set up your thread local storage
>>>>>>         // ... other per-worker setup
>>>>>>         go m.Worker()
>>>>>>     }
>>>>>> }
>>>>>> 
>>>>>> Unfortunately this does not seem to be in any intro guides, which pushes 
>>>>>> people to complicated workarounds.
>>>>>> 
>>>>>>> On Tue, Jul 23, 2019 at 10:22 AM Zihan Yang <whois.z...@gmail.com> 
>>>>>>> wrote:
>>>>>>> I am trying to implement an LRU cache. Several global lru lists could 
>>>>>>> be accessed concurrently by multiple goroutines, which could be a 
>>>>>>> disaster on a machine with 24 or more cores.
>>>>>>> 
>>>>>>> 
>>>>>>> 
>>>>>>> Therefore, it would be great if I could add items to P-local storage 
>>>>>>> and flush them into the lru list as a batch. This should greatly 
>>>>>>> reduce the contention on the global lru list.
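>>>>>>>
>>>>>>> Since plain Go does not expose P-local storage, a rough equivalent of 
>>>>>>> the batching idea is a per-worker buffer that is flushed into the 
>>>>>>> global lru under a single lock acquisition. A minimal sketch (names 
>>>>>>> and the flush threshold are illustrative, and a real version would 
>>>>>>> also need to cap and evict):
>>>>>>>
>>>>>>> package main
>>>>>>>
>>>>>>> import (
>>>>>>>     "container/list"
>>>>>>>     "sync"
>>>>>>> )
>>>>>>>
>>>>>>> // globalLRU is the single shared recency list behind one mutex.
>>>>>>> type globalLRU struct {
>>>>>>>     mu    sync.Mutex
>>>>>>>     order *list.List
>>>>>>> }
>>>>>>>
>>>>>>> // batch is a worker-local buffer standing in for P-local storage:
>>>>>>> // items accumulate with no locking and are pushed into the global
>>>>>>> // list in one critical section.
>>>>>>> type batch struct {
>>>>>>>     global *globalLRU
>>>>>>>     items  []string
>>>>>>> }
>>>>>>>
>>>>>>> func (b *batch) Add(key string) {
>>>>>>>     b.items = append(b.items, key)
>>>>>>>     if len(b.items) >= 64 { // flush threshold is illustrative
>>>>>>>         b.Flush()
>>>>>>>     }
>>>>>>> }
>>>>>>>
>>>>>>> func (b *batch) Flush() {
>>>>>>>     b.global.mu.Lock()
>>>>>>>     for _, k := range b.items {
>>>>>>>         b.global.order.PushFront(k)
>>>>>>>     }
>>>>>>>     b.global.mu.Unlock()
>>>>>>>     b.items = b.items[:0]
>>>>>>> }
>>>>>>>
>>>>>>> func main() {
>>>>>>>     g := &globalLRU{order: list.New()}
>>>>>>>     b := &batch{global: g}
>>>>>>>     for i := 0; i < 200; i++ {
>>>>>>>         b.Add("item")
>>>>>>>     }
>>>>>>>     b.Flush()
>>>>>>> }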
>>>>>>> 
>>>>>>> 
>>>>>>> 
>>>>>>> How can I do it? I saw some related GitHub issues, #8281 and #21355, 
>>>>>>> which led me to a project called gls, but the code seems too much to 
>>>>>>> integrate into my project (and I'd rather not include any third-party 
>>>>>>> package, to avoid potential legal issues). Is there a built-in way to 
>>>>>>> achieve this?
>>>>>>> 
>>>>>>> 
>>>>>>> 
>>>>>>> Thanks
>>>>>>> 
>>>>>>> 
>>>>>> -- 
>>>>>> Michael T. Jones
>>>>>> michae...@gmail.com
>>>>> 
>>>>> 
