Thank you, Rick and Jason, for the pointers.

On Saturday, December 11, 2021 at 2:12:55 AM UTC+5:30 Rick wrote:

> Don't forget to think about cache coherency. Caching gets more involved 
> once multiple caching microservices talk to the same database: every 
> create or update has to notify all the replicas so they can refresh (or 
> drop) their cached copies.
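>
> For illustration, the replica side of that notification could be as 
> small as this sketch (untested; the channel stands in for whatever 
> transport, e.g. Redis pub/sub or NATS, actually carries the messages 
> between services, and all the names here are made up):
>
> // Run one of these per replica. Writers publish the changed key after
> // every create or update; the replica drops its stale copy, so the
> // next read goes back to the database.
> func watchInvalidations(keys <-chan string, mu *sync.Mutex,
>     local map[string]string) {
>     for key := range keys {
>         mu.Lock()
>         delete(local, key)
>         mu.Unlock()
>     }
> }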
>
> On Thursday, 9 December 2021 at 23:08:16 UTC-8 Jason E. Aten wrote:
>
>> You might prefer a fixed-size cache rather than a TTL, so that your 
>> cache's memory use stays bounded. The logic is also simpler: just a map 
>> to provide the cache, and a slice to constrain the cache size. It's 
>> almost too short to be a library; see below. Off the top of my head -- 
>> not run, but you'll get the idea:
>>
>> type Payload struct {
>>     Key string // typical, but your key doesn't have to be a string;
>>                // any suitable map key type will work.
>>     pos int    // where we are in Cache.Order
>>     Val string // change the type from string to store your data; you
>>                // can add more fields after Val if you desire.
>> }
>>
>> type Cache struct {
>>     Map     map[string]*Payload
>>     Order   []*Payload
>>     MaxSize int
>> }
>>
>> func NewCache(maxSize int) *Cache {
>>     return &Cache{
>>         Map: make(map[string]*Payload),
>>         MaxSize: maxSize,
>>     }
>> }
>>
>> func (c *Cache) Get(key string) *Payload {
>>     return c.Map[key]
>> }
>>
>> func (c *Cache) Set(p *Payload) {
>>     v, already := c.Map[p.Key]
>>     if already {
>>         // Update logic; may not be needed if the key -> value
>>         // mapping is immutable. Remove any old payload stored
>>         // under this same key.
>>         c.Order = append(c.Order[:v.pos], c.Order[v.pos+1:]...)
>>         // Re-index the payloads that shifted left.
>>         for i := v.pos; i < len(c.Order); i++ {
>>             c.Order[i].pos = i
>>         }
>>     }
>>     // add the new payload at the end
>>     p.pos = len(c.Order)
>>     c.Order = append(c.Order, p)
>>     c.Map[p.Key] = p
>>
>>     // keep the cache size bounded
>>     if len(c.Order) > c.MaxSize {
>>         // evict the oldest
>>         kill := c.Order[0]
>>         delete(c.Map, kill.Key)
>>         c.Order = c.Order[1:]
>>         // everything shifted left by one
>>         for _, q := range c.Order {
>>             q.pos--
>>         }
>>     }
>> }
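>>
>> A quick usage sketch of the above (also untested) to show the FIFO 
>> eviction:
>>
>> c := NewCache(2)
>> c.Set(&Payload{Key: "a", Val: "1"})
>> c.Set(&Payload{Key: "b", Val: "2"})
>> c.Set(&Payload{Key: "c", Val: "3"}) // over MaxSize, so "a" is evicted
>> fmt.Println(c.Get("a") == nil) // true: "a" aged out
>> fmt.Println(c.Get("b").Val)    // "2"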
>>
>> If you really need a time-to-live and are willing to let memory balloon 
>> uncontrolled, then MaxSize would change from an int to a time.Duration, 
>> each Payload would record when it was added, and the deletion condition 
>> would change from being size based to being time.Since() based.
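>>
>> Concretely, that variant might look like this sketch (untested; it 
>> assumes Payload gains an added time.Time field that Set stamps with 
>> time.Now()):
>>
>> type Cache struct {
>>     Map   map[string]*Payload
>>     Order []*Payload
>>     TTL   time.Duration
>> }
>>
>> // expire drops entries older than TTL; call it from Set and/or from
>> // a time.Ticker goroutine.
>> func (c *Cache) expire() {
>>     for len(c.Order) > 0 && time.Since(c.Order[0].added) > c.TTL {
>>         delete(c.Map, c.Order[0].Key)
>>         c.Order = c.Order[1:]
>>     }
>>     // re-index the survivors, as in Set
>>     for i, q := range c.Order {
>>         q.pos = i
>>     }
>> }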
>>
>> Also look at sync.Map if you need goroutine safety. Obviously you can 
>> just add a sync.Mutex to Cache and lock during Set/Get, but for 
>> read-heavy workloads sync.Map can perform better.
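>>
>> The mutex route can just wrap the Cache above, e.g. this sketch 
>> (untested; SafeCache is a made-up name, and it needs import "sync"):
>>
>> type SafeCache struct {
>>     mu sync.Mutex
>>     c  *Cache
>> }
>>
>> func (s *SafeCache) Get(key string) *Payload {
>>     s.mu.Lock()
>>     defer s.mu.Unlock()
>>     return s.c.Get(key)
>> }
>>
>> func (s *SafeCache) Set(p *Payload) {
>>     s.mu.Lock()
>>     defer s.mu.Unlock()
>>     s.c.Set(p)
>> }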
>>
>>
>> On Thursday, December 9, 2021 at 9:59:32 AM UTC-6 Rakesh K R wrote:
>>
>>> Hi,
>>> My application needs to read data from databases frequently (it is a 
>>> read-intensive application), so I am planning to keep that data 
>>> in memory with some TTL for expiry.
>>> Can someone suggest a well-performing in-memory caching library that 
>>> suits this requirement?
>>>
>>>
