----- Original Message -----
>
>
> I was trying to get a better understanding of the caching algorithm:
> which items it would evict when the cache was full, and how long it
> would take to fill the cache. Hence the small cache size; it is
> faster to fill.
>
> I wasn't going to report the issue, but a segmentation fault is
> probably something that should be looked at as you pointed out.
>
> Anyway, as pointed out in previous threads, the cache is a circular
> buffer and the least recently used items get evicted first. As long
> as the item is cacheable, the headers do not affect how long it
> remains in the cache; the LRU determines this.
>
> As for how long it takes to fill the cache: subtract 65M from the
> value specified for the cache size in the storage config. If I set
> the cache size to 128M in the storage config, then the cache is 63M
> according to the debug statements in traffic.out, and in practice
> this seems to be the case. Whether this 65M goes to the ram cache,
> the http header store, or something else, I don't know.
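
To illustrate the eviction behaviour you describe, here is a toy LRU
sketch in Python. This is emphatically *not* ATS's actual
implementation, just the general idea that access order alone (and not
the headers) decides what gets evicted:

  from collections import OrderedDict

  class ToyLRUCache:
      # Minimal LRU cache: when full, evict the least recently
      # used entry. Headers play no role in the eviction order.
      def __init__(self, capacity):
          self.capacity = capacity
          self.items = OrderedDict()

      def get(self, key):
          if key not in self.items:
              return None
          self.items.move_to_end(key)       # mark as most recently used
          return self.items[key]

      def put(self, key, value):
          if key in self.items:
              self.items.move_to_end(key)
          self.items[key] = value
          if len(self.items) > self.capacity:
              self.items.popitem(last=False)  # drop least recently used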

I would say that the usable cache size scales much better than
1:1 with the storage size.
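
A rough sketch of what I mean, assuming (based on your observation)
that the overhead is a more or less fixed ~65M regardless of the
storage size. That fixed-overhead assumption is mine; I have not
verified it against the source:

  # Assumes a fixed ~65M overhead, as observed with the 128M config;
  # purely illustrative, not taken from the ATS source.
  OVERHEAD_MB = 65

  for storage_mb in (128, 256, 1024, 8192):
      usable_mb = storage_mb - OVERHEAD_MB
      print(f"{storage_mb}M storage -> ~{usable_mb}M usable "
            f"({usable_mb / storage_mb:.0%})")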

The RAM cache is, as the name suggests, stored in RAM. But RAM is
also used to hold the directory that maps the storage.
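
If I remember correctly (and, again, don't quote me on this), the
directory is sized from the storage size divided by the configured
average object size, at a small fixed cost per entry. The two
constants below are my recollection, not values I have checked
against the source:

  # Back-of-the-envelope estimate of the directory's RAM footprint.
  # Both constants are assumptions: ~10 bytes per directory entry,
  # and an average object size of 8000 bytes
  # (proxy.config.cache.min_average_object_size).
  DIR_ENTRY_BYTES = 10
  AVG_OBJECT_BYTES = 8000

  def directory_ram_mb(storage_bytes):
      entries = storage_bytes // AVG_OBJECT_BYTES
      return entries * DIR_ENTRY_BYTES / (1024 * 1024)

  print(f"~{directory_ram_mb(128 * 1024**2):.2f}M of RAM for the "
        f"directory of a 128M cache")

If those numbers are anywhere near right, the directory alone is tiny
at this scale, so it would not explain the 65M you see.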

> If compression is not turned on, then it is roughly a one to one
> mapping between the http object sizes and what the cache stores.
> There is a little overhead per http item though; is this attributed
> to the cache key and to the fact that http objects are stored as
> fragments?

There's a lot of loss here, depending on varying parameters (see the
rough sketch after this list):
* object size
* compression efficiency
* header variation
* file system overhead
* etc.
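
As a very rough illustration of how that per-object overhead eats into
the space, here is a sketch. Every constant in it is a placeholder I
made up for the example, not a value taken from ATS:

  # Purely illustrative: how per-object overhead reduces the fraction
  # of the cache that holds actual payload. All constants below are
  # made-up placeholders, not ATS values.
  usable_bytes = 63 * 1024**2        # what's left after the fixed overhead
  avg_object_bytes = 32 * 1024       # assumed average HTTP object size
  per_object_overhead = 512          # assumed key/header/fragment bookkeeping

  objects = usable_bytes // (avg_object_bytes + per_object_overhead)
  payload = objects * avg_object_bytes
  print(f"~{objects} objects; ~{payload / usable_bytes:.1%} of the "
        f"space is payload")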

But please don't quote me on this, as my understanding
of the cache is incomplete, at best.
John should have a more sound explanation, and once
he delivers it, I shall copy/paste it into the docs ;)

> Thanks,
>
> Kevin.

i

--
Igor Galić

Tel: +43 (0) 664 886 22 883
Mail: i.ga...@brainsware.org
URL: http://brainsware.org/
