@Alan: Fragment sizes are not fixed, right? By the formula in the Stripe Directory section <https://docs.trafficserver.apache.org/en/latest/developer-guide/cache-architecture/architecture.en.html#stripe-directory>, we have adaptive sizes:

( size + 1 ) * 2 ^ ( CACHE_BLOCK_SHIFT + 3 * big )
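To make that concrete, here is a quick sketch of my own (not ATS source) of what the formula yields. It assumes CACHE_BLOCK_SHIFT is 9, i.e. 512-byte cache blocks, as the architecture doc describes:

// Sketch only, not ATS code: approximate object size encoded by a
// directory entry, per the Stripe Directory formula above.
// Assumes CACHE_BLOCK_SHIFT == 9 (512-byte cache blocks).
#include <cstdint>
#include <cstdio>

constexpr uint64_t approx_size(uint64_t size, uint64_t big) {
  constexpr uint64_t CACHE_BLOCK_SHIFT = 9;
  // (size + 1) * 2^(CACHE_BLOCK_SHIFT + 3 * big)
  return (size + 1) << (CACHE_BLOCK_SHIFT + 3 * big);
}

int main() {
  // e.g. size = 3, big = 1  ->  (3 + 1) * 2^(9 + 3) = 16384 bytes
  std::printf("%llu\n", static_cast<unsigned long long>(approx_size(3, 1)));
  return 0;
}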
On Fri, Jun 2, 2017 at 2:05 AM, John Plevyak <jplev...@acm.org> wrote:

> While large objects are not stored contiguously, the chunk size is
> configurable (as Alan pointed out). Increasing the chunk size increases
> memory usage and decreases the number of seeks required to read an
> object. It does not decrease the number of seeks required to write the
> object, because we use a write buffer which is separately sized for
> write aggregation.
>
> The default chunk size is set such that for spinning media (HDDs) the
> amount of time spent reading the object is dominated by transfer time,
> meaning that total disk time will decrease by only a small amount if
> the chunk size is increased. Indeed, for SSDs the chunk size can be
> decreased to free up more memory for the RAM cache and to decrease the
> number of different block sizes.
>
> On Thu, Jun 1, 2017 at 5:46 AM, Alan Carroll <
> solidwallofc...@yahoo-inc.com.invalid> wrote:
>
> > You might try playing with the expected fragment size. That's tunable
> > and you can get a partial effect of more contiguous fragments by
> > making it larger, although I think the absolute maximum is 16M. This
> > doesn't cost additional disk space as it is a maximum fragment size,
> > not a forced one.
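For anyone else following along: the knob Alan and John are describing should be proxy.config.cache.target_fragment_size (the name, the 1 MB default, and the 16 MB ceiling are from my reading of the docs, so please correct me if I'm off). A records.config override raising it from the default to 4 MB would look like:

CONFIG proxy.config.cache.target_fragment_size INT 4194304

As John notes, raising it trades memory for fewer read seeks on HDDs, while on SSDs it can be lowered to give more memory back to the RAM cache.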