@Alan: I saw your answer to my issue #2023
<https://github.com/apache/trafficserver/issues/2023>. To handle range
requests, we need an efficient way to determine which fragments to read,
and fragment tables serve that purpose.

For example: if we serve a range request for bytes 1000-2000, the
fragment table tells us that the Nth fragment (counting from the current
one) contains the requested data. By applying the hash function to the
request's key N times, we can locate that fragment efficiently.
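To make the lookup concrete, here is a minimal sketch of the idea. The `next_key` helper below is a hypothetical stand-in for ATS's key-rehash step (the names, the MD5 choice, and the 1 MB fragment size are all illustrative assumptions, not the actual ATS API):

```python
import hashlib

FRAGMENT_SIZE = 1048576  # assumed 1 MB target fragment size (illustrative)


def next_key(key: bytes) -> bytes:
    """Hypothetical stand-in for ATS's key-rehash step: derive the
    next fragment's key by hashing the previous fragment's key."""
    return hashlib.md5(key).digest()


def fragment_key(first_key: bytes, offset: int,
                 frag_size: int = FRAGMENT_SIZE) -> bytes:
    """Apply the hash N times to reach the key of the fragment that
    contains byte `offset`, where N = offset // frag_size."""
    n = offset // frag_size
    key = first_key
    for _ in range(n):
        key = next_key(key)
    return key


# Byte 0 lies in fragment 0, so the first key is returned unchanged.
k0 = fragment_key(b"doc-key", 0)

# A byte two fragments in requires two hash applications.
k2 = fragment_key(b"doc-key", 2 * FRAGMENT_SIZE)
```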

My question is: if we split an object into fragments of equal size, do
fragment tables become unnecessary? Because we can compute the fragment
indices directly:

1000 / fragmentSize <= N <= 2000 / fragmentSize
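With equal-sized fragments that computation really is just integer division; a minimal sketch (the 512-byte fragment size below is only an illustrative value):

```python
def fragment_range(first_byte: int, last_byte: int, frag_size: int):
    # With fixed-size fragments, the fragments covering bytes
    # first_byte..last_byte follow directly from integer division --
    # no per-object fragment table is needed for the lookup.
    return first_byte // frag_size, last_byte // frag_size


# Bytes 1000-2000 with (illustrative) 512-byte fragments span
# fragments 1 through 3.
lo, hi = fragment_range(1000, 2000, 512)
```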



On Fri, Jun 2, 2017 at 8:29 PM, Alan Carroll <
solidwallofc...@yahoo-inc.com.invalid> wrote:

> Correct. For a specific fragment, the size is adjusted to be approximately
> the size of the content in the fragment. When we talk of "fragment size" we
> generally mean the maximum allowed size.
>
>
>
> On Thursday, June 1, 2017, 8:39:00 PM CDT, Anh Le Duc (2) <
> anh...@vng.com.vn> wrote:
>
> @Alan: Fragment sizes are not fixed, right? By the formula in the
> section Stripe Directory
> <https://docs.trafficserver.apache.org/en/latest/developer-guide/cache-architecture/architecture.en.html#stripe-directory>,
> we have adaptive sizes:
>
> ( *size* + 1 ) * 2 ^ ( CACHE_BLOCK_SHIFT + 3 * *big* )
>
>
> On Fri, Jun 2, 2017 at 2:05 AM, John Plevyak <jplev...@acm.org> wrote:
>
> > While large objects are not stored contiguously, the chunk size is
> > configurable (as Alan pointed out). Increasing the chunk size
> > increases memory usage and decreases the number of seeks required to
> > read an object. It does not decrease the number of seeks required to
> > write the object, because we use a write buffer which is separately
> > sized for write aggregation.
> >
> > The default chunk size is set such that for spinning media (HDDs) the
> > amount of time spent reading the object is dominated by transfer
> > time, meaning that total disk time will decrease by only a small
> > amount if the chunk size is increased. Indeed, for SSDs the chunk
> > size can be decreased to free up more memory for the RAM cache and to
> > decrease the number of different block sizes.
> >
> > On Thu, Jun 1, 2017 at 5:46 AM, Alan Carroll <
> > solidwallofc...@yahoo-inc.com.invalid> wrote:
> >
> > > You might try playing with the expected fragment size. That's
> > > tunable, and you can get a partial effect of more contiguous
> > > fragments by making it larger, although I think the absolute
> > > maximum is 16M. This doesn't cost additional disk space, as it is a
> > > maximum fragment size, not a forced one.
> > >
> > >
> >
>
>
>




-- 

*Anh Le (Mr.)*

*Senior Software Engineer*

*Zalo Technical Dept., Zalo Group, **VNG Corporation*

5th floor, D29 Building, Pham Van Bach Street, Hanoi, Vietnam

*M:* (+84) 987 816 461

*E:* anh...@vng.com.vn

*W: *www.vng.com.vn

*“Make the Internet change Vietnamese lives”*
