While large objects are not stored contiguously, the chunk size is
configurable (as Alan pointed out). Increasing the chunk size increases
memory usage and decreases the number of seeks required to read an object.
It does not decrease the number of seeks required to write the object,
because we use a write buffer that is separately sized for write
aggregation.
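
If memory serves, the knob Alan mentions is the target fragment size in
records.config; a minimal sketch, with the 4 MB value chosen purely for
illustration (the shipped default is 1 MB, and per Alan the practical
maximum is about 16 MB):

    # records.config -- illustrative value only, not a recommendation
    CONFIG proxy.config.cache.target_fragment_size INT 4194304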

The default chunk size is set such that for spinning media (HDDs) the time
spent reading the object is dominated by transfer time, meaning that total
disk time decreases by only a small amount if the chunk size is increased.
Indeed, for SSDs the chunk size can be decreased to free up more memory for
the RAM cache and to reduce the number of different block sizes.
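
A rough way to see why is to compare per-fragment seek time against
per-fragment transfer time. A back-of-envelope sketch in Python, where the
seek and throughput figures are assumed for illustration rather than
measured from any particular drive:

    # Illustrative only: both constants are assumed figures.
    SEEK_MS = 8.0            # assumed HDD seek + rotational latency
    THROUGHPUT_MB_S = 150.0  # assumed HDD sequential transfer rate

    for frag_mb in (0.25, 0.5, 1.0, 2.0, 4.0, 8.0, 16.0):
        transfer_ms = frag_mb / THROUGHPUT_MB_S * 1000.0
        # fraction of per-fragment disk time spent seeking
        seek_fraction = SEEK_MS / (SEEK_MS + transfer_ms)
        print(f"{frag_mb:5.2f} MB fragment: "
              f"{transfer_ms:6.1f} ms transfer, "
              f"{seek_fraction:5.1%} of disk time in seeks")

Once fragments are large enough that the seek fraction is small, further
increases buy little, which is the rationale above; on an SSD the seek term
is near zero, so smaller fragments cost almost nothing in disk time.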

On Thu, Jun 1, 2017 at 5:46 AM, Alan Carroll <
solidwallofc...@yahoo-inc.com.invalid> wrote:

> You might try playing with the expected fragment size. That's tunable and
> you can get a partial effect of more contiguous fragments by making it
> larger, although I think the absolute maximum is 16M. This doesn't cost
> additional disk space as it is a maximum fragment size, not a forced one.
>
>
