On Wed, Feb 11, 2015 at 6:43 AM, Janne Johansson <icepic...@gmail.com>
wrote:

> You can invent how many journals and whatevers you like to hope to prevent
> the state from being inconsistent, but broken or breaking sectors will
> sooner or later force you to run over all files and read/check them, and
> in that case
> you will need lots of ram anyhow.
>

The data in this thread seems to show that this is not true.

4TB fs with 1,642 files = 83MB of RAM, ~60 seconds

4TB fs with 3,900,811 files = 137MB of RAM, > 30 minutes

(Sure, on some platforms 137MB is a lot of RAM, but I don't think those are
the platforms we're talking about.)

Granted, it's only two data points, but when the number of files went up by
~2375x, time to fsck went up by at least ~30x, while RAM usage went up by
only ~1.7x.  It seems an increase in the number of files requires only a
modest increase in RAM.  (Small disclaimer: we don't know the platforms
involved.)
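The arithmetic behind those ratios, using only the figures quoted in this
thread (the 30-minute figure is a lower bound, since the second fsck took
"> 30 minutes"):

```python
# Back-of-envelope scaling check, figures taken from this thread.
files = (1_642, 3_900_811)   # file counts for the two 4TB filesystems
ram_mb = (83, 137)           # peak fsck RAM usage, MB
time_s = (60, 30 * 60)       # ~60 seconds vs "> 30 minutes" (lower bound)

print(f"files grew ~{files[1] / files[0]:.0f}x")          # ~2376x
print(f"RAM grew ~{ram_mb[1] / ram_mb[0]:.1f}x")          # ~1.7x
print(f"time grew at least ~{time_s[1] / time_s[0]:.0f}x")  # >= 30x
```

So RAM grows far more slowly than either file count or fsck runtime here.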

On Wed, Feb 11, 2015 at 8:58 AM, Jan Stary <h...@stare.cz> wrote:

> FAQ4 still says
>
>   If you make very large partitions, keep in mind that performing
>   filesystem checks using fsck(8) requires about 1M of RAM per gigabyte of
>   filesystem size
>   ^^^^^^^^^^^^^^^
>
> Does that still apply?
>

A 4TB filesystem would mean 4GB of RAM, and neither fsck in the examples
above was close to that.

-- 
andrew fabbro
and...@fabbro.org
blog: https://raindog308.com
