Package: e2fsprogs
Version: 1.46.5
Severity: minor

I did not find this bug mentioned in the release notes for the latest
versions at e2fsprogs.sourceforge.net/e2fsprogs-release.html, so I
assume it is still present.

I stumbled upon this because I wanted to specify -i 768k for my main
data drive (a 2 TB hard drive) as a less "aggressive" option than
-i 1M or -T largefile.

I proceeded to test this behaviour against a file container of exactly
4 GiB. As far as I know, the inode count should drop in steps of 512
for every 64k added to -i, since that corresponds to one inode-table
block per block group, which goes from 16 blocks down to 8 in this
scenario (see the sketch after the results).

Number of inodes (expected vs. actual):

-i 512k:        expected 8192  actual 8192  -> ok
-i 576k:        expected 7680  actual 7680  -> ok

-i 640k:        expected 7168  actual 6656
-i 704k:        expected 6656  actual 6144
-i 768k:        expected 6144  actual 5632
-i 832k:        expected 5632  actual 5120

-i 896k:        expected 5120  actual 5120  -> ok
-i 960k:        expected 4608  actual 4608  -> ok
-i 1024k or 1M: expected 4096  actual 4096  -> ok
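
For reference, here is a small Python sketch of how I arrived at the
"expected" numbers above. It assumes 4 KiB blocks, 128 MiB block groups
(so 32 groups for the 4 GiB container) and 256-byte inodes (16 inodes
per inode-table block), i.e. that one inode-table block per group is
dropped for each 64k added to -i. These are my assumptions about the
defaults, not values read back from mke2fs:

  # Rough sketch of how I computed the "expected" column.
  # Assumptions from my test setup, not read back from mke2fs:
  #   - 4 GiB container, 4 KiB blocks -> 32 block groups of 128 MiB each
  #   - 256-byte inodes               -> 16 inodes per inode-table block
  #   - one inode-table block per group less for every 64k added to -i
  FS_SIZE = 4 * 1024**3           # bytes in the 4 GiB container
  GROUP_SIZE = 128 * 1024**2      # default block group size with 4 KiB blocks
  GROUPS = FS_SIZE // GROUP_SIZE  # 32 groups
  INODES_PER_BLOCK = 16           # 4096-byte block / 256-byte inode

  for ratio_k in range(512, 1025, 64):
      # inode-table blocks per group: 16 at -i 512k down to 8 at -i 1M
      itb_per_group = 16 - (ratio_k - 512) // 64
      expected = GROUPS * itb_per_group * INODES_PER_BLOCK
      print(f"-i {ratio_k}k: expected {expected}")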


Also, I want to say a big thank you for all the great work on Debian
and FOSS software.
