On 04/28 03:12, Kent Fredric wrote:
> On Sun, 26 Apr 2020 18:15:51 +0200
> tu...@posteo.de wrote:
>
> > Filesystem      Size  Used Avail Use% Mounted on
> > /dev/root       246G   45G  189G  20% /
>
> Given that (Size - Used) is roughly 200G, it suggests to me that
> perhaps some process somewhere is creating and deleting a lot of
> temporary files on this device (or maybe simply re-writing the same
> file multiple times).
>
> From userspace, this would be invisible, as the "new" file would be
> in a new location on the disk, and the "old" file would be invisible
> and marked "can be overwritten".
>
> So if you did:
>
>   for i in {0..200}; do
>     cp a b
>     rm a
>     mv b a
>   done
>
> where "a" is a 1G file, I'd expect this to have a *ceiling* of 200G
> that would turn up in the fstrim output, as once you reached iteration
> 201, "can be overwritten" would allow the SSD to go back and rewrite
> over the space used in iteration 1.
>
> The whole time, the visible disk usage in df -h would never exceed
> 46G.
>
> I don't know if this is what is happening; I don't have an SSD and
> don't get to use fstrim.
>
> But based on what you've said, the results aren't *too* surprising.
>
> Though it's possible the hardware has some internal magic to elide
> some writes, potentially making the "cp" action incur very few
> writes. That would show up in the smartctl data, but ext4 might not
> know anything about it, so perhaps fstrim only indicates what ext4
> *tracked* as being cleaned, while much less cleanup was actually
> required on the hardware.
>
> That would explain the difference between the smartctl and fstrim
> results.
>
> Maybe compare smartctl output over time with
> /sys/fs/ext4/<device>/session_write_kbytes and see if one grows
> faster than the other? :)
>
> My local session_write_kbytes is currently at 709G, the partition
> it's for is only 552G with 49G free, and it's been up for 33 days,
> so roughly "21G of writes a day".
>
> And uh, lifetime_write_kbytes is about 18TB. Yikes.
>
> ( compiling things involves a *LOT* of ephemeral data )
>
> Also, probably don't assume the amount of free space on your
> partition is all the physical device has at its disposal. It seems
> possible that on the hardware, the total pool of "free blocks" is
> arbitrarily usable by the device for wear levelling, and a TRIM
> command to that device could plausibly report more blocks trimmed
> than your current partition size, depending on how it's implemented.
>
> But indeed, lots of speculation here on my part :)
Hi Kent,

Thank you very much for your research and your explanations! :)

Prompted by some statements I found online, I did an interesting
little experiment:

  fstrim
  fstrim
  reboot
  fstrim

Reported amount of trimmed data for each fstrim run:

  200.2GiB
  0.0GiB
  -- (reboot)
  200.2GiB

The reboot seems to be "worth" the same amount of fstrimmed data as
one week of daily updates and recompilations. ;) (By the way: this
all happens on an ext4 filesystem.)

Background, according to the reports I found online: the kernel keeps
track of what has already been trimmed and avoids re-trimming the
same data. This knowledge is lost when the PC is power-cycled or
rebooted.

I think the amount of fstrimmed data reported does not reflect the
amount of data that is physically trimmed by the SSD controller. The
kernel only passes "possible candidates for being trimmed" to the SSD
controller, which is the real master behind all this. And as you
wrote: the maximum amount of "possible data for being trimmed" is all
the free space of the filesystem.

Slightly related: do you know the purpose of these values
(smartctl -a <device>)?

  Data Units Read:     656,599 [336 GB]
  Data Units Written:  702,251 [359 GB]
  Host Read Commands:  4,316,042
  Host Write Commands: 3,080,180

Is this the raw amount of data I have sent to the SSD? Looks like a
lot...

Cheers!
Meino
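
P.S.: One piece I can puzzle out myself: the bracketed GB figures are
consistent with the assumption (my assumption, from the NVMe
convention, not from your mail) that one "data unit" is 1,000 * 512
bytes:

  $ echo $(( 702251 * 512 * 1000 / 1000000000 ))  # Data Units Written -> GB
  359
  $ echo $(( 656599 * 512 * 1000 / 1000000000 ))  # Data Units Read -> GB
  336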
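
P.P.S.: I will try the comparison you suggested. A minimal sketch of
what I have in mind, assuming the SSD shows up as /dev/nvme0 for
smartctl and the ext4 partition as nvme0n1p2 under /sys/fs/ext4/
(both names are placeholders; the real devices on my box may differ):

  #!/bin/bash
  # Log both write counters once an hour; if smartctl's counter grows
  # more slowly than ext4's, the hardware is eliding writes that ext4
  # tracked.
  DEV=/dev/nvme0      # device as seen by smartctl (placeholder)
  FS=nvme0n1p2        # ext4 partition under /sys/fs/ext4/ (placeholder)
  while true; do
      date
      smartctl -a "$DEV" | grep 'Data Units Written'
      echo "session_write_kbytes: $(cat /sys/fs/ext4/$FS/session_write_kbytes)"
      sleep 3600
  done >> write-compare.log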