Rich Freeman wrote:
> On Sun, Jan 26, 2025 at 1:15 PM Dale <rdalek1...@gmail.com> wrote:
>> Still, given the large file systems in use, where should I draw the
>> line and remain safe data wise?  Can I go to 90% if needed?  95%?  Is
>> that to much despite the large amount of space remaining?  Does the
>> percentage really matter?  Is it more about having enough space for
>> the file system to handle its internal needs?  Is for example 1 or
>> 2TBs of free space enough for the file system to work just fine?
>
> It really depends on the filesystem.  For ext4 you can go all the way
> to zero bytes free with no issues, other than application issues from
> being unable to create files.  (If you're talking about the root
> filesystem, then the "application" includes the OS so that isn't
> ideal.)
>
> Some filesystems don't handle running out of space well.  These are
> usually filesystems that handle redundancy internally, but you really
> need to look into the specifics.  Are you running something other than
> ext4 here?
>
> The space free almost always takes into account filesystem overhead.
> The issue is generally whether the filesystem can actually free up
> space once it runs out completely (COW might want to allocate space
> just to delete things, due to the need to not overwrite metadata in
> place).
>
> --
> Rich
On the root file system, I have it set to keep some reserved for root, admin if you will, and the same for /var I think.  I always leave that at the default for anything OS related.  I think I left a little for /home itself, but I'm not sure.  However, for Data, Crypt and some others, I set the reserve to 0.  Nothing on those is used by root or any OS-related data.  It's the -m option for ext4, I think.

My thinking is that even if I went to 95%, it should be OK given my usage.  It might even be OK at 99%.  Thing is, I know at some point something is going to happen.  I've just been wondering what that point is and what it will do.  Oh, I do use ext4.

I might add, when something goes weird and the messages file is getting written to a lot and fills up /var, even after using up all the reserve, the system still works but it can't record new data.  It doesn't crash my system or anything bad, but it can't be good to try to write to a file when there is no space left.  I know it would be different for Data and Crypt, but still, when full, something has to happen.

Most modern file systems, I think, can handle this a lot better than back when drives were smaller and file systems were less robust.  Heck, ext4, btrfs, zfs and others are designed nowadays to handle some pretty bad situations, but I still don't want to push my luck too much.

Thanks for the info.  I can't believe I didn't mention before that I was using ext4.  o_O

Dale

:-)  :-)
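P.S.  For anyone following along, a quick sketch of the reserve being discussed.  The tune2fs commands below are real, but they need root and a real device; /dev/sdb1 is just a placeholder name, and the 16 TB size is an assumption picked to match the "large file systems" in the thread:

```shell
# /dev/sdb1 is a placeholder; both tune2fs commands need root.
# Show the current root reserve on an ext4 filesystem:
#   tune2fs -l /dev/sdb1 | grep -i 'Reserved block count'
# Drop the reserve to 0% on a data-only filesystem (same idea as mke2fs -m 0):
#   tune2fs -m 0 /dev/sdb1

# Back-of-the-envelope: what the default 5% reserve costs on a 16 TB filesystem.
fs_bytes=$((16 * 1000 * 1000 * 1000 * 1000))   # 16 TB, decimal units
reserve_bytes=$((fs_bytes * 5 / 100))
echo "$((reserve_bytes / 1000 / 1000 / 1000)) GB reserved"
# prints: 800 GB reserved
```

So at these sizes the stock 5% reserve is a lot of space to set aside on filesystems that root never writes to, which is why -m 0 makes sense for the Data and Crypt mounts.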