Michael wrote:
> On Sunday 2 February 2025 02:07:07 Greenwich Mean Time Rich Freeman wrote:
>> On Sat, Feb 1, 2025 at 8:40 PM Dale <rdalek1...@gmail.com> wrote:
>>> Rich Freeman wrote:
>>>> Now, if you were running btrfs or cephfs or some other exotic
>>>> filesystems, then it would be a whole different matter,
>>> I could see some RAID systems having issues but not some of the more
>>> advanced file systems that are designed to handle large amounts of data.
>> Those are "RAID-like" systems, which is part of why they struggle when
>> full.  Unlike traditional RAID they also don't require identical
>> drives for replication, which can also make it tricky when they start
>> to get full and finding blocks that meet the replication requirements
>> is difficult.
>>
>> With a COW approach like btrfs you also have the issue that altering
>> the metadata requires free space.  To delete a file you first write
>> new metadata that deallocates the space for the file, then you update
>> the pointers to make it part of the disk metadata.  Since the metadata
>> is stored in a tree, updating a leaf node requires modifying all of
>> its parents up to the root, which requires making new copies of them.
>> It isn't until the entire branch of the tree is copied that you can
>> delete the old version of it.  The advantage of this approach is that
>> it is very safe, and accomplishes the equivalent of full data
>> journaling without having to write anything twice.  If the operation
>> is aborted, the tree still points at the old metadata and the
>> in-progress copies sit in free space, ignored by the filesystem, so
>> they just get overwritten the next time the operation is attempted.
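
To make that path-copying idea concrete, here's a minimal sketch in
plain Python (a toy in-memory binary tree with made-up names, nothing
like btrfs's actual on-disk b-trees):

from dataclasses import dataclass
from typing import Optional

@dataclass(frozen=True)
class Node:
    key: int
    value: str
    left: Optional["Node"] = None
    right: Optional["Node"] = None

def cow_update(root: Optional[Node], key: int, value: str) -> Node:
    # Every node on the path from the root down to the updated key is
    # copied; subtrees that aren't touched are shared with the old tree.
    if root is None:
        return Node(key, value)
    if key < root.key:
        return Node(root.key, root.value,
                    cow_update(root.left, key, value), root.right)
    if key > root.key:
        return Node(root.key, root.value,
                    root.left, cow_update(root.right, key, value))
    return Node(key, value, root.left, root.right)

old_root = cow_update(cow_update(cow_update(None, 2, "b"), 1, "a"), 3, "c")
new_root = cow_update(old_root, 3, "c2")  # copies only the 3 -> root path
assert old_root.right.value == "c"        # old tree intact until commit
assert new_root.left is old_root.left     # untouched subtree is shared

The new copies have to be allocated before the old branch can be
dropped, which is exactly why even a delete needs free space on a COW
filesystem.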
>>
>> For something like ceph it isn't really much of a downside since it
>> is intended to be professionally managed.  For something like btrfs
>> it seems like more of an issue, as it was intended to be a
>> general-purpose filesystem for desktops/etc, and so it would be
>> desirable to make it less likely to break when it runs low on space.
>> However, that's just one of many ways to break btrfs, so...  :)
>>
>> In any case, running out of space is one of those things that becomes
>> more of an issue the more complicated the metadata gets.  For
>> something simple like ext4 that just overwrites stuff in place by
>> default it isn't a big deal at all.
> I've had /var/cache/distfiles on ext4 filling up more than a dozen times,
> because I forgot to run eclean-dist and didn't get a chance to tweak
> partitions to accommodate a larger fs in time.  Similarly, I've also had
> / on ext4 filling up on a number of occasions over the years.  Both of my
> ext4 filesystems mentioned above were created with default options.  Hence
> -m, the reserved blocks % for the OS, would have been 5%.  I cannot recall
> ever losing data or ending up with a corrupted fs.  Removing some file(s)
> to create empty space allowed the file which didn't fit before to be
> written successfully, and that was that.  Resuming whatever process was
> stopped (typically emerge) allowed it to complete.
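
As an aside, you can see how big that reserved slice is on a mounted
filesystem via statvfs: f_bfree counts all free blocks, while f_bavail
leaves out the ones reserved for root.  A quick Python sketch (the "/"
path is just an example):

import os

st = os.statvfs("/")
block = st.f_frsize
total = st.f_blocks * block
free = st.f_bfree * block    # includes the root-reserved blocks
avail = st.f_bavail * block  # what unprivileged users can actually use
reserved = free - avail      # roughly the -m slice, while the fs has room
print(f"total {total / 2**30:.1f} GiB, free {free / 2**30:.1f} GiB, "
      f"available {avail / 2**30:.1f} GiB, "
      f"reserved {reserved / 2**30:.1f} GiB (~{100 * reserved / total:.1f}%)")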
>
> I also had smaller single btrfs partitions binding up a couple of times.
> I didn't lose any data, but then again these were standalone filesystems,
> not part of some ill-advised buggy btrfs RAID5 configuration.
>
> I don't deal with data volumes of the size Dale is playing with, so I
> can't comment on the suitability of different filesystems for such a use
> case.
>

This is all good info.  It's funny, I think using ext4 was the best
thing for this situation.  It works well for storing all the files I
have, plus I can fill it up pretty good if I have to.  Thing is, I could
stop the download of some videos if needed.  At least until I can get a
new drive to expand with.

I'll be getting another drive for Crypt next.  Then I should be good for
a while. 

Dale

:-)  :-) 
