Mark Knecht wrote:
>
>
> On Sat, Feb 1, 2025 at 2:16 PM Dale <rdalek1...@gmail.com
> <mailto:rdalek1...@gmail.com>> wrote:
> >
> > <SNIP>
> >
> > Hard to believe no one has more up to date info on what is safe given
> > drives are so large now and file system improvements.  I'd think having
> > a TB or two would be plenty, regardless of percentage, but not real
> > sure.  Don't want to risk data testing the theory either.
> >
> > Update:  The new drive came in.  It passed all the tests and is online.
> > dfc looks like this now for Data.
> >
>
> OK, I hate to even try to answer this, and first, I have no storage design
> experience but I suspect it depends a lot on YOUR usage. I see Rich 
> provided an answer while I was writing this so you'll want to follow any
> advice he might have given. He's smart. I'm not.
>
> My guess is that while you 'store' a lot of data you don't actually
> 'change' a lot of data. For instance, in the past, you seemed to be
> downloading YouTube videos. If you've saved them, never to watch them
> or change them, then other than protecting yourself from losing them,
> they go onto the disk and never move. If that's your usage model I
> don't know why you can't go right up to 100% minus just a little
> (say, 10x the size of your average file). After all, you could always
> remove a few files to temp storage, optimize the disk and then re-add
> the files.
>
> On the other hand, if you're deleting files in the middle of the
> drive, I could see cases where new files get fragmented, and stuff
> you put on late in life gets strewn around the drive, which doesn't
> sound great.
>
> In a completely different usage case, like you're running a bunch of
> databases that are filling your drive, removing old records and
> adding new records all the time, then depending on how your disk
> optimizations run you might need a huge amount of space to gather the
> databases back together. However, even in that case you could move a
> complete database to a temp location, optimize the drive and then
> re-add the database.
>
> So, as is often the case, in my mind...IT DEPENDS! ;-)
>
> Best wishes,
> Mark


This is sort of my thinking as well.  I do update/delete/move files on
occasion, but it is usually done in small chunks and mostly by hand.  I
use ext4, and given how slowly the data on this drive changes, I'm sure
the file system has more than enough time to rearrange things.  I have
in the past run the ext4 defrag tool.  It usually reports back a low
score, usually 0.  The files that are fragmented are really small, and
there are usually only 4 or 5 of them. 
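For anyone wanting to run the same check, this is roughly how the ext4
fragmentation tools are used (a sketch; the path /home/dale/Data is
just a placeholder, and I'm assuming e4defrag and filefrag from
e2fsprogs are installed):

```shell
# Whole-filesystem fragmentation score (0 is best); e4defrag -c only
# reports, it doesn't move data, but a full scan needs root.
# The mount point below is a placeholder:
#   sudo e4defrag -c /home/dale/Data

# Individual files can be checked without root using filefrag, which
# prints how many extents (fragments) a file occupies:
tmpfile=$(mktemp)
dd if=/dev/zero of="$tmpfile" bs=1M count=4 status=none
filefrag "$tmpfile"     # e.g. "/tmp/tmp.XXXX: 1 extent found"
rm -f "$tmpfile"
```

A low extent count per file, like the scores above, means the
allocator is still keeping files mostly contiguous.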

I still plan to expand before reaching 90%.  Thing is, something could
happen that makes me have to wait, so I was curious how long I could
wait.  If going past a certain point could cause data problems, I
wanted to know what that limit was. 
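For what it's worth, one hard limit on ext4 is the reserved-block
percentage rather than fragmentation: by default about 5% of the
filesystem is held back for root, so non-root writes start failing
before df shows 100%. A rough sketch of checking that (the device name
/dev/sdX1 is a placeholder for the actual data partition):

```shell
# Show how many blocks ext4 keeps reserved for root (default ~5%);
# /dev/sdX1 is a placeholder:
#   sudo tune2fs -l /dev/sdX1 | grep -i 'reserved block count'
#
# On a pure data volume the reserve can be shrunk, e.g. to 1%:
#   sudo tune2fs -m 1 /dev/sdX1

# Current fill level as df reports it, no root needed (prints a
# bare percentage, e.g. 42):
df --output=pcent / | tail -n 1 | tr -dc '0-9'
```

That only tells you when writes fail outright; the 90% figure is more
about leaving the allocator enough free space to avoid fragmentation,
which is a softer limit.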

Now to see what Rich thinks.  I bet he has some ideas.  ;-) 

Dale

:-)  :-) 
