Rich Freeman wrote:
> On Thu, Nov 14, 2024 at 6:10 PM Dale <rdalek1...@gmail.com> wrote:
>> The biggest downside to the large drives available now, even if SMART
>> tells you a drive is failing, you likely won't have time to copy the
>> data over to a new drive before it fails.  On a 18TB drive, using
>> pvmove, it can take a long time to move data.
> Very true.  This is why I'm essentially running RAID6.  Well, that and
> for various reasons you don't want to allow writes to Ceph without at
> least one drive worth of redundancy, so having an extra replica means
> that you can lose one and remain read-write, and then if you lose a
> second during recovery you might be read-only but you still have data
> integrity.  (Don't want to get into details - it is a Ceph-specific
> issue.)


I think I did some math on this once.  I'm not positive on this, and it
could vary depending on how fast a given system can move data.  I think
about 8TB is as large as you want if SMART gives you 24 hours notice
and you see that notice quickly enough to act on it.  Anything beyond
that and you may not have enough time to move the data, if the data is
even still good. 
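The back-of-the-envelope math is simple enough to sketch.  The 200 MB/s
figure below is an assumption (an optimistic sustained sequential rate for
a modern HDD); pvmove on a busy or failing drive will usually run slower,
which only makes the window tighter.

```python
# Rough sketch: hours needed to copy a whole drive at a sustained rate.
# rate_mb_s=200 is an assumed optimistic HDD throughput, not a measured value.

def copy_hours(capacity_tb, rate_mb_s=200):
    """Hours to copy capacity_tb terabytes at rate_mb_s megabytes/second."""
    return capacity_tb * 1e12 / (rate_mb_s * 1e6) / 3600

for tb in (8, 18):
    print(f"{tb} TB at 200 MB/s: ~{copy_hours(tb):.0f} hours")
# 8 TB comes out around 11 hours -- tight but doable inside a 24 hour
# warning; 18 TB comes out around 25 hours, already past the window.
```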


>> I don't even want to think what it would cost to put
>> all my 100TBs or so on SSD or NVME drives.  WOW!!!
> # kubectl rook-ceph ceph osd df class ssd
> ID  CLASS  WEIGHT    REWEIGHT  SIZE     RAW USE  DATA     OMAP     META     AVAIL    %USE   VAR   PGS  STATUS
>  8    ssd   6.98630   1.00000  7.0 TiB  1.7 TiB  1.7 TiB   63 MiB  3.9 GiB  5.3 TiB  24.66  1.04  179      up
>  4    ssd   1.74660   1.00000  1.7 TiB  465 GiB  462 GiB   16 MiB  2.5 GiB  1.3 TiB  25.99  1.10   45      up
> 12    ssd   1.74660   1.00000  1.7 TiB  547 GiB  545 GiB   30 MiB  2.1 GiB  1.2 TiB  30.57  1.29   52      up
>  1    ssd   6.98630   1.00000  7.0 TiB  1.7 TiB  1.7 TiB   50 MiB  4.2 GiB  5.3 TiB  24.42  1.03  177      up
>  5    ssd   6.98630   1.00000  7.0 TiB  1.8 TiB  1.8 TiB   24 MiB  5.0 GiB  5.2 TiB  25.14  1.07  180      up
>  3    ssd   1.74660   1.00000  1.7 TiB  585 GiB  583 GiB   18 MiB  2.0 GiB  1.2 TiB  32.70  1.39   57      up
> 21    ssd   1.74660   1.00000  1.7 TiB  470 GiB  468 GiB   27 MiB  1.9 GiB  1.3 TiB  26.26  1.11   52      up
>  9    ssd   1.74660   1.00000  1.7 TiB  506 GiB  504 GiB   11 MiB  2.0 GiB  1.3 TiB  28.29  1.20   49      up
> 18    ssd   1.74660   1.00000  1.7 TiB  565 GiB  563 GiB   16 MiB  1.7 GiB  1.2 TiB  31.59  1.34   55      up
> 10    ssd   1.74660   1.00000  1.7 TiB  490 GiB  489 GiB   28 MiB  1.6 GiB  1.3 TiB  27.42  1.16   53      up
> 22    ssd   1.74660   1.00000  1.7 TiB  479 GiB  478 GiB   19 MiB  1.7 GiB  1.3 TiB  26.80  1.14   50      up
> 19    ssd  13.97249   1.00000   14 TiB  2.3 TiB  2.3 TiB   87 MiB  5.2 GiB   12 TiB  16.81  0.71  262      up
>                         TOTAL   49 TiB   12 TiB   12 TiB  388 MiB   34 GiB   37 TiB  23.61
>
> I'm getting there.  Granted, at 3+2 erasure coding that's only a bit
> over 30TiB usable space.
>


The thing about my data, it's mostly large video files.  If I were
storing documents or something, then SSD would be a good option.  Plus,
I mostly write once, then it either sits there a while or gets read on
occasion.  I checked and I have some 56,000 videos.  That doesn't
include YouTube videos.  This is also why I wanted to use that checksum
script. 
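The idea behind such a checksum script is simple: record a hash per file
once, then re-run later and compare to catch silent corruption.  The
script referenced above isn't shown here, so this is just a minimal
sketch of the concept; the manifest format (hash, two spaces, path, as
sha256sum uses) and the directory argument are illustrative choices.

```python
# Sketch of a bit-rot detector for a large media library: walk a tree
# and print "sha256  path" lines that can be saved and diffed later.
import hashlib
import os
import sys

def sha256_of(path, bufsize=1 << 20):
    """Return the SHA-256 hex digest of a file, read in 1 MiB chunks."""
    h = hashlib.sha256()
    with open(path, "rb") as f:
        while chunk := f.read(bufsize):
            h.update(chunk)
    return h.hexdigest()

def walk_hashes(root):
    """Yield (path, digest) for every regular file under root."""
    for dirpath, _dirs, files in os.walk(root):
        for name in files:
            p = os.path.join(dirpath, name)
            yield p, sha256_of(p)

if __name__ == "__main__":
    for path, digest in walk_hashes(sys.argv[1]):
        print(f"{digest}  {path}")
```

Saving that output once and re-running it after a scrub or a drive move
gives a plain-text manifest you can compare with diff, which suits a
write-once, read-rarely collection like this.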

I do wish there was an easy way to make columns work when we copy and
paste into email.  :/ 

Dale

:-)  :-) 
