[ceph-users] Re: Hardware for new OSD nodes.

2020-10-24 Thread Dave Hall
Eneko and all, Regarding my current BlueFS Spillover issues, I've just noticed in https://docs.ceph.com/en/latest/rados/configuration/bluestore-config-ref/ that it says: If there is only a small amount of fast storage available (e.g., less than a gigabyte), we recommend using it as a WAL dev
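For anyone else chasing this, a minimal sketch of how one might spot spillover per OSD from the OSD host, assuming the ceph CLI and the OSDs' local admin sockets are reachable and that the osd_ids list below is adjusted for the host (it is a placeholder). It reads the bluefs counters from `ceph daemon osd.N perf dump`; a non-zero slow_used_bytes is metadata that has spilled onto the slow (HDD) device:

```python
#!/usr/bin/env python3
"""Rough per-OSD BlueFS spillover check for a single OSD host (a sketch,
not a polished tool).  Assumes the ceph CLI and the OSD admin sockets are
available locally; osd_ids is a hypothetical placeholder list."""
import json
import subprocess

osd_ids = [0, 1, 2]  # adjust to the OSD ids actually running on this host

for osd in osd_ids:
    # 'ceph daemon osd.N perf dump' returns JSON including a 'bluefs' section
    out = subprocess.check_output(
        ["ceph", "daemon", f"osd.{osd}", "perf", "dump"])
    bluefs = json.loads(out)["bluefs"]
    db_used = bluefs["db_used_bytes"]
    db_total = bluefs["db_total_bytes"]
    slow_used = bluefs.get("slow_used_bytes", 0)
    flag = "SPILLOVER" if slow_used > 0 else "ok"
    print(f"osd.{osd}: db {db_used/2**30:.1f}/{db_total/2**30:.1f} GiB, "
          f"slow {slow_used/2**30:.1f} GiB  [{flag}]")
```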

[ceph-users] Re: Hardware for new OSD nodes.

2020-10-23 Thread Brian Topping
Yes, the UEFI problem with mirrored mdraid boot is well-documented. I’ve generally been working with BIOS partition maps, which do not have the single point of failure UEFI has (/boot can be mounted as mirrored; any of them can be used as non-RAID by GRUB). But BIOS maps have problems as well with
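For reference, a rough sketch of the mirrored-/boot arrangement described above, with placeholder device names (/dev/sda2, /dev/sdb2) rather than anything from this thread; the key detail is the 1.0 metadata format, which keeps the md superblock at the end of each member so GRUB can read either partition as a plain filesystem even without md assembly:

```python
#!/usr/bin/env python3
"""Sketch of a mirrored /boot as described above, driven from Python purely
for illustration.  Partition names are hypothetical; run the equivalent
commands by hand on real hardware."""
import subprocess

def run(*cmd):
    print("+", " ".join(cmd))
    subprocess.run(cmd, check=True)

# RAID1 across two boot partitions; metadata 1.0 places the superblock at
# the end of each member, so either member is readable as a plain fs.
run("mdadm", "--create", "/dev/md0",
    "--level=1", "--raid-devices=2", "--metadata=1.0",
    "/dev/sda2", "/dev/sdb2")
run("mkfs.ext4", "/dev/md0")   # then mount as /boot via fstab
```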

[ceph-users] Re: Hardware for new OSD nodes.

2020-10-23 Thread Eneko Lacunza
Hi Dave, On 22/10/20 at 19:43, Dave Hall wrote: On 22/10/20 at 16:48, Dave Hall wrote: (BTW, Nautilus 14.2.7 on Debian non-container.) We're about to purchase more OSD nodes for our cluster, but I have a couple of questions about hardware choices. Our original nodes were 8 x 12T

[ceph-users] Re: Hardware for new OSD nodes.

2020-10-23 Thread Eneko Lacunza
Hi Brian, On 22/10/20 at 18:41, Brian Topping wrote: On Oct 22, 2020, at 10:34 AM, Anthony D'Atri wrote: - You must really be sure your RAID card is dependable. (Sorry, but I have seen so many management problems with top-tier RAID cards that I avoid them like the plague). This. I’d

[ceph-users] Re: Hardware for new OSD nodes.

2020-10-23 Thread Eneko Lacunza
Hi Anthony, On 22/10/20 at 18:34, Anthony D'Atri wrote: Yeah, I didn't think about a RAID10 really, although there wouldn't be enough space for 8x300GB = 2400GB WAL/DBs. 300 is overkill for many applications anyway. Yes, but he has spillover with 1600GB/12 WAL/DB. Seems he can make use
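Working the numbers from this exchange (back-of-the-envelope only, using the figures already quoted in the thread): splitting the 1.6TB NVMe evenly gives each OSD far less than the ~300GB being discussed, which is consistent with the spillover being seen.

```python
# Quick check of the numbers in this thread (sketch, not authoritative):
# how much DB space each OSD gets when one NVMe is split evenly, versus
# the ~300 GB per-OSD figure being discussed.
nvme_gb = 1600          # the 1.6 TB NVMe card from the original nodes
for n_osds in (8, 12):
    per_osd = nvme_gb / n_osds
    print(f"{n_osds} OSDs sharing {nvme_gb} GB -> {per_osd:.0f} GB per OSD DB")

# 8 OSDs -> 200 GB each; 12 OSDs -> ~133 GB each.  Either way,
# 8 x 300 GB = 2400 GB will not fit on a single 1600 GB device,
# which is the mismatch under discussion.
print("wanted:", 8 * 300, "GB vs available:", nvme_gb, "GB")
```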

[ceph-users] Re: Hardware for new OSD nodes.

2020-10-22 Thread Eneko Lacunza
Hi Brian, On 22/10/20 at 17:50, Brian Topping wrote: On Oct 22, 2020, at 9:14 AM, Eneko Lacunza wrote: Don't stripe them; if one NVMe fails you'll lose all OSDs. Just use 1 NVMe drive for 2 SAS drives and provision 300GB for WAL/DB for each OSD (see rel

[ceph-users] Re: Hardware for new OSD nodes.

2020-10-22 Thread Eneko Lacunza
Hi Dave, On 22/10/20 at 16:48, Dave Hall wrote: Hello, (BTW, Nautilus 14.2.7 on Debian non-container.) We're about to purchase more OSD nodes for our cluster, but I have a couple of questions about hardware choices. Our original nodes were 8 x 12TB SAS drives and a 1.6TB Samsung NVMe car

[ceph-users] Re: Hardware for new OSD nodes.

2020-10-22 Thread Dave Hall
Eneko, On 10/22/2020 11:14 AM, Eneko Lacunza wrote: Hi Dave, On 22/10/20 at 16:48, Dave Hall wrote: Hello, (BTW, Nautilus 14.2.7 on Debian non-container.) We're about to purchase more OSD nodes for our cluster, but I have a couple of questions about hardware choices. Our original nodes

[ceph-users] Re: Hardware for new OSD nodes.

2020-10-22 Thread Brian Topping
> On Oct 22, 2020, at 10:34 AM, Anthony D'Atri wrote: > >> - You must really be sure your RAID card is dependable. (Sorry, but I have seen so many management problems with top-tier RAID cards that I avoid them like the plague). > > This. I’d definitely avoid a RAID card. If I can do adva

[ceph-users] Re: Hardware for new OSD nodes.

2020-10-22 Thread Anthony D'Atri
> Yeah, didn't think about a RAID10 really, although there wouldn't be enough > space for 8x300GB = 2400GB WAL/DBs. 300 is overkill for many applications anyway. > > Also, using a RAID10 for WAL/DBs will: > - make OSDs less movable between hosts (they'd have to be moved all > together -

[ceph-users] Re: Hardware for new OSD nodes.

2020-10-22 Thread Anthony D'Atri
> Also, any thoughts/recommendations on 12TB OSD drives? For price/capacity > this is a good size for us. Last I checked, HDD prices seemed linear from 10-16TB. Remember to include the cost of the drive bay, i.e. the cost of the chassis, the RU(s) it takes up, power, switch ports, etc. I’ll gu
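A toy illustration of that point, with made-up prices rather than anything quoted in the thread: once the chassis slot (and power, switch ports, etc.) is amortized per bay, larger drives tend to look better per TB than the sticker price alone suggests.

```python
# Illustrative only: the drive's sticker price is not the whole story once
# the chassis slot, rack space and power are amortized across it.  All the
# numbers below are hypothetical placeholders, not quotes.
def effective_cost_per_tb(drive_price, drive_tb,
                          chassis_price, bays,
                          per_bay_overhead=0.0):
    """Drive price plus this bay's share of the chassis (and any other
    per-bay overhead such as power or switch ports), divided by capacity."""
    per_bay = chassis_price / bays + per_bay_overhead
    return (drive_price + per_bay) / drive_tb

# Hypothetical comparison: 12 TB vs 16 TB drives in a 12-bay chassis.
for tb, price in [(12, 300), (16, 420)]:
    cost = effective_cost_per_tb(price, tb, 6000, 12, 150)
    print(f"{tb} TB drive: {cost:.1f} $/TB all-in")
```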

[ceph-users] Re: Hardware for new OSD nodes.

2020-10-22 Thread Brian Topping
> On Oct 22, 2020, at 9:14 AM, Eneko Lacunza wrote: > > Don't stripe them; if one NVMe fails you'll lose all OSDs. Just use 1 NVMe > drive for 2 SAS drives and provision 300GB for WAL/DB for each OSD (see > related threads on this mailing list about why that exact size). > > This way if a
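As a concrete (but hypothetical) sketch of that layout: one NVMe shared by two HDD OSDs, one ~300GB partition per OSD handed to ceph-volume as the block.db device. Device names are placeholders and the script only prints the commands it would run:

```python
#!/usr/bin/env python3
"""Sketch of the layout suggested above: one NVMe shared by two HDD OSDs,
with a ~300 GB pre-created NVMe partition per OSD.  Device names are
hypothetical; nothing is executed unless the run line is uncommented."""
import subprocess

# (data HDD, pre-created ~300 GB NVMe partition) pairs for this host
layout = [
    ("/dev/sdb", "/dev/nvme0n1p1"),
    ("/dev/sdc", "/dev/nvme0n1p2"),
]

for data_dev, db_part in layout:
    cmd = ["ceph-volume", "lvm", "create",
           "--data", data_dev,
           "--block.db", db_part]   # with no separate --block.wal, the WAL
                                    # lives on the DB device as well
    print(" ".join(cmd))
    # subprocess.run(cmd, check=True)   # uncomment to actually provision
```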