There’s more to it than bottlenecking.
RAS, man. RAS (reliability, availability, serviceability).
> On Jul 12, 2024, at 3:58 PM, John Jasen wrote:
>
> How large of a ceph cluster are you planning on building, and what network
> cards/speeds will you be using?
>
> A lot of the talk about RAID HBA pass-through being sub-optimal probably
> won't be your bottleneck unless you're aiming for a large cluster at
> 100Gb/s speeds, in my opinion.
How large of a ceph cluster are you planning on building, and what network
cards/speeds will you be using?
A lot of the talk about RAID HBA pass-through being sub-optimal probably
won't be your bottleneck unless you're aiming for a large cluster at
100Gb/s speeds, in my opinion.
On Fri, Jul 12, 2024, [...] wrote:
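For a rough sense of where the bottleneck actually sits, a back-of-the-envelope sketch in Python; the figures are assumptions for illustration (roughly Gen4 NVMe and a 100Gb/s NIC), not numbers from this thread:

    # Back-of-the-envelope: does the NIC or the drive stack run out first?
    # Assumed figures (illustrative only): 16 NVMe drives per node,
    # ~3 GB/s sequential per Gen4 drive, one 100 Gb/s NIC.
    drives_per_node = 16
    per_drive_gbps = 3 * 8        # ~3 GB/s per drive, expressed in Gb/s
    nic_gbps = 100

    aggregate_drive_gbps = drives_per_node * per_drive_gbps
    print(f"aggregate drive bandwidth: {aggregate_drive_gbps} Gb/s")  # 384 Gb/s
    print(f"network bandwidth:         {nic_gbps} Gb/s")
    # With these assumptions the NIC saturates long before the drives do,
    # which is why pass-through vs. RAID HBA overhead rarely shows up
    # until the network gets much faster.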
> Okay it seems like we don't really have a definitive answer on whether it's
> OK to use a RAID controller or not and in what capacity.
It’s okay to use it if that’s what you have.
For new systems, eschew the things. They cost money for something you can do
with MD for free, and they’re finicky.
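As a concrete illustration of "do it with MD for free", a minimal sketch that builds a mirror with mdadm from Python; /dev/sdb and /dev/sdc are hypothetical spare devices, and note that Ceph OSDs themselves want raw individual drives rather than an MD array:

    import subprocess

    # Minimal software-RAID sketch with MD, assuming /dev/sdb and /dev/sdc are
    # spare devices (hypothetical names). This covers the mirroring a RAID HBA
    # would otherwise do in hardware, e.g. for an OS volume.
    subprocess.run(
        ["mdadm", "--create", "/dev/md0",
         "--level=1", "--raid-devices=2",
         "/dev/sdb", "/dev/sdc"],
        check=True,
    )

    # Record the array so it assembles at boot (Debian-style path shown;
    # some distros use /etc/mdadm.conf instead).
    scan = subprocess.run(["mdadm", "--detail", "--scan"],
                          capture_output=True, text=True, check=True)
    with open("/etc/mdadm/mdadm.conf", "a") as conf:
        conf.write(scan.stdout)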
Okay it seems like we don't really have a definitive answer on whether it's OK
to use a RAID controller or not and in what capacity.
Passthrough meaning:
Are you saying that it's OK to use a raid controller where the disks are in
non-RAID mode?
Are you saying that it's OK to use a raid controller [...]
>
> Isn’t the supported/recommended configuration to use an HBA if you have to
> but never use a RAID controller?
That may be something I added to the docs. My contempt for RAID HBAs knows no
bounds ;)
Ceph doesn’t care. Passthrough should work fine; I’ve done that for tens of
thousands of drives.
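To make "passthrough should work fine" concrete, a sketch of turning individually presented NVMe drives into OSDs; it assumes a bootstrapped cluster and that the data drives enumerate as /dev/nvme*n1 (adjust the glob and exclude any OS drive), so treat the device pattern as a placeholder:

    import glob
    import subprocess

    # Sketch: once the controller presents drives individually, each NVMe
    # namespace can become its own OSD via ceph-volume. Assumes a running
    # cluster and that /dev/nvme*n1 are data drives (exclude any OS drive).
    for dev in sorted(glob.glob("/dev/nvme*n1")):
        subprocess.run(["ceph-volume", "lvm", "create", "--data", dev],
                       check=True)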
I’ve replaced R640 drive backplanes (off eBay) to use U.2 NVMe instead of RAID. Yes, I had to replace the backplane in order to talk to NVMe, and that work also removes the exposure to RAID.
peter
On 7/11/24, 2:25 PM, "Drew Weaver" wrote:
Hi,
Isn’t the supported/recommended configuration to use an HBA [...]
Hi,
>I don't think the motherboard has enough PCIe lanes to natively connect all
>the drives: the RAID controller effectively functioned as an expander, so you
>needed fewer PCIe lanes on the motherboard.
>As the quickest way forward: look for passthrough / single-disk / RAID0
>options, in that order.
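One quick way to confirm the controller really is in passthrough / single-disk mode is to check whether the OS enumerates each drive separately; a minimal check using lsblk's JSON output (standard lsblk columns):

    import json
    import subprocess

    # List top-level block devices. In passthrough you should see each
    # physical drive with its own model/transport; a single large PERC
    # virtual disk means the controller is still doing RAID.
    out = subprocess.run(
        ["lsblk", "-J", "-d", "-o", "NAME,SIZE,MODEL,TRAN"],
        capture_output=True, text=True, check=True,
    )
    for dev in json.loads(out.stdout)["blockdevices"]:
        print(dev["name"], dev["size"], dev.get("model"), dev.get("tran"))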
Hi,
Isn’t the supported/recommended configuration to use an HBA if you have to but
never use a RAID controller?
The backplane is already NVMe, since the drives currently installed in the
system are NVMe.
Also, I was looking through some diagrams of the R750 and it appears that if you
order [...]
Hi,
I'm a bit confused by your question: the 'drive bays' or backplane are the same
for an NVMe system; it's either a SATA/SAS/NVMe backplane or an NVMe-only backplane.
I don't understand why you believe that my configuration has to be 3.5" as it
isn't. It's a 16x2.5" chassis with two H755N controllers.
Agree with everything Robin wrote here. RAID HBAs FTL. Even in passthrough
mode, it’s still an [absurdly expensive] point of failure, but a server in the
rack is worth two on backorder.
Moreover, I’m told that it is possible to retrofit with cables and possibly an
AIC mux / expander.
e.g. ht[...]
On Thu, Jul 11, 2024 at 01:16:22PM, Drew Weaver wrote:
> Hello,
>
> We would like to repurpose some Dell PowerEdge R750s for a Ceph cluster.
>
> Currently the servers have one H755N RAID controller for each 8 drives. (2
> total)
The N variant (H755N) specifically? So you have 16 NVMe drives?
Retrofitting the guts of a Dell PE R7xx server is not straightforward. You
could be looking into replacing the motherboard, the backplane, and so
forth.
You can probably convert the H755N card to present the drives to the OS, so
you can use them for Ceph. This may be AHCI mode, pass-through mode, [...]
Hi Drew,
as far as I know, Dell's drive bays for RAID controllers are not the same as the
drive bays for CPU-attached disks. In particular, I don't think they have that
config for 3.5" drive bays, and your description sounds a lot like that's what
you have. Are you trying to go from 16x2.5" HDD to [...]