Hi Experts,
In my new cluster, each of my storage nodes has 6x Samsung PM1643 SSDs behind a P420i RAID controller in HBA mode. My main concern is whether the P420i in HBA mode becomes a bottleneck for IOPS and throughput. Each PM1643 supports about 30k write IOPS, so 6 PM1643 drives give roughly 180k IOPS (30k * 6).
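Working the numbers out helps frame the question. Below is a minimal back-of-the-envelope sketch in Python; only the 30k write IOPS per PM1643 comes from the post above, while the controller-side ceilings (6Gb SAS lanes, a PCIe 3.0 x8 host link, an assumed small-block IOPS limit for the ASIC) are placeholder assumptions to be checked against the actual P420i data sheet.

# Back-of-the-envelope sizing sketch. All controller figures below are
# assumptions to verify against the P420i documentation; only the 30k
# write IOPS per PM1643 comes from the original post.
DRIVES = 6
IOPS_PER_DRIVE = 30_000          # 4K random write IOPS claimed per PM1643
SEQ_MBPS_PER_DRIVE = 1_400       # assumed sequential write MB/s per drive

# Assumed controller-side ceilings (verify for your exact P420i setup):
SAS_LINK_MBPS = 600              # ~6 Gb/s SAS lane, one lane per drive
CTRL_PCIE_MBPS = 7_800           # PCIe 3.0 x8 host interface, theoretical
CTRL_IOPS_CEILING = 300_000      # assumed small-block IOPS limit of the ASIC

agg_iops = DRIVES * IOPS_PER_DRIVE
agg_mbps = DRIVES * min(SEQ_MBPS_PER_DRIVE, SAS_LINK_MBPS)

print(f"aggregate 4K write IOPS wanted  : {agg_iops:,}")
print(f"controller IOPS ceiling (assumed): {CTRL_IOPS_CEILING:,} "
      f"-> {'OK' if agg_iops <= CTRL_IOPS_CEILING else 'BOTTLENECK'}")
print(f"aggregate sequential MB/s        : {agg_mbps:,}")
print(f"controller PCIe MB/s (assumed)   : {CTRL_PCIE_MBPS:,} "
      f"-> {'OK' if agg_mbps <= CTRL_PCIE_MBPS else 'BOTTLENECK'}")

With these assumed figures the IOPS side looks comfortable, and the more likely pinch point is sequential bandwidth through the SAS lanes and the controller itself.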
Hi Experts,
I am trying to find out whether significant write performance improvements are achievable by separating the WAL/DB in a Ceph cluster with all-SSD OSDs. I have a cluster with 40 SSDs (Samsung PM1643 1.8 TB enterprise SSDs): 10 storage nodes, each with 4 OSDs.
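For context, separating WAL/DB mainly pays off when the DB device is meaningfully faster than the data device; with identical PM1643s everywhere, colocating them is the common layout. Purely as a sketch (the device paths are hypothetical placeholders, not a recommendation), this is how an OSD with a split block.db would be created with ceph-volume if a faster device were added:

# Sketch: create a BlueStore OSD whose RocksDB (and WAL, by default) live
# on a separate, faster device. Device paths are hypothetical placeholders.
import subprocess

data_dev = "/dev/sdb"        # the PM1643 used for the OSD data (block)
db_dev = "/dev/nvme0n1p1"    # a faster device for block.db

subprocess.run(
    ["ceph-volume", "lvm", "create",
     "--bluestore",
     "--data", data_dev,
     "--block.db", db_dev],
    check=True,
)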
…they are used as compute nodes, but another problem we've had is spontaneous node restarts during software RAID 1 (MD) sync, also under high workloads. Many controllers had to be replaced during the warranty period.
BR,
Sebastian
> On 1 Jan 2023, at 16:48, hosseinz8...@yahoo.com wrote:
>
Hi Experts,
For my new Ceph cluster, my existing storage nodes have a Smart Array P420i RAID controller (HP Gen8). I have 6 enterprise SSD disks in every storage node. From your experience, is activating HBA mode better than RAID 0? I know that the RAID controller and its cache are not
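One quick sanity check is to look at how the P420i presents the disks to the OS. A minimal sketch follows; the assumption that Smart Array logical drives report a "LOGICAL VOLUME" model string while pass-through disks show the real Samsung model is worth verifying on the Gen8 boxes themselves.

# Sketch: list SCSI block devices and their reported vendor/model.
# In HBA/pass-through mode each PM1643 should appear with its real Samsung
# model string; behind single-disk RAID 0 logical drives you typically see
# the controller's "LOGICAL VOLUME" model instead (assumption - verify).
import glob, pathlib

for dev in sorted(glob.glob("/sys/block/sd*")):
    base = pathlib.Path(dev, "device")
    vendor = (base / "vendor").read_text().strip()
    model = (base / "model").read_text().strip()
    print(f"{pathlib.Path(dev).name}: {vendor} {model}")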
…cache was gone, until the optimization proceeds. This is not an enterprise device; you should never use it with Ceph 🙂
k
Sent from my iPhone
> On 27 Dec 2022, at 16:41, hosseinz8...@yahoo.com wrote:
>
> Thanks Anthony. I have a cluster with QLC SSD disks (Samsung QVO 860). The
> cluster works for
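To reproduce the cache-exhaustion behaviour described above, a sustained write that runs longer than the drive's SLC cache is enough; on a QVO-class disk the bandwidth log will show a sharp drop, while an enterprise SSD holds steady. A hedged sketch follows; the device path, runtime, and queue depth are placeholders, and it writes raw to the device, so only point it at a disk whose data can be destroyed.

# Sketch: sustained sequential write, long enough to outrun a QLC drive's
# SLC cache. DANGER: this writes raw to the device below and destroys data.
import subprocess

device = "/dev/sdX"   # hypothetical placeholder - use a scratch disk only

subprocess.run(
    ["fio",
     "--name=sustained-write",
     f"--filename={device}",
     "--rw=write",
     "--bs=1M",
     "--direct=1",
     "--ioengine=libaio",
     "--iodepth=16",
     "--time_based", "--runtime=900",    # 15 min, enough to exhaust the cache
     "--log_avg_msec=1000",
     "--write_bw_log=sustained-write"],  # per-second bandwidth log to plot later
    check=True,
)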
Hello Everyone.
I want to create a Ceph cluster with 50 OSDs. I am looking for the best enterprise SSD model with high IOPS at an acceptable price. Which disk brand and model would you recommend?
Thanks.
…socket), only that OSD executes it; there is no replication. Replication is a function of PGs.
Thus, this is a narrowly focused tool with both unique advantages and disadvantages.
> On Dec 26, 2022, at 12:47 PM, hosseinz8...@yahoo.com wrote:
>
> Hi experts,I want to know, when I exe
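To see that distinction in practice, one can compare the single-OSD micro-benchmark with a pool-level rados bench, which does go through PG placement and therefore includes replication. A minimal sketch, assuming a throwaway test pool (the pool name below is a placeholder), since rados bench writes real objects:

# Sketch: single-OSD micro-benchmark vs. a pool-level benchmark that
# includes PG placement and replication. Pool name is a placeholder.
import subprocess

osd_id = 0
test_pool = "benchpool"   # hypothetical test pool with your normal replica count

# 1) Exercises only osd.<id>'s local objectstore - no replication involved.
subprocess.run(["ceph", "tell", f"osd.{osd_id}", "bench"], check=True)

# 2) Writes 4 KiB objects through the cluster for 60 s with 16 concurrent ops,
#    so every write is replicated according to the pool's size (e.g. 3x).
subprocess.run(
    ["rados", "bench", "-p", test_pool, "60", "write", "-b", "4096", "-t", "16"],
    check=True,
)

The gap between the two numbers is roughly the cost of networking, PG processing, and replication on top of the raw device.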
Hi experts,
I want to know: when I execute the ceph tell osd.x bench command, is replica 3 considered in the benchmark or not? I mean, for example with replica 3, when I execute the tell bench command, is replica 1 of the bench data written to osd.x, replica 2 to osd.y, and replica 3 to osd.z?
@Marc
Thanks Marc. fio was executed and the result is attached to this email. But what is confusing me is that tell bench sometimes returns, for example, 2~10 and sometimes returns 170~200. If the disk is worn out, why does it sometimes return the higher value? Currently this OSD is weighted 0, so there is no load on it.
@Marc
Thanks Marc. I am executing your profile via fio and will send you the result. But what is confusing me is that tell bench sometimes returns, for example, 2~10 and sometimes returns 170~200. If the disk is worn out, why does it sometimes return the higher value? Currently this OSD is weighted 0, so there is no load on it.
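One way to pin down that variance is simply to repeat the measurement and look at the spread. A minimal sketch follows; it assumes the ceph CLI is reachable from the host, that the JSON output of tell bench contains a bytes_per_sec field (worth confirming on the release in use), and the OSD id is a placeholder.

# Sketch: run "ceph tell osd.N bench" repeatedly and summarise the spread.
# Assumes the JSON output contains "bytes_per_sec" (check on your release).
import json, statistics, subprocess

osd_id = 12          # hypothetical OSD id - the one that swings between runs
runs = 5
results_mb_s = []

for _ in range(runs):
    out = subprocess.run(
        ["ceph", "tell", f"osd.{osd_id}", "bench", "-f", "json"],
        check=True, capture_output=True, text=True,
    ).stdout
    results_mb_s.append(json.loads(out)["bytes_per_sec"] / 1e6)

print(f"osd.{osd_id}: min={min(results_mb_s):.0f} MB/s  "
      f"median={statistics.median(results_mb_s):.0f} MB/s  "
      f"max={max(results_mb_s):.0f} MB/s")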
Hi experts.
In one of my Ceph clusters, some of my OSDs show dramatically slow IOPS when executing the tell bench command. All of the OSDs are SSDs, and we have 2 types of SSD disks: 1) ordinary SSDs and 2) enterprise SSDs.
In my cluster, there are several OSDs of the ordinary SSD type with very slow IOPS.
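A quick way to check whether the slow OSDs line up with the ordinary-SSD population is to print each OSD's reported device id next to a bench result. A minimal sketch follows; it assumes ceph osd metadata exposes a device_ids field and that a single tell bench run per OSD is representative, both worth double-checking.

# Sketch: print each OSD's reported device id/model next to its bench result,
# to see whether the slow OSDs match the "ordinary" SSD population.
# Field names like "device_ids" may differ between Ceph releases - verify.
import json, subprocess

def ceph_json(*args):
    out = subprocess.run(["ceph", *args, "-f", "json"],
                         check=True, capture_output=True, text=True).stdout
    return json.loads(out)

for osd in ceph_json("osd", "ls"):                 # list of OSD ids
    meta = ceph_json("osd", "metadata", str(osd))
    bench = ceph_json("tell", f"osd.{osd}", "bench")
    mb_s = bench.get("bytes_per_sec", 0) / 1e6
    print(f"osd.{osd:<3} {str(meta.get('device_ids', '?')):<40} {mb_s:8.0f} MB/s")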