I have a 9 node cluster with 4x of those 20 TB WD drives
(model WUH722020BLE6L4 firmware PQGNW540) and 4x 20 TB Toshiba drives
(model MG10ACA20TE firmware 0104) totaling 8x 20 TB drives per node.
Each OSD also has a 300 GB NVMe partition for DB/WAL. I have not
experienced any issues, but I am still testing different compression
options before putting it into production to replace a single-node ZFS file
server. I will only be using it for CephFS, and since it's my first Ceph
cluster I admittedly don't have the experience to troubleshoot it as
thoroughly as you have in https://tracker.ceph.com/issues/71927
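
For context, these are the kinds of per-pool BlueStore compression knobs I
have been comparing (the pool name "cephfs_data" is just a placeholder for
whatever your CephFS data pool is called):

```shell
# Pick an algorithm: lz4, snappy, zlib, or zstd
ceph osd pool set cephfs_data compression_algorithm zstd

# Pick a mode: none, passive (compress only hinted writes),
# aggressive (compress unless hinted not to), or force (always compress)
ceph osd pool set cephfs_data compression_mode aggressive

# Optionally, only keep compressed blobs that shrink to at most this
# fraction of their original size (otherwise store uncompressed)
ceph osd pool set cephfs_data compression_required_ratio 0.875
```

These only affect data written after the settings change, so I rewrite the
test dataset between runs when comparing algorithms.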

On Fri, Jul 4, 2025 at 6:42 AM Konstantin Shalygin <k0...@k0ste.ru> wrote:

> Hi,
>
> > On 4 Jul 2025, at 13:14, Marc <m...@f1-outsourcing.eu> wrote:
> >
> > How is it worse than any other hdd of that size?
>
> At the moment we have just under 3000pcs of Toshiba MG10ACA and we have
> not registered any such issues
>
>
>
> k
> _______________________________________________
> ceph-users mailing list -- ceph-users@ceph.io
> To unsubscribe send an email to ceph-users-le...@ceph.io
>


-- 
Ryan Sleeth
