[ceph-users] Adding OSDs results in slow ops, inactive PGs

2024-01-17 Thread Ruben Vestergaard
… pool, placed on spinning rust, some 200-ish disks distributed across 13 nodes. I'm not sure if other pools break, but that particular 4+2 EC pool is rather important, so I'm a little wary of experimenting blindly. Any thoughts on where to look next? Thanks, Ruben Vestergaard
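
A first pass at the "where to look" question usually starts with identifying the stuck PGs and the OSDs reporting slow ops. A minimal sketch wrapping two read-only ceph CLI queries from Python (exact output formatting varies by release; this is illustrative, not a fix for the problem described above):

    import subprocess

    def ceph(*args: str) -> str:
        """Run a read-only ceph CLI query and return its text output."""
        return subprocess.run(
            ["ceph", *args], check=True, capture_output=True, text=True
        ).stdout

    # Which PGs are stuck inactive, and which OSDs do they map to?
    print(ceph("pg", "dump_stuck", "inactive"))

    # Health detail usually names the OSDs reporting slow ops directly.
    print(ceph("health", "detail"))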

[ceph-users] XFS on top of RBD, overhead

2024-02-02 Thread Ruben Vestergaard
…being transferred over the network. Overhead, sure, but nowhere near what I expected, which was 4 MiB per block of data "hit" in the backend. Is the RBD client performing partial object reads? Is that even a thing? Cheers, Ruben Vestergaard

[ceph-users] Re: XFS on top of RBD, overhead

2024-02-02 Thread Ruben Vestergaard
On Fri, Feb 02 2024 at 07:51:36 -0700, Josh Baergen wrote:
> On Fri, Feb 2, 2024 at 7:44 AM Ruben Vestergaard wrote:
> > Is the RBD client performing partial object reads? Is that even a thing?
> Yup! The rados API has both length and offset parameters for reads (https://docs.ceph.com/en/latest
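
The length/offset behaviour Josh describes is visible directly in the python-rados bindings, where Ioctx.read() takes both parameters. A minimal sketch (the pool name and object name below are placeholders, not taken from the thread):

    import rados

    # Connect with defaults from ceph.conf; adjust conffile/keyring for your site.
    cluster = rados.Rados(conffile="/etc/ceph/ceph.conf")
    cluster.connect()
    try:
        ioctx = cluster.open_ioctx("rbd")  # placeholder pool name
        # Read 64 KiB starting 1 MiB into a (placeholder) 4 MiB backing object.
        # Only the requested byte range travels over the network,
        # not the whole 4 MiB object.
        data = ioctx.read("rbd_data.abc123.0000000000000000",
                          length=64 * 1024, offset=1024 * 1024)
        print(f"read {len(data)} bytes")
        ioctx.close()
    finally:
        cluster.shutdown()

librbd issues this kind of ranged read against the backing objects on behalf of the client, which matches what Ruben observed: a small read inside XFS does not cost a full 4 MiB object transfer.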