[ceph-users] Re: Issues with Ceph Cluster Behavior After Migration from ceph-ansible to cephadm and Upgrade to Quincy/Reef

2025-03-19 Thread Jeremi-Ernst Avenant
…throughput. On production it goes from ~25 GBps to ±100 Mbps and on testbed from 700 Mbps to 100 Mbps." Regards On Mon, Feb 17, 2025 at 11:54 PM Eugen Block wrote: > Hi, > > that's an interesting observation, I haven't heard anything like that > yet. More respon…
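
For anyone trying to reproduce a throughput comparison like the one quoted above, a raw sequential write/read baseline can be taken directly against a pool with rados bench; the pool name, runtime and concurrency below are placeholders, not values from the original report:

  # 30-second 4 MiB sequential-write benchmark against a hypothetical test pool
  rados bench -p testpool 30 write -b 4M -t 16 --no-cleanup
  # matching sequential-read pass, then delete the benchmark objects
  rados bench -p testpool 30 seq -t 16
  rados -p testpool cleanup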

[ceph-users] Issues with Ceph Cluster Behavior After Migration from ceph-ansible to cephadm and Upgrade to Quincy/Reef

2025-02-17 Thread Jeremi-Ernst Avenant
…else encountered these behaviors? Are there any known bugs or workarounds that could help restore expected OSD state tracking and balancer efficiency? Any insights would be greatly appreciated! Thanks, -- Jeremi-Ernst Avenant, Mr., Cloud Infrastructure Specialist, Inter-University Institute…
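
Not specific to this cluster, but the commands below are the usual starting points for seeing what cephadm and the balancer currently report about OSD state and data distribution (a generic sketch, none of it taken from the original post):

  # OSD daemons as cephadm tracks them, versus the OSD map / CRUSH view
  ceph orch ps --daemon-type osd
  ceph osd tree
  # balancer mode, whether it is active, and how even the utilisation is
  ceph balancer status
  ceph osd df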

[ceph-users] CephFS limp mode when fullest OSD is between nearfull & backfillfull value

2025-06-19 Thread Jeremi-Ernst Avenant
…ceph tell osd.$osd injectargs '--osd_backfillfull_ratio=0.90' ceph tell osd.$osd injectargs '--osd_full_ratio=0.95' URL to the issue: https://tracker.ceph.com/issues/70129 Any ideas would be greatly appreciated. -- Jeremi-Ernst Avenant, Mr., Cloud Infrastructure Specialist, Inter-University Institute…
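
Worth noting for context: since Luminous the nearfull/backfillfull/full thresholds that gate I/O are stored in the OSDMap, so the cluster-wide commands below (using the same values as the injectargs above, plus the standard 0.85 nearfull default) are normally the ones that take effect; the per-OSD osd_*_ratio config options are generally only honoured at cluster creation:

  # change the ratios cluster-wide and persistently in the OSDMap
  ceph osd set-nearfull-ratio 0.85        # default value, shown for completeness
  ceph osd set-backfillfull-ratio 0.90
  ceph osd set-full-ratio 0.95
  # confirm what the cluster is actually enforcing
  ceph osd dump | grep -i ratio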

[ceph-users] Re: CephFS limp mode when fullest OSD is between nearfull & backfillfull value

2025-07-28 Thread Jeremi-Ernst Avenant
…backfillfull_ratio 55% and fullest disk 53%. Then it still goes into limp mode. Regards On Thu, Jun 19, 2025 at 9:03 AM Jeremi-Ernst Avenant wrote: > Good day > > We've been struggling with this issue since we upgraded past 16.2.11 to > 16.2.15 and now up to Reef 18.2.7…
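
When the fullest OSD sits just below the configured thresholds like this, it usually helps to capture the enforced ratios, per-OSD utilisation and the filesystem state together when updating the tracker issue (generic commands, not taken from the original mail):

  # thresholds currently enforced by the OSDMap
  ceph osd dump | grep -i ratio
  # per-OSD utilisation, to identify the fullest device
  ceph osd df tree
  # nearfull/backfillfull warnings and the CephFS / MDS state
  ceph health detail
  ceph fs status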