[ceph-users] Re: Clients failing to respond to capability release

2023-10-12 Thread Tim Bishop
s. > > We would be very interested to hear about the rest of the community's experience in relation to this and I would recommend looking at your underlying OSDs, Tim, to see whether there are any timeout or uncorrectable errors. We would also be very eager to hear if these
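A minimal sketch of how those underlying-OSD checks might look on an OSD host, assuming plain SATA/SAS drives and shell access (the device name is a placeholder):

  dmesg -T | grep -iE 'timeout|uncorrect|i/o error'                 # kernel-level I/O errors and timeouts
  smartctl -a /dev/sdX | grep -iE 'uncorrect|pending|reallocated'   # SMART counters for one OSD data device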

[ceph-users] Re: Clients failing to respond to capability release

2023-09-20 Thread Tim Bishop
Hi Stefan, On Wed, Sep 20, 2023 at 11:00:12AM +0200, Stefan Kooman wrote: > On 19-09-2023 13:35, Tim Bishop wrote: > > The Ceph cluster is running Pacific 16.2.13 on Ubuntu 20.04. Almost all clients are working fine, with the exception of our backup server. This is us
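A minimal sketch of how the MDS side of this warning can be inspected, assuming a single active MDS at rank 0 (the rank is a placeholder):

  ceph health detail           # names the MDS and the client sessions flagged for late capability release
  ceph tell mds.0 session ls   # per-session details, including num_caps held by each client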

[ceph-users] Clients failing to respond to capability release

2023-09-19 Thread Tim Bishop
Hi, I've seen this issue mentioned in the past, but with older releases. So I'm wondering if anybody has any pointers. The Ceph cluster is running Pacific 16.2.13 on Ubuntu 20.04. Almost all clients are working fine, with the exception of our backup server. This is using the kernel CephFS client
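For context, a couple of client-side checks on the backup server itself, assuming the kernel client rather than ceph-fuse:

  uname -r                 # kernel version, and therefore the CephFS client version in use
  grep ceph /proc/mounts   # confirms the mount is the kernel client and shows its options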

[ceph-users] Re: Advice on balancing data across OSDs

2022-10-24 Thread Tim Bishop
e the balancer module turned on (ceph balancer status) should tell you that as well. If you have enough pgs in the bigger pools and the balancer module is on, you shouldn't have to manually reweight OSDs. -Joseph On Mon, Oct 24, 2022 at 9:13
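A short sketch of the checks Joseph describes, assuming a Pacific-era cluster where the upmap balancer is available:

  ceph balancer status       # shows whether the module is active and which mode it uses
  ceph balancer mode upmap   # upmap usually balances better than crush-compat
  ceph balancer on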

[ceph-users] Re: Advice on balancing data across OSDs

2022-10-24 Thread Tim Bishop
Hi Josh, On Mon, Oct 24, 2022 at 07:20:46AM -0600, Josh Baergen wrote: > > I've included the osd df output below, along with pool and crush rules. > Looking at these, the balancer module should be taking care of this imbalance automatically. What does "ceph balancer status" say? # ceph balan
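When reading that status output, roughly these fields matter (a sketch, not the full output):

  ceph balancer status
  # "active" should be true, "mode" should normally be upmap,
  # and "optimize_result" explains why no further moves are being proposed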

[ceph-users] Advice on balancing data across OSDs

2022-10-24 Thread Tim Bishop
Hi all, ceph version 16.2.9 (4c3647a322c0ff5a1dd2344e039859dcbd28c830) pacific (stable) We're having an issue with the spread of data across our OSDs. We have 108 OSDs in our cluster, all identical disk size, same number in each server, and the same number of servers in each rack. So I'd hoped we
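Two commands that usually make this kind of imbalance visible, assuming the pg_autoscaler and balancer modules shipped with Pacific:

  ceph osd df                     # %USE and PGS per OSD; a wide spread points at too few PGs or an inactive balancer
  ceph osd pool autoscale-status  # per-pool pg_num and the autoscaler's target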

[ceph-users] Re: telemetry.ceph.com certificate expired

2020-04-15 Thread Tim Bishop
rify failed')],)",),)); > > Seems certificate expired yesterday (14th April). > > Cheers > Eneko -- Tim Bishop http://www.bishnet.net/tim/ PGP Key: 0x6C226B37FDF38D55
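For anyone hitting the same thing, a hedged workaround sketch until the certificate on telemetry.ceph.com is renewed:

  ceph telemetry off   # stop the periodic report attempts that fail TLS verification
  ceph telemetry on    # re-enable later; newer releases may ask you to re-accept the sharing license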