[ceph-users] Re: Newer linux kernel cephfs clients is more trouble?

2022-05-13 Thread Xiubo Li
On 5/12/22 12:06 AM, Stefan Kooman wrote: Hi List, We have quite a few Linux kernel clients for CephFS. One of our customers has been running mainline kernels (CentOS 7 elrepo) for the past two years. They started out with 3.x kernels (the CentOS 7 default), but upgraded to mainline when those kernels ...

[ceph-users] Re: The last 15 'degraded' items take as many hours as the first 15K?

2022-05-13 Thread Janne Johansson
On Fri, 13 May 2022 at 08:56, Stefan Kooman wrote: > Thanks Janne and all for the insights! The reason why I half-jokingly suggested the cluster 'lost interest' in those last few fixes is that the recovery statistics included in ceph -s reported near-zero activity for so long ...
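For context, the recovery activity discussed above is typically observed with the standard Ceph CLI; a minimal sketch (these are stock Ceph commands, not commands quoted from the thread):

    # Overall cluster status, including recovery/backfill throughput lines
    ceph -s
    # Per-PG summary: counts of degraded, misplaced and recovering objects
    ceph pg stat
    # Detailed health output, e.g. which PGs are still degraded
    ceph health detail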

[ceph-users] Need advice how to proceed with [WRN] CEPHADM_HOST_CHECK_FAILED

2022-05-13 Thread Kalin Nikolov
Hello, for about a year and a half I have been supporting a Ceph cluster for my company (v15.2.3 on CentOS 8, which is already out of support). It is used only for S3, and until recently there were no serious problems I could not deal with, but the last problem that ...

[ceph-users] Multi-datacenter filesystem

2022-05-13 Thread Daniel Persson
Hi Team, We have grown out of our current solution, and we plan to migrate to multiple data centers. Our setup is a mix of radosgw data and filesystem data. But we have many legacy systems that require a filesystem at the moment, so we will probably run it for some of our data for at least 3-5 years ...