Hey Marek,

If you have been using the read balancer, you may see `pg_upmap_primary` mappings in the osdmap even after switching the balancer mode back to plain upmap. Changing the mode to "upmap" only means that no further read balancing will occur; at this time, any existing mappings need to be cleared manually if you no longer want them. These mappings are a Reef feature, so they are likely what the cluster is detecting.
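To quickly confirm whether any such mappings are present, you can grep the osdmap dump for them. A minimal sketch, assuming the plain-text output of `ceph osd dump` lists each mapping on its own "pg_upmap_primary" line (please verify against your own dump output):

    # Count the pg_upmap_primary entries currently in the osdmap
    # (assumes one "pg_upmap_primary <pgid> <osd>" line per mapping)
    ceph osd dump | grep -c '^pg_upmap_primary'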
To remove the mappings, first run:

    ceph osd dump

Then, for each PG that has a pg_upmap_primary mapping, you may remove it with:

    ceph osd rm-pg-upmap-primary <pgid>

(see the sketch at the very end of this message for a way to loop over all of them). You may alternatively find this script helpful:
https://github.com/ljflores/ceph_read_balancer_2023/blob/main/remove_pg_upmap_primaries.sh

In the next point releases of Reef and Squid, we will offer a new command to remove all the mappings at once (ceph osd rm-pg-upmap-primary-all), which is tracked here: https://tracker.ceph.com/issues/67179

Thanks,
Laura

On Fri, Mar 21, 2025 at 11:44 AM Marek Szuba <scriptkid...@wp.pl> wrote:

> Dear fellow Ceph users,
>
> I run a Ceph cluster providing CephFS to a medium-sized Linux server
> farm. Originally we used the kernel driver (which on the distro we use
> shows itself to the cluster as a Luminous client) to mount the file
> system, but at the time of the upgrade to Squid we became aware of
> the data-corruption bug associated with the use of root squash and
> subsequently switched to (the Squid version of) ceph-fuse. Furthermore,
> I took advantage of the switch to bump the OSD require-min-compat-client
> to Reef and then switch the balancer mode to upmap-read.
>
> Weeks passed and I became aware that, despite all the attempted tuning,
> certain workloads which performed fine using the kernel driver perform
> _extremely_ poorly using ceph-fuse. A decision has therefore been made
> to relocate the servers which absolutely must have root squash to a
> different network-storage solution, switch root squash off for all
> CephFS clients and revert to the kernel driver.
>
> Unfortunately, while removing client_mds_auth_caps from CephFS
> required_client_features and switching the balancer mode back to upmap
> went without any problems, "ceph osd set-require-min-compat-client
> luminous" fails with
>
> Error EPERM: osdmap current utilizes features that require reef; cannot
> set require_min_compat_client below that to luminous
>
> Is there a way of making the osdmap Luminous-compatible again without
> losing any data stored on the cluster?
>
> Thank you in advance for your help!
>
> --
> MS

--
Laura Flores
She/Her/Hers
Software Engineer, Ceph Storage <https://ceph.io>
Chicago, IL
lflo...@ibm.com | lflo...@redhat.com
M: +17087388804
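As mentioned above, here is a minimal sketch of a loop that clears all of the mappings in one go, in the same spirit as the linked script (not a copy of it). It assumes each mapping appears in the `ceph osd dump` output as a "pg_upmap_primary <pgid> <osd>" line; please check the format on your cluster before running it:

    # Remove every pg_upmap_primary mapping listed in the osdmap
    # (assumes one "pg_upmap_primary <pgid> <osd>" line per mapping)
    ceph osd dump | awk '/^pg_upmap_primary/ {print $2}' | while read -r pgid; do
        ceph osd rm-pg-upmap-primary "$pgid"
    done

Once the loop has run, `ceph osd dump` should show no remaining pg_upmap_primary lines, and lowering require-min-compat-client should hopefully no longer be blocked by them.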