[ceph-users] Re: Single ceph client usage with multiple ceph cluster

2021-12-14 Thread Markus Baier
Hello Mosharaf, yes, that's no problem. On all of my clusters I did not have a ceph.conf in the /etc/ceph folders on my nodes at all. I have a .conf, .conf, .conf ... configuration file in the /etc/ceph folder, one config file for each cluster. The same goes for the different key files, e.g. .mo
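
A minimal sketch of the layout Markus describes, assuming two hypothetical cluster names, prod and backup:

    /etc/ceph/prod.conf                        # config for cluster "prod"
    /etc/ceph/prod.client.admin.keyring
    /etc/ceph/backup.conf                      # config for cluster "backup"
    /etc/ceph/backup.client.admin.keyring

    # the ceph CLI picks the matching files via --cluster
    ceph --cluster prod status
    ceph --cluster backup status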

[ceph-users] How to clean up data in OSDS

2021-12-14 Thread Nagaraj Akkina
Hello Team, After testing our cluster we removed and recreated all ceph pools, which actually cleaned up all users and buckets, but we can still see data on the disks. Is there an easy way to clean up all OSDs without actually removing and reconfiguring them? What can be the best way to solve this
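
A rough sketch of commands for checking whether space is actually being reclaimed after pool deletion; not a definitive procedure, and the device path is a placeholder:

    ceph df        # per-pool and raw usage
    rados df       # object counts per pool
    ceph osd df    # per-OSD utilization; space is freed asynchronously

    # only if an OSD really must be wiped and rebuilt (destructive):
    # ceph-volume lvm zap --destroy /dev/sdX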

[ceph-users] Re: Support for alternative RHEL derivatives

2021-12-14 Thread Manuel Lausch
We switched some months ago from CentOS 7 and 8 to Oracle Linux 8. They promise to be 100% compatible with RHEL. I hope the provided ceph packages will work with this in the future. I would not be happy with the container solution. Manuel On Mon, 13 Dec 2021 15:01:06 + Benoit Knecht wrote:

[ceph-users] Ceph RESTful APIs and managing Cephx users

2021-12-14 Thread Michał Nasiadka
Hello, I’ve been investigating using Ceph RESTful API in Pacific to create Cephx users (along with a keyring) but it seems the functionality is not there. The documentation shows /api/user calls - but those seem to be related to Ceph Dashboard users? Is there a plan to add that functionality?
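
Until such an endpoint exists, Cephx users can be created from the CLI; a hedged sketch, with the client name, caps and pool purely as placeholders:

    # create a cephx user and write its keyring
    ceph auth get-or-create client.myapp \
        mon 'allow r' osd 'allow rw pool=mypool' \
        -o /etc/ceph/ceph.client.myapp.keyring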

[ceph-users] Re: Ceph container image repos

2021-12-14 Thread Gregory Farnum
I generated a quick doc PR so this doesn't trip over other users: https://github.com/ceph/ceph/pull/44310. Thanks all! -Greg On Mon, Dec 13, 2021 at 10:59 AM John Petrini wrote: > > "As of August 2021, new container images are pushed to quay.io > registry only. Docker hub won't receive new conten
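
For reference, pulling from the quay.io registry looks roughly like this (the tag is illustrative):

    podman pull quay.io/ceph/ceph:v16.2.7    # docker pull works the same way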

[ceph-users] Re: Request for community feedback: Telemetry Performance Channel

2021-12-14 Thread Laura Flores
Hi Gregory, It was intentional that I sent this email to the ceph-users list. The telemetry module is designed as a relationship between developers and users, where developers decide on the metrics to collect, and users decide whether or not to opt in. Since the performance channel will be a new a
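
For anyone wanting to check their current opt-in state before the new channel lands, the existing telemetry module commands are, as a sketch:

    ceph telemetry status    # whether telemetry is enabled and which channels are on
    ceph telemetry show      # preview exactly what would be reported
    ceph telemetry on        # opt in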

[ceph-users] Shall i set bluestore_fsck_quick_fix_on_mount now after upgrading to 16.2.7 ?

2021-12-14 Thread Christoph Adomeit
Hi, I remember there was a bug in 16.2.6 for clusters upgraded from older versions where one had to set bluestore_fsck_quick_fix_on_mount to false. Now I have upgraded from 16.2.6 to 16.2.7. Should I now set bluestore_fsck_quick_fix_on_mount to true? And if yes, what would be the command to a
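
For reference, the option can be set cluster-wide through the config database; a sketch, and worth checking against the 16.2.7 release notes first since this is exactly the area the 16.2.6 bug touched:

    # enable the quick-fix fsck for all OSDs; it runs on the next OSD restart
    ceph config set osd bluestore_fsck_quick_fix_on_mount true
    # verify
    ceph config get osd bluestore_fsck_quick_fix_on_mount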

[ceph-users] Re: Manager carries wrong information until killing it

2021-12-14 Thread 涂振南
Hello, we have a recurring, funky problem with managers on Nautilus (and probably also earlier versions): the manager displa
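
A note on the usual workaround for a mgr that shows stale state: failing over to a standby forces a fresh view. A rough sketch, with the active mgr name as a placeholder:

    ceph mgr stat              # see which mgr is currently active
    ceph mgr fail <active-mgr-name>   # fail it so a standby takes over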

[ceph-users] Re: Ceph RESTful APIs and managing Cephx users

2021-12-14 Thread Ernesto Puerta
Hi Michał, You're totally right there. That endpoint is for managing Ceph Dashboard users. The Cephx auth/user management is not yet implemented in the Dashboard. It's planned for Quincy though. Kind Regards, Ernesto On Tue, Dec 14, 2021 at 3:11 PM Micha

[ceph-users] Announcing go-ceph v0.13.0

2021-12-14 Thread John Mulligan
I'm happy to announce another release of the go-ceph API library. This is a regular release following our every-two-months release cadence. https://github.com/ceph/go-ceph/releases/tag/v0.13.0 Changes include additions to the rbd and rados packages. More details are available at the link above.
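
Pulling the new release into an existing Go module is the usual one-liner:

    go get github.com/ceph/go-ceph@v0.13.0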

[ceph-users] Re: Single ceph client usage with multiple ceph cluster

2021-12-14 Thread Anthony D'Atri
At the risk of pedantry, I’d like to make a distinction, because this has tripped people up in the past. Cluster names and config file names are two different things. It’s easy to conflate them, which has caused some people a lot of technical debt and grief. Especially with `rbd-mirror`. C
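
A hedged illustration of the distinction Anthony is making, with "remote" as a made-up name: passing --conf only changes which file is read, while --cluster changes the $cluster variable used to locate both the config file and the keyring.

    ceph --conf /etc/ceph/remote.conf status   # just a different config file
    ceph --cluster remote status               # sets $cluster; expects
                                               # /etc/ceph/remote.conf and
                                               # /etc/ceph/remote.client.admin.keyring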

[ceph-users] Re: Experience reducing size 3 to 2 on production cluster?

2021-12-14 Thread Marco Pizzolo
Hi Martin, Agreed on the min_size of 2. I have no intention of worrying about uptime in the event of a host failure. Once the size of 2 takes effect (and I'm unsure how long that will take), it is our intention to evacuate all OSDs in one of the 4 hosts, in order to migrate the host to the new cluster, wher
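
For the evacuation step, the conventional approach is to drain the host's OSDs and wait for backfill before removing them; a rough sketch with placeholder OSD IDs:

    ceph osd out 12 13 14    # mark the host's OSDs out; data backfills elsewhere
    ceph -s                  # wait until backfill finishes and health is OK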

[ceph-users] Re: Experience reducing size 3 to 2 on production cluster?

2021-12-14 Thread Marco Pizzolo
Hi Joachim, Understood on the risks. Aside from the alt. cluster, we have 3 other copies of the data outside of Ceph, so I feel pretty confident that it's a question of time to repopulate and not data loss. That said, I would be interested in your experience on what I'm trying to do if you've at

[ceph-users] Re: Experience reducing size 3 to 2 on production cluster?

2021-12-14 Thread Linh Vu
I haven't tested this in Nautilus 14.2.22 (or any nautilus) but in Luminous or older, if you go from a bigger size to a smaller size, there was either a bug or a "feature-not-bug" that didn't allow the OSDs to automatically purge the redundant PGs with data copies. I did this on a size=5 to size=3
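
For what it's worth, the size change itself is a single pool setting; watching whether space is actually released afterwards is the part Linh describes. A sketch, with the pool name as a placeholder:

    ceph osd pool set mypool size 2
    ceph osd pool get mypool size
    ceph df      # watch whether the extra replicas are actually trimmed over time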

[ceph-users] ceph-mon pacific doesn't enter to quorum of nautilus cluster

2021-12-14 Thread Michael Uleysky
I am trying to upgrade a three-node nautilus cluster to pacific. I am updating ceph on one node and restarting the daemons. The OSDs are OK, but the monitor cannot enter the quorum. With debug_mon 20/20 I see repeating blocks in the log of the problem monitor, like 2021-12-15T13:34:57.075+1000 7f6e1b417700 10 mon.debian2@1(probin
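
When chasing a mon that won't join, a few commands worth capturing alongside the debug_mon log; a sketch, using the mon name that appears in the log line above:

    ceph versions                         # which daemons run which release
    ceph quorum_status                    # who is in quorum right now
    ceph daemon mon.debian2 mon_status    # run on the mon host; local view of the probing monitor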

[ceph-users] Re: ceph-mon pacific doesn't enter to quorum of nautilus cluster

2021-12-14 Thread Linh Vu
May not be directly related to your error, but they slap a DO NOT UPGRADE FROM AN OLDER VERSION label on the Pacific release notes for a reason... https://docs.ceph.com/en/latest/releases/pacific/ It means please don't upgrade right now. On Wed, Dec 15, 2021 at 3:07 PM Michael Uleysky wrote: >

[ceph-users] Re: Is 100pg/osd still the rule of thumb?

2021-12-14 Thread Linh Vu
Pretty sure this rule of thumb was created during the days of 4TB and 6TB spinning disks. Newer spinning disks and SSD / NVMe are faster so they can have more PGs. Obviously a 16TB spinning disk isn't 4 times faster than a 4TB one, so it's not a linear increase, but I think going closer to 200 shou
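
To check where a cluster currently sits relative to that rule of thumb, and to nudge the autoscaler's target, something like the following; 200 is just the number discussed here, not a recommendation:

    ceph osd df                                         # PGS column shows per-OSD placement groups
    ceph config set global mon_target_pg_per_osd 200    # target used by the pg autoscaler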

[ceph-users] MAX AVAIL capacity mismatch || mimic(13.2)

2021-12-14 Thread Md. Hejbul Tawhid MUNNA
Hi, We are observing that the MAX AVAIL capacity does not reflect the full size of the cluster. We are running the mimic version. Initially we installed 3 OSD hosts containing 8 x 5.5 TB each. At that time max_available was 39 TB. After two years we installed two more servers with the same spec (5.5TB X 8

[ceph-users] Re: ceph-mon pacific doesn't enter to quorum of nautilus cluster

2021-12-14 Thread Chris Dunlop
On Wed, Dec 15, 2021 at 02:05:05PM +1000, Michael Uleysky wrote: I try to upgrade three-node nautilus cluster to pacific. I am updating ceph on one node and restarting daemons. OSD ok, but monitor cannot enter quorum. Sounds like the same thing as: Pacific mon won't join Octopus mons https://t

[ceph-users] Re: MAX AVAIL capacity mismatch || mimic(13.2)

2021-12-14 Thread Janne Johansson
On Wed, 15 Dec 2021 at 07:45, Md. Hejbul Tawhid MUNNA wrote: > Hi, > We are observing MAX-Available capacity is not reflecting the full size of > the cluster. Max avail depends on several factors; one is that the OSD with the least free space will be the one used for calculating it, just bec
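
To see which OSD is driving the number down and whether rebalancing would help, a sketch:

    ceph osd df tree        # look for OSDs with noticeably lower AVAIL / higher %USE
    ceph balancer status    # the balancer module can even out utilization
    # ceph osd reweight-by-utilization   # older alternative to the balancer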