[ceph-users] Re: cephadm host maintenance

2022-07-13 Thread Adam King
Hello Steven. Arguably, it should, but right now nothing is implemented to do so and you'd have to manually run "ceph mgr fail node2-cobj2-atdev1-nvan.ghxlvw" before it would allow you to put the host in maintenance. It's non-trivial from a technical point of view to have it automatically do t
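A minimal sketch of the manual workaround described above, using the mgr daemon name from this thread; <host> is a placeholder for the host going into maintenance:

    # Fail the active mgr so a standby takes over, then enter maintenance.
    ceph mgr fail node2-cobj2-atdev1-nvan.ghxlvw
    ceph orch host maintenance enter <host>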

[ceph-users] Re: cephadm host maintenance

2022-07-13 Thread Robert Gallop
This brings up a good follow-on: rebooting in general for OS patching. I have not been leveraging the maintenance mode function, as I found it was really no different from just setting noout and doing the reboot. I find that if the box is the active manager, the failover happens quick, painless and au
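The noout-based reboot flow described here, as a minimal sketch:

    # Stop CRUSH from marking the node's OSDs out while it reboots.
    ceph osd set noout
    # ... reboot the node and wait for its OSDs to rejoin ...
    ceph osd unset noout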

[ceph-users] Radosgw issues after upgrade to 14.2.21

2022-07-13 Thread richard.andr...@centro.net
Hello, We recently upgraded a 3-node cluster running Luminous 12.2.13 (ceph repos) on Debian 9 to Nautilus v14.2.21 (Debian stable repo) on Debian 11. For the most part everything seems to be fine, with the exception of access to the bucket defined inside RadosGW. Since the upgrade users are

[ceph-users] rbd iostat requires pool specified

2022-07-13 Thread Reed Dier
Hoping this may be trivial to point me towards: I typically keep a background screen running `rbd perf image iostat`, which shows all of the rbd devices with io and how busy each disk may be at any given moment. Recently, after upgrading everything to the latest Octopus release (15.2.16), it no l
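For context, a sketch of the invocation change the subject line implies: on this Octopus release the command seems to need a pool spec, so "rbd" below is a placeholder pool name:

    # Previously ran with no arguments; now a pool must be named.
    rbd perf image iostat rbd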

[ceph-users] Re: pacific doesn't defer small writes for pre-pacific hdd osds

2022-07-13 Thread Zakhar Kirpichenko
Hi! My apologies for butting in. Please confirm that bluestore_prefer_deferred_size_hdd is a runtime option which doesn't require OSDs to be stopped or rebuilt? Best regards, Zakhar

[ceph-users] Re: pacific doesn't defer small writes for pre-pacific hdd osds

2022-07-13 Thread Dan van der Ster
Yes, that is correct. No need to restart the osds. .. Dan
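Since the option can be changed at runtime, a sketch of doing so through the config database; the 64K value is purely illustrative, not a recommendation from this thread:

    # Applies to all OSDs without restarting them.
    ceph config set osd bluestore_prefer_deferred_size_hdd 65536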

[ceph-users] Re: pacific doesn't defer small writes for pre-pacific hdd osds

2022-07-13 Thread Zakhar Kirpichenko
Many thanks, Dan. Much appreciated! /Z

[ceph-users] Re: CephFS snapshots with samba shadowcopy

2022-07-13 Thread Sebastian Knust
Hi, I am providing CephFS snapshots via Samba with the shadow_copy2 VFS object. I am running CentOS 7 with smbd 4.10.16 for which ceph_snapshots is not available AFAIK. Snapshots are created by a cronjob above the root of my shares with export TZ=GMT mkdir /cephfs/path/.snap/`date +@GMT-%
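A sketch of the setup being described, assuming the conventional shadow_copy2 naming format @GMT-%Y.%m.%d-%H.%M.%S; the snippet above is truncated, so the exact format string is an assumption:

    # Cron job creating a CephFS snapshot named so shadow_copy2 can find it.
    export TZ=GMT
    mkdir /cephfs/path/.snap/$(date +@GMT-%Y.%m.%d-%H.%M.%S)

    # Matching smb.conf share options (assumed, not quoted from the thread):
    #   vfs objects = shadow_copy2
    #   shadow:snapdir = .snap
    #   shadow:format = @GMT-%Y.%m.%d-%H.%M.%S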

[ceph-users] MGR permissions question

2022-07-13 Thread Robert Reihs
Hi, we have discovered this solution for CSI plugin permissions: https://github.com/ceph/ceph-csi/issues/2687#issuecomment-1014360244 We are not sure of the implications of adding the mgr permissions to the (non-admin) user. The documentation seems to be sparse on this topic. Is it ok to give a lim
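For reference, the workaround in the linked issue amounts to adding mgr caps to the CSI user. A hypothetical example; the client name, pool name, and exact caps are placeholders, not confirmed by this thread:

    # Grant limited mgr access alongside the user's existing mon/osd caps.
    ceph auth caps client.csi-rbd \
        mon 'profile rbd' \
        mgr 'allow rw' \
        osd 'profile rbd pool=csi-pool'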

[ceph-users] Re: size=1 min_size=0 any way to set?

2022-07-13 Thread huxia...@horebdata.cn
Just set size=1 and min_size=1 directly; setting min_size to 0 does not make any sense. huxia...@horebdata.cn
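Applying that advice, as a sketch; "mypool" is a placeholder, and note that size=1 leaves the data with no redundancy at all:

    ceph osd pool set mypool size 1
    ceph osd pool set mypool min_size 1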

[ceph-users] Re: size=1 min_size=0 any way to set?

2022-07-13 Thread huxia...@horebdata.cn
As far as I know, one can read and write with it. huxia...@horebdata.cn > But that one makes the pool read-only, I guess? Istvan Szabo

[ceph-users] Re: pacific doesn't defer small writes for pre-pacific hdd osds

2022-07-13 Thread David Orman
Does it make sense to apply the 'quick' fix for the next Pacific release, to minimize impact to users until the improved iteration can be implemented? On Tue, Jul 12, 2022 at 6:16 AM Igor Fedotov wrote: > Hi Dan, > > I can confirm this is a regression introduced by > https://githu

[ceph-users] cephadm host maintenance

2022-07-13 Thread Steven Goodliff
Hi, I'm trying to reboot a ceph cluster one instance at a time via an Ansible playbook which basically runs "cephadm shell ceph orch host maintenance enter", then reboots the instance and exits maintenance, but I get: ALERT: Cannot stop active Mgr daemon, Please switch act
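A sketch of the per-host sequence such a playbook would run, combined with the mgr failover from Adam's reply above; <host> is a placeholder:

    # Fail over the mgr first if <host> holds the active one, otherwise
    # maintenance enter refuses with the ALERT shown above.
    cephadm shell -- ceph mgr fail
    cephadm shell -- ceph orch host maintenance enter <host>
    reboot
    # Once the host is back up:
    cephadm shell -- ceph orch host maintenance exit <host>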

[ceph-users] Re: pacific doesn't defer small writes for pre-pacific hdd osds

2022-07-13 Thread Igor Fedotov
Maybe. My plan is to attempt a general fix and, if that doesn't work within a short time frame, publish a 'quick' one.