[ceph-users] Re: PSA: upgrading older clusters without CephFS

2021-08-16 Thread Patrick Donnelly
Hi Alexandre, On Thu, Aug 12, 2021 at 2:00 PM Alexandre Marangone wrote: > > This part confuses me a bit "If your cluster has not used CephFS since > before the Jewel release" > Can you clarify whether this applies to clusters deployed before Jewel > or any cluster deployed until now that has not

[ceph-users] Re: PSA: upgrading older clusters without CephFS

2021-08-16 Thread Patrick Donnelly
Hi Dongdong, On Sun, Aug 8, 2021 at 10:08 PM 陶冬冬 wrote: > > Hi Patrick, > > Thanks a lot for letting us know about this issue! > > By reading your fix[1] carefully, I understand the heart of this issue is > that: > Since Jewel, CephFS introduced a new data structure FSMap (for MultiFS), and > t

[ceph-users] Re: How to safely turn off a ceph cluster

2021-08-16 Thread Kobi Ginon
It is better to stop the clients that write to the Ceph cluster prior to turning off the cluster. I might be stating the obvious: if this is an OpenStack cluster that uses the Ceph cluster, shift the clients to write to a replicated cluster, or divert the traffic to a replicated site. The same goes for a k8s cluster
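For reference, once clients are stopped, a commonly documented flag sequence before powering the cluster down looks roughly like this (a sketch of the usual OSD flags, not taken from the message above):

$ ceph osd set noout        # don't mark OSDs out while they are down
$ ceph osd set norebalance  # avoid data movement during the shutdown window
$ ceph osd set norecover    # avoid recovery traffic when OSDs come back up
$ ceph osd set pause        # pause client I/O at the cluster level
# power off OSD hosts, then MON/MGR hosts; on startup, reverse the order and unset the flags:
$ ceph osd unset pause
$ ceph osd unset norecover
$ ceph osd unset norebalance
$ ceph osd unset noout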

[ceph-users] Dashboard no longer listening on all interfaces after upgrade to 16.2.5

2021-08-16 Thread Oliver Weinmann
Dear All, after a very smooth upgrade from 16.2.4 to 16.2.5 (CentOS 8 Stream), we are no longer able to access the dashboard. The dashboard was accessible before the upgrade. I googled and found a command to change the listening IP for the dashboard, but I wonder why the upgrade should have
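For anyone hitting the same symptom, the bind address of the dashboard module can be inspected and reset via the mgr config keys (a sketch assuming the built-in dashboard module; 0.0.0.0 binds all interfaces):

$ ceph config get mgr mgr/dashboard/server_addr          # what the dashboard currently binds to
$ ceph config set mgr mgr/dashboard/server_addr 0.0.0.0  # listen on all interfaces again
$ ceph mgr module disable dashboard && ceph mgr module enable dashboard   # reload the module to apply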

[ceph-users] RGW Swift & multi-site

2021-08-16 Thread Matthew Vernon
Hi, Are there any issues to be aware of when using RGW's newer multi-site features with the Swift front-end? I've, perhaps unfairly, gathered the impression that the Swift support in RGW gets less love than S3... Thanks, Matthew ps: new email address, as I've moved employer

[ceph-users] SSD disk for OSD detected as type HDD

2021-08-16 Thread mabi
Hello, I noticed that cephadm detects my newly added SSD disk as type HDD as you can see below:
$ ceph orch device ls
Hostname  Path      Type  Serial   Size   Health   Ident  Fault  Available
node1     /dev/sda  hdd   REMOVED  7681G  Unknown  N/A    N/A    No
How can I force the

[ceph-users] Re: OSD swapping on Pacific

2021-08-16 Thread Alexander Sporleder
Thanks. I found that in the release notes of 14.2.22: "This release sets bluefs_buffered_io to true by default to improve performance for metadata heavy workloads. Enabling this option has been reported to occasionally cause excessive kernel swapping under certain workloads. Currently, the most

[ceph-users] Re: Discard / Trim does not shrink rbd image size when disk is partitioned

2021-08-16 Thread Ilya Dryomov
On Fri, Aug 13, 2021 at 9:45 AM Boris Behrens wrote: > > Hi Janne, > thanks for the hint. I was aware of that, but it is good to add that > knowledge to the question for future Google searchers. > > Hi Ilya, > that fixed it. Do we know why the discard does not work when the partition > table is not
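As a quick way to check whether discards actually reach the image, something along these lines can be used (a sketch; the pool, image name and mount point are placeholders):

$ rbd du rbd/myimage       # provisioned vs. actually used space of the image
$ fstrim -v /mnt/myimage   # issue discards for the mounted filesystem, then re-check with rbd du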

[ceph-users] Re: OSD swapping on Pacific

2021-08-16 Thread i...@z1storage.com
Hi, Global swappiness and per-cgroup swappiness are managed separately. When you change the vm.swappiness sysctl, only /sys/fs/cgroup/memory/memory.swappiness changes, but not the memory.swappiness of the services under separate slices (like system.slice, where the Ceph services are running). Check https:
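To illustrate the point, the per-slice value can be checked and adjusted independently of the sysctl (a sketch for the cgroup v1 paths referenced above; slice and unit names may differ per setup):

$ cat /proc/sys/vm/swappiness                                      # global value
$ cat /sys/fs/cgroup/memory/system.slice/memory.swappiness         # value used by services in system.slice
$ echo 10 > /sys/fs/cgroup/memory/system.slice/memory.swappiness   # lower it for that slice only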

[ceph-users] Re: OSD swapping on Pacific

2021-08-16 Thread David Caro
Found an option that seemed to cause some trouble in the past, `bluefs_buffered_io`. It has been disabled/enabled by default a couple of times (disabled in v15.2.2, enabled in v15.2.13), and it seems it might have a big effect on performance and swapping behavior, so it might be a lead. On 08/16 14:10
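If someone wants to test whether this option is involved, it can be toggled at runtime (a sketch, not a recommendation from the thread; keep the metadata-performance impact from the release notes in mind):

$ ceph config get osd bluefs_buffered_io        # check the current value
$ ceph config set osd bluefs_buffered_io false  # disable it for all OSDs
$ ceph config rm osd bluefs_buffered_io         # revert to the built-in default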

[ceph-users] Re: Deployment of Monitors and Managers

2021-08-16 Thread Konstantin Shalygin
Hi > On 14 Aug 2021, at 11:06, Michel Niyoyita > wrote: > > I am going to deploy ceph in production, and I am going to deploy 3 > monitors on 3 different hosts to make a quorum. Is there any > inconvenience if I deploy 2 managers on the same hosts where I deployed >
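With cephadm, co-locating managers with the monitors is just a placement decision, e.g. (a sketch with hypothetical hostnames; the original question did not state the deployment tool):

$ ceph orch apply mon --placement="ceph01 ceph02 ceph03"
$ ceph orch apply mgr --placement="ceph01 ceph02"   # active/standby mgrs on two of the mon hosts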

[ceph-users] Re: OSD swapping on Pacific

2021-08-16 Thread Alexander Sporleder
Hello David, Unfortunately "vm.swappiness" does not change the behavior. Tweaks on the container side (--memory-swappiness and --memory-swap) might make sense, but I did not find any Ceph-related suggestion. On Monday, 16.08.2021 at 13:52 +0200, David Caro wrote: > Afaik the swapping beha
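For completeness, the container-side knobs mentioned above look like this when starting a container manually (a sketch for podman/docker; cephadm-managed containers would need the flags added to their generated unit files instead):

$ podman run -d --memory 8g --memory-swap 8g --memory-swappiness 0 <osd-image>   # equal --memory and --memory-swap leaves no room for swap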

[ceph-users] Re: OSD swapping on Pacific

2021-08-16 Thread David Caro
Afaik the swapping behavior is controlled by the kernel. There might be some tweaks on the container engine side, but you might want to try to change the default behavior by lowering the kernel's 'vm.swappiness': https://access.redhat.com/documentation/en-us/red_hat_enterprise_linux/6/html/pe
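The kernel-side tweak itself is a one-liner (a sketch; 10 is just an example value, and as noted later in the thread it may not affect services scoped to their own cgroup):

$ sysctl vm.swappiness                      # show the current value (default is usually 60)
$ sysctl -w vm.swappiness=10                # lower it at runtime
$ echo 'vm.swappiness=10' > /etc/sysctl.d/99-swappiness.conf   # persist across reboots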

[ceph-users] OSD swapping on Pacific

2021-08-16 Thread Alexander Sporleder
Hello list! We have a containerized Pacific (16.2.5) cluster running CentOS 8.4, and after a few weeks the OSDs start to use swap quite a lot despite free memory. The host has 196 GB of memory and 24 OSDs. "OSD Memory Target" is set to 6 GB.
$ cat /proc/meminfo
MemTotal: 196426616 kB
M
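For context, the memory target mentioned above maps to the osd_memory_target option, which can be checked and adjusted like this (a sketch; the values and the host mask are examples):

$ ceph config get osd osd_memory_target                    # current target in bytes
$ ceph config set osd osd_memory_target 6G                 # 6 GB per OSD, as in the setup described above
$ ceph config set osd/host:<hostname> osd_memory_target 4G # optional per-host override for a host that is tight on memory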

[ceph-users] SSE-C

2021-08-16 Thread Jayanth Babu A
, 'client_region': 'us-east-1'}}
2021-08-16 12:19:11,196 - Thread-3 - botocore.hooks - DEBUG - Event request-created.s3.PutObject: calling handler
2021-08-16 12:19:11,197 - Thread-3 - botocore.hooks - DEBUG - Event request-created.s3.PutObject: calling handler
> 2021-08-16 1

[ceph-users] Re: SSD disk for OSD detected as type HDD

2021-08-16 Thread Etienne Menguy
Hi, Does changing the device class work? https://docs.ceph.com/en/latest/rados/operations/crush-map/#device-classes
ceph osd crush set-device-class [...]
Étienne > On 16 Aug 2021, at 12:05, mabi wrote: > > Hello, > > I
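Filled in, the workflow usually looks like this (a sketch; osd.12 is a hypothetical OSD id, and the auto-assigned class has to be removed before a new one can be set):

$ ceph osd crush rm-device-class osd.12        # drop the auto-detected "hdd" class
$ ceph osd crush set-device-class ssd osd.12   # assign the correct class
$ ceph osd tree | grep osd.12                  # verify the CLASS column now shows ssd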

[ceph-users] Re: Multiple DNS names for RGW?

2021-08-16 Thread Chris Palmer
It's straightforward to add multiple DNS names to an endpoint. We do this for the sort of reasons you suggest. You then don't need separate rgw instances (not for this reason anyway). Assuming default:
* radosgw-admin zonegroup get > zg-default
* Edit zg-default, changing "hostnames" to e.g.
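The remaining steps of that workflow are typically (a sketch assuming the default zonegroup; the hostnames are examples):

$ radosgw-admin zonegroup get > zg-default
# edit zg-default so that "hostnames" lists every name, e.g. ["s3.example.com", "s3-new.example.com"]
$ radosgw-admin zonegroup set < zg-default
$ radosgw-admin period update --commit   # when a realm/period is in use
# then restart the radosgw instances so they pick up the new hostnames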

[ceph-users] Re: Multiple DNS names for RGW?

2021-08-16 Thread Gabriel Tzagkarakis
Hello, we are already doing something similar, exposing multiple hostnames publicly while using just one backend hostname. Since you are using haproxy, in your haproxy backend section you could enforce a single hostname by setting the header like this:
http-request set-header Host example.com
I hope this

[ceph-users] Re: Multiple DNS names for RGW?

2021-08-16 Thread Janne Johansson
On Mon, 16 Aug 2021 at 08:53, Burkhard Linke wrote: > Hi, > we are running RGW behind haproxy for TLS termination and load > balancing. Due to some major changes in our setup, we would like to > start a smooth transition to a new hostname of the S3 endpoint. The > haproxy part should be straightforward