[ceph-users] Re: RGW Service SSL HAProxy.cfg

2023-02-17 Thread Jimmy Spets
The config file for HAProxy is generated by Ceph, and I think it should include "ssl verify none" on each backend line, since the config uses plain ip:port notation. What I wonder is whether my YAML config for the RGW and ingress services is missing something, or if it is a bug in the HAProxy config file generator.
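As a rough illustration of what is being asked for (the backend name, server names and addresses below are placeholders, not the values cephadm generates), the relevant part of an haproxy.cfg backend would look something like:

    backend rgw_backend
        balance roundrobin
        # each RGW is addressed by plain ip:port, so when the backend speaks
        # SSL the certificate check has to be disabled explicitly
        server rgw0 192.0.2.11:443 ssl verify none check
        server rgw1 192.0.2.12:443 ssl verify none check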

[ceph-users] Re: RGW Service SSL HAProxy.cfg

2023-02-16 Thread Jimmy Spets
I forgot to add that the Ceph version is 17.2.5, managed with cephadm. /Jimmy

[ceph-users] RGW Service SSL HAProxy.cfg

2023-02-16 Thread Jimmy Spets
Hi, I am trying to set up the “High availability service for RGW” using SSL both to HAProxy and from HAProxy to the RGW backend. The SSL certificate gets applied to both HAProxy and the RGW. If I use the RGW instances directly, they work as expected. The RGW config is as follows: servic
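The spec itself is cut off above; purely as a sketch of the general shape (service ids, placement, ports, VIP and certificates here are made-up placeholders, not the poster's values), an SSL-enabled RGW service plus its ingress service are usually declared along these lines:

    service_type: rgw
    service_id: myrgw
    placement:
      count: 2
    spec:
      ssl: true
      rgw_frontend_port: 8443
      rgw_frontend_ssl_certificate: |
        -----BEGIN CERTIFICATE-----
        ...
    ---
    service_type: ingress
    service_id: rgw.myrgw
    placement:
      count: 2
    spec:
      backend_service: rgw.myrgw
      virtual_ip: 192.0.2.100/24
      frontend_port: 443
      monitor_port: 1967
      ssl_cert: |
        -----BEGIN CERTIFICATE-----
        ...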

[ceph-users] Re: Upgrade/migrate host operating system for ceph nodes (CentOS/Rocky)

2022-11-04 Thread Jimmy Spets
I have upgraded the majority of the nodes in a cluster that I manage from CentOS 8.6 to AlmaLinux 9. We have done the upgrade by emptying one node at a time, then reinstalling it and bringing it back into the cluster. With AlmaLinux 9 I install the default "Server without GUI" packages and run wi
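A hedged sketch of that empty-reinstall-rejoin cycle with the orchestrator (the hostname and address are placeholders, and flag details can vary between releases) would be:

    # drain all daemons, including OSDs, off the node before reinstalling it
    ceph orch host drain node05
    # once "ceph orch ps node05" shows nothing left, drop the host
    ceph orch host rm node05
    # ...reinstall AlmaLinux 9, restore the cephadm SSH key...
    # then bring the node back into the cluster
    ceph orch host add node05 192.0.2.15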

[ceph-users] Re: Quincy recovery load

2022-07-06 Thread Jimmy Spets
system, idle, iowait do you see? > On Jul 6, 2022, at 5:32 AM, Jimmy Spets wrote: > Hi all > I have a 10 node cluster with fairly modest hardware (6 HDD, 1 shared NVMe for DB on each) on the nodes that I use for archival.
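For what it is worth, one common way to get that user/system/idle/iowait breakdown (not something specified in the thread) is:

    # average CPU breakdown (%user/%system/%iowait/%idle), refreshed every 5 s
    iostat -c 5
    # or the %Cpu(s) summary line from top in batch mode
    top -b -n 1 | head -n 5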

[ceph-users] Re: Quincy recovery load

2022-07-06 Thread Jimmy Spets
Thanks for your reply. What I meant by high load was the load as seen by the top command; all the servers have a load average over 10. I added one more node to add more space. This is what I get from ceph status: cluster: id: health: HEALTH_WARN 2 failed cephadm daemon(s
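To see which two cephadm daemons are failing, the usual starting points (an assumption here, not quoted from the thread) are:

    # explains each HEALTH_WARN item, including the failed daemon names
    ceph health detail
    # lists every managed daemon with its current state; errored ones stand out
    ceph orch ps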

[ceph-users] Quincy recovery load

2022-07-06 Thread Jimmy Spets
Hi all, I have a 10 node cluster with fairly modest hardware (6 HDD, 1 shared NVMe for DB on each) on the nodes that I use for archival. After upgrading to Quincy I noticed that the load average on my servers is very high during recovery or rebalance. Changing the OSD recovery priority does not work, I assume
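Quincy moved the OSDs to the mClock scheduler, which is a likely reason the old recovery priority settings appear to have no effect; one possible way to rebalance client versus recovery traffic (profile names as in upstream Quincy, behaviour may differ between 17.2.x releases) is:

    # bias the mClock scheduler towards client I/O instead of recovery
    ceph config set osd osd_mclock_profile high_client_ops
    # alternatively, fall back to the previous scheduler so the old knobs apply
    # (this one only takes effect after the OSDs are restarted)
    ceph config set osd osd_op_queue wpq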

[ceph-users] Re: MDS upgrade to Quincy

2022-04-22 Thread Jimmy Spets
Does cephadm automatically reduce ranks to 1, or does that have to be done manually? /Jimmy On Thu, Apr 21, 2022 at 3:30 PM Patrick Donnelly wrote: > On Wed, Apr 20, 2022 at 8:29 AM Chris Palmer wrote: > > The Quincy release notes state that "MDS upgrades no longer require all > > standby
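If it does have to be done manually, the rank reduction itself is a filesystem setting; a minimal sketch (the filesystem name "cephfs" and the rank counts are placeholders) is:

    # before upgrading the MDS daemons: shrink to a single active rank
    ceph fs set cephfs max_mds 1
    # after the upgrade completes: restore the previous number of active ranks
    ceph fs set cephfs max_mds 2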

[ceph-users] Replace HDD with cephadm

2022-03-10 Thread Jimmy Spets
Hello, I have a Ceph Pacific cluster managed by cephadm. The nodes have six HDDs and one NVMe that is shared between the six HDDs. The OSD spec file looks like this:

    service_type: osd
    service_id: osd_spec_default
    placement:
      host_pattern: '*'
    data_devices:
      rotational: 1
    db_devices:
      rotational: 0
      si
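The thread title is about swapping a failed HDD under that spec; a hedged sequence for doing so with the orchestrator (the OSD id 12 is a placeholder, and --zap assumes the DB LV on the shared NVMe should be wiped for reuse) is:

    # schedule the OSD for replacement; --replace keeps the OSD id reserved,
    # --zap cleans the data disk and its DB device so they can be reused
    ceph orch osd rm 12 --replace --zap
    # after the physical disk swap, cephadm recreates the OSD from the
    # existing osd_spec_default service spec on the new drive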

[ceph-users] Cephadm cluster network

2020-06-07 Thread jimmy . spets
I am new to Ceph, so I hope this is not a question of me not reading the documentation well enough. I have set up a small cluster to learn with, three physical hosts, each with two NICs. The cluster is up and running, but I have not figured out how to tie the OSDs to my second interface for a sep
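What this is heading towards is the cluster_network option; a hedged example of pointing OSD replication traffic at the second interface's subnet (the subnet below is made up) is:

    # use the second NIC's subnet for OSD replication/heartbeat traffic
    ceph config set global cluster_network 192.168.100.0/24
    # the OSD daemons need to be restarted before they bind to the new network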