Something like this works…
# HAProxy configuration
#--
# Global settings
#--
global
    log /dev/log local0
    log /dev/log local1 notice
    user haproxy
    group haproxy
    chroot /var/lib/haproxy
    daemon
    stats socket /var/lib/haproxy/stats mode
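If this is fronting radosgw on its default port 7480, the rest of the file is usually just a defaults/frontend/backend trio along these lines (bind address, server names and IPs are placeholders for your environment):

defaults
    mode http
    timeout connect 5s
    timeout client 30s
    timeout server 30s

frontend rgw_frontend
    bind *:80
    default_backend rgw_backend

backend rgw_backend
    balance roundrobin
    server rgw1 192.168.0.11:7480 check
    server rgw2 192.168.0.12:7480 check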
I would say production should have 5 MON servers
From: huxia...@horebdata.cn
Date: Friday, February 12, 2021 at 7:59 AM
To: Marc, Michal Strnad, ceph-users
Subject: [ceph-users] Re: Backups of monitor
Normally any production Ceph cluster will have at least 3 MONs; does it really
need a backup?
5, to be able to do maintenance (quorum is 2 out of 3, or 3 out of 5…). So if you are
doing maintenance on a mon host in a 5-mon cluster, you will still have the 3 needed
for quorum even if another mon fails.
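For reference, the quorum size is floor(N/2) + 1, so 3 mons give a quorum of 2 and 5 mons a quorum of 3. If you want to watch the quorum while doing maintenance, the standard commands show it:

ceph mon stat
ceph quorum_status --format json-pretty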
From: huxia...@horebdata.cn
Date: Friday, February 12, 2021 at 8:42 AM
To: Freddy Andersen, Marc, Michal Strnad, ceph-users
You need to enable users with tenants …
https://docs.ceph.com/en/latest/radosgw/multitenancy/
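In practice that means creating each customer's users under their own tenant; a minimal sketch, with "acme" and "alice" as placeholder names:

radosgw-admin --tenant acme --uid alice --display-name "Alice" user create
# Buckets created by this user live under the "acme" tenant, so another tenant
# can reuse the same bucket name; cross-tenant access uses the "tenant:bucket" form.

Each tenant then gets its own bucket namespace on the same zone.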
From: Simon Pierre DESROSIERS
Date: Monday, February 22, 2021 at 7:27 AM
To: ceph-users@ceph.io
Subject: [ceph-users] multiple-domain for S3 on rgws with same ceph backend on
one zone
Hello,
We have
I would use croit
From: Drew Weaver
Date: Wednesday, March 3, 2021 at 7:45 AM
To: 'ceph-users@ceph.io'
Subject: [ceph-users] Questions RE: Ceph/CentOS/IBM
Howdy,
After the IBM acquisition of Red Hat, the landscape for CentOS quickly changed.
As I understand it right now Ceph 14 is the last versi
1. Do not use RAID for OSD disks... 1 OSD per disk (see the sketch after this list).
2-3. I would have 3 or more OSD nodes... more is better for when you have
issues or need maintenance. We use VMs for the mon nodes, with a mgr on each mon node.
5 mons is the recommendation for a production cluster, but you can be OK with 3 for a
small cluster.
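For point 1, a sketch of what one OSD per disk looks like with ceph-volume (device names are placeholders; run on each OSD host):

# each raw disk becomes its own OSD, no RAID underneath
ceph-volume lvm create --data /dev/sdb
ceph-volume lvm create --data /dev/sdc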