[ceph-users] haproxy rewrite for s3 subdomain

2021-03-27 Thread Marc
Has anyone adapted their haproxy config to rewrite the new S3 subdomain-style URLs to the old path style?
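For reference, a minimal HAProxy sketch of one way to do this (the endpoint s3.example.com, the frontend/backend names, and the RGW address are illustrative, not from this thread): pull the bucket name out of the Host header and prepend it to the path. Caveat: rewriting Host and path breaks AWS v4 signatures, so this only helps for anonymous access or clients that sign against the rewritten URL; setting rgw_dns_name on the RGW side is usually the cleaner fix.

    frontend s3_front
        bind :80
        # virtual-hosted-style request: bucket.s3.example.com/key
        acl is_vhost hdr(host) -m reg -i ^[^.]+\.s3\.example\.com
        # the first DNS label of the Host header is the bucket name
        http-request set-var(txn.bucket) hdr(host),lower,word(1,.) if is_vhost
        # rewrite to path style: s3.example.com/bucket/key
        http-request set-path /%[var(txn.bucket)]%[path] if is_vhost
        http-request set-header Host s3.example.com if is_vhost
        default_backend rgw

    backend rgw
        server rgw1 127.0.0.1:7480 check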

[ceph-users] Re: Do I need to update ceph.conf and restart each OSD after adding more MONs?

2021-03-27 Thread Tony Liu
I expanded MON from 1 to 3 by updating the orch service with "ceph orch apply". "mon_host" in all services (MON, MGR, OSDs) is not updated; it still shows a single host, from source "file". What's the guidance here to update "mon_host" for all services? I am talking about Ceph services, not the client side. Should I u…
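For context, the expansion itself and a per-daemon check look like this (hostnames are placeholders; "ceph config show" reports both the value a daemon is using and its source):

    # grow the mon service to three hosts
    ceph orch apply mon --placement="3 host1 host2 host3"
    # what an individual daemon currently believes, and where it came from
    ceph config show osd.0 mon_host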

[ceph-users] Re: Do I need to update ceph.conf and restart each OSD after adding more MONs?

2021-03-27 Thread Tony Liu
# ceph config set osd.0 mon_host [v2:10.250.50.80:3300/0,v1:10.250.50.80:6789/0,v2:10.250.50.81:3300/0,v1:10.250.50.81:6789/0,v2:10.250.50.82:3300/0,v1:10.250.50.82:6789/0]
Error EINVAL: mon_host is special and cannot be stored by the mon

It seems that the only option is to update ceph.conf and r…
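Since mon_host cannot live in the mon config database, the supported places for it are the local ceph.conf or DNS SRV records. A minimal ceph.conf sketch with the three mons from this thread (the exact bracket/comma layout may vary by release):

    [global]
    mon_host = [v2:10.250.50.80:3300/0,v1:10.250.50.80:6789/0],[v2:10.250.50.81:3300/0,v1:10.250.50.81:6789/0],[v2:10.250.50.82:3300/0,v1:10.250.50.82:6789/0]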

[ceph-users] Re: Do I need to update ceph.conf and restart each OSD after adding more MONs?

2021-03-27 Thread Tony Liu
Just realized that all config files (/var/lib/ceph/<fsid>/<daemon>/config) on all nodes are already updated properly. It must be handled as part of adding MONs. But "ceph config show" shows only a single host:

mon_host   [v2:10.250.50.80:3300/0,v1:10.250.50.80:6789/0]   file

That means I stil…
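"ceph config show" reports the value the running daemon loaded at startup (hence source "file"), so a restart is needed before it reflects the regenerated config. A sketch for a cephadm-managed OSD (<fsid> is a placeholder for your cluster fsid):

    systemctl restart ceph-<fsid>@osd.0.service
    ceph config show osd.0 mon_host   # should now list all three mons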

[ceph-users] memory consumption by osd

2021-03-27 Thread Tony Liu
Hi, Here is a snippet from top on a node with 10 OSDs.
===
MiB Mem : 257280.1 total,   2070.1 free,  31881.7 used, 223328.3 buff/cache
MiB Swap: 128000.0 total, 126754.7 free,   1245.3 used. 221608.0 avail Mem

  PID USER   PR  NI  VIRT  RES  SHR  S  %CPU  %MEM
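To attribute the "used" portion (as opposed to buff/cache, which is kernel page cache), the per-OSD breakdown is available via the admin socket; a quick check, assuming defaults:

    # what each OSD is allowed to consume (default is 4 GiB)
    ceph config get osd osd_memory_target
    # per-OSD breakdown: bluestore cache, pglog, and other pools
    ceph daemon osd.0 dump_mempools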

[ceph-users] Re: Can I create 8+2 Erasure coding pool on 5 node?

2021-03-27 Thread Christian Wuerdig
Once you have your additional 5 nodes, you can adjust your CRUSH rule to have failure domain = host, and Ceph will rebalance the data automatically for you (see the sketch below). This will involve quite a bit of data movement (at least 50% of your data will need to be migrated), so it can take some time. Also, the official reco…
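For illustration, repointing an EC pool at a host-failure-domain rule might look like this (the profile, rule, and pool names are placeholders):

    # EC profile with k=8, m=2 and host-level failure domain
    ceph osd erasure-code-profile set ec-8-2-host k=8 m=2 crush-failure-domain=host
    ceph osd crush rule create-erasure ec82-host-rule ec-8-2-host
    # repoint the pool; Ceph starts rebalancing immediately
    ceph osd pool set mypool crush_rule ec82-host-rule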

[ceph-users] Re: memory consumption by osd

2021-03-27 Thread Tony Liu
To clarify: to avoid the PG log taking too much memory, I already lowered osd_max_pg_log_entries from the default to 1000. I checked the PG log sizes; they are all under 1100.

ceph pg dump -f json | jq '.pg_map.pg_stats[]' | grep ondisk_log_size

I also checked each OSD. The total is only a few hundred MB. c…
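A jq one-liner to total that across the cluster, using the same pg dump structure as above:

    # sum ondisk_log_size over all PGs (a rough upper bound on pglog memory)
    ceph pg dump -f json 2>/dev/null | jq '[.pg_map.pg_stats[].ondisk_log_size] | add'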

[ceph-users] Re: memory consumption by osd

2021-03-27 Thread Tony Liu
Restarting an OSD frees buff/cache memory. What kind of data is there? Is there any configuration to control this memory allocation? Thanks! Tony
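buff/cache in top is the kernel page cache, not memory owned by the ceph-osd processes; the kernel reclaims it on demand, which is why an OSD restart appears to "free" it. The same effect can be seen without touching the OSDs (needs root, and briefly hurts read performance):

    # drop the clean page cache only
    sync && echo 1 > /proc/sys/vm/drop_caches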

[ceph-users] Re: memory consumption by osd

2021-03-27 Thread Anthony D'Atri
Depending on your kernel version, MemFree can be misleading; attend to the value of MemAvailable instead. Your OSDs all look to be well below the target, so I wouldn't think you have any problems. In fact, 256GB for just 10 OSDs is an embarrassment of riches. What type of drives are you using, a…
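Both values come straight from /proc/meminfo and are easy to compare:

    grep -E 'MemFree|MemAvailable' /proc/meminfo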

[ceph-users] Re: memory consumption by osd

2021-03-27 Thread Tony Liu
I don't see any problems yet; all OSDs are working fine. It's just the 1.8GB of free memory that concerns me. I know 256GB of memory for 10 OSDs (16TB HDD) is a lot; I am planning to reduce it, or to increase osd_memory_target (if that's what you meant) to boost performance. But before doing that, I'd like to underst…
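If raising the target, something like this would apply cluster-wide (the 16 GiB figure is only an example; at 10 OSDs it would still leave plenty of headroom on a 256GB node):

    # raise the per-OSD memory target from the 4 GiB default to 16 GiB
    ceph config set osd osd_memory_target 17179869184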