Has anyone adapted their haproxy config to rewrite the new S3 subdomain-style (virtual-hosted) URLs
to the old path-style URLs?
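In case it helps, here is a rough, untested sketch of the kind of haproxy frontend rules I have in mind; the endpoint "s3.example.com", the certificate path, and the backend address are placeholders, and the converters may need adjusting for your haproxy version:
===
frontend rgw_https
    bind :443 ssl crt /etc/haproxy/certs/s3.pem
    # Match virtual-hosted-style requests: <bucket>.s3.example.com
    acl vhost_bucket hdr_end(host) -i .s3.example.com
    # Prepend the bucket (the first label of the Host header) to the path
    http-request set-path /%[req.hdr(host),field(1,.)]%[path] if vhost_bucket
    # Rewrite the Host header back to the bare path-style endpoint
    http-request set-header Host s3.example.com if vhost_bucket
    default_backend rgw

backend rgw
    server rgw1 10.0.0.11:8080 check
===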
___
I expanded the MONs from 1 to 3 by updating the orch service with "ceph orch apply".
"mon_host" in all services (MON, MGR, OSDs) is not updated; it's still the single
host, from source "file".
What's the guidance here to update "mon_host" for all services? I am talking
about the Ceph services, not the client side.
Should I u
# ceph config set osd.0 mon_host
[v2:10.250.50.80:3300/0,v1:10.250.50.80:6789/0,v2:10.250.50.81:3300/0,v1:10.250.50.81:6789/0,v2:10.250.50.82:3300/0,v1:10.250.50.82:6789/0]
Error EINVAL: mon_host is special and cannot be stored by the mon
It seems that the only option is to update ceph.conf and r
Just realized that all config files (/var/lib/ceph///config)
on all nodes are already updated properly; it must be handled as part of adding
MONs. But "ceph config show" still shows only a single host.
mon_host [v2:10.250.50.80:3300/0,v1:10.250.50.80:6789/0]
file
That means I stil
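Since the local config files already carry the new mon_host list, my understanding is that a daemon restart should make each service re-read it; something along these lines is what I would try (the <fsid> path component and osd.0 are just placeholders, and I haven't verified this on this exact release):
===
# cat /var/lib/ceph/<fsid>/osd.0/config    # confirm the file lists all three MONs
# ceph orch daemon restart osd.0           # restart so the daemon re-reads mon_host
# ceph config show osd.0 mon_host          # should now report all three addresses
===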
Hi,
Here is a snippet from top on a node with 10 OSDs.
===
MiB Mem : 257280.1 total, 2070.1 free, 31881.7 used, 223328.3 buff/cache
MiB Swap: 128000.0 total, 126754.7 free, 1245.3 used. 221608.0 avail Mem
PID USER PR NI VIRT RES SHR S %CPU %MEM
Once you have your additional 5 nodes, you can adjust your CRUSH rule to have
failure domain = host, and Ceph will rebalance the data automatically for
you. This will involve quite a bit of data movement (at least 50% of your
data will need to be migrated), so it can take some time. Also the official
reco
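For reference, a minimal sketch of the commands involved once the nodes are in place (the rule and pool names below are just examples): create a replicated rule with host as the failure domain, then point the pool at it.
===
# ceph osd crush rule create-replicated replicated_host default host
# ceph osd pool set <pool-name> crush_rule replicated_host
===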
To clarify, to avoid the PG log taking too much memory, I already reduced
osd_max_pg_log_entries from its default to 1000.
I checked the PG log sizes; they are all under 1100.
ceph pg dump -f json | jq '.pg_map.pg_stats[]' | grep ondisk_log_size
I also checked each OSD; the total is only a few hundred MB.
c
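As a quick cross-check, something like this (untested, using the same jq path as above) should print the summed on-disk PG log size across all PGs:
===
# ceph pg dump -f json | jq '[.pg_map.pg_stats[].ondisk_log_size] | add'
===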
Restarting an OSD frees the buff/cache memory.
What kind of data is there?
Is there any configuration to control this memory allocation?
Thanks!
Tony
From: Tony Liu
Sent: March 27, 2021 06:10 PM
To: ceph-users
Subject: [ceph-users] Re: memory consumption by osd
Depending on your kernel version, MemFree can be misleading. Attend to the
value of MemAvailable instead.
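For example, both of these read the kernel's own estimate of memory that is actually available for new allocations:
===
# grep MemAvailable /proc/meminfo
# free -m    # the "available" column is the same estimate
===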
Your OSDs all look to be well below the target; I wouldn't think you have any
problems. In fact, 256GB for just 10 OSDs is an embarrassment of riches. What
type of drives are you using, a
I don't see any problems yet. All OSDs are working fine.
Just that 1.8GB of free memory concerns me.
I know 256GB of memory for 10 OSDs (16TB HDDs) is a lot; I am planning to
reduce it or increase osd_memory_target (if that's what you meant) to
boost performance. But before doing that, I'd like to underst
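If you do raise the target, a hedged sketch of how that would look (8 GiB per OSD here is only an example value, not a recommendation):
===
# ceph config set osd osd_memory_target 8589934592
# ceph config get osd.0 osd_memory_target    # confirm the value the OSD will use
===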