Thomas,
I have a master-branch version of the code to test. The Nautilus
backport https://github.com/ceph/ceph/pull/31956 should be the same.
Using your OSDMap, the code in the master branch, and some additional
changes to osdmaptool, I was able to balance your cluster. The osdmaptool
changes
Philippe,
I have a master-branch version of the code to test. The Nautilus
backport https://github.com/ceph/ceph/pull/31956 should be the same.
Using your OSDMap, the code in the master branch, and some additional
changes to osdmaptool, I was able to balance your cluster. The osdmaptool
changes
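For anyone who wants to try the same kind of offline test, the stock
osdmaptool upmap workflow looks roughly like this (a sketch only - the pool
name, file names and option values are placeholders, and the patched tool
may accept different options):

# ceph osd getmap -o osdmap.bin
# osdmaptool osdmap.bin --upmap out.txt --upmap-pool <pool> --upmap-deviation 1 --upmap-max 100
# cat out.txt

out.txt contains "ceph osd pg-upmap-items ..." commands that can be reviewed
and then applied with "source out.txt".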
Hi,
I'm trying to create a new OSD following the instructions available at
https://docs.ceph.com/docs/master/rados/operations/add-or-rm-osds/
In step 3, I'm instructed to run "ceph-osd -i {osd-num} --mkfs
--mkkey". Unfortunately, it doesn't work:
# ceph-osd -i 3 --mkfs --mkkey
2019-12-11 16:59:58.257
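For context, the preceding steps from that page were roughly the following
(paraphrasing the docs; osd id 3 and the default cluster name are just my
values):

# ceph osd create
# mkdir /var/lib/ceph/osd/ceph-3
  (format and mount a data device at that path)
# ceph-osd -i 3 --mkfs --mkkey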
Excellent sleuthing.
I was able to change the key on my Zabbix server instance and all is happy in
Ceph again.
Thanks,
Reed
> On Dec 11, 2019, at 10:24 AM, Gary Molenkamp wrote:
>
> I dislike replying to my own post, but I found the issue:
>
> Looking at the changelog for 14.2.5, the zabbix
Hi Igor,
you're right indeed, the DB volume is 100G in size for the HDD OSDs.
Knowing this, the actual raw use is 783G - 7x100G = 83G, which is pretty close
to the sum of the files in the HDD pools times the pool size (replication
factor), roughly 25G x 3 = 75G.
Thanks a lot for your explanation of this tiny but
I dislike replying to my own post, but I found the issue:
Looking at the changelog for 14.2.5, the zabbix key
ceph.num_pg_wait_backfill has been renamed to ceph.num_pg_backfill_wait.
This needs to be updated in the zabbix_template.yml
Before the change:
# /usr/bin/zabbix_sender -z controller03
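As a quick manual check of the renamed key, a single value can be sent by
hand (the host name after -s and the value after -o are placeholders for
whatever is configured on the Zabbix side):

# /usr/bin/zabbix_sender -z controller03 -s <zabbix-host> -k ceph.num_pg_backfill_wait -o 0

Once the item key in the imported template matches, zabbix_sender reports the
value as processed instead of failed.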
Piggybacking on this thread to say that I'm seeing the same behavior.
Ubuntu 18.04
zabbix-sender 4.4.3-1+bionic
ceph-mgr 14.2.5-1bionic
I am still getting all metrics on my Zabbix host; it is just the error being
thrown by ceph-mgr.
Reed
> On Dec 11, 2
After updating/restarting the manager to v14.2.5, we are no longer able
to send data to our Zabbix servers.
Ceph reports a non-zero exit status from zabbix_sender, but I have not
been able to identify its cause.
# ceph health detail
HEALTH_WARN Failed to send data to Zabbix
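In case it helps with debugging, the mgr module can also be poked directly
(a guess at useful checks, not a fix):

# ceph zabbix config-show
# ceph zabbix send

plus the mgr log (journalctl -u ceph-mgr@<id>) for the actual zabbix_sender
output.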
Hi Georg,
I suspect your DB device size is around 100 GiB? And the actual total
hdd class size is some 700 GiB (100 GiB * 7 OSDs) less than the reported
19 TiB.
Is the above correct? If so, then the high raw size(s) is caused by the OSD
stats reporting design - it unconditionally includes the full db volu
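A quick way to see this per OSD, assuming a Nautilus cluster:

# ceph osd df tree

The SIZE / RAW USE / DATA / META columns there make it easier to see how much
of the reported raw figure is db/bluefs accounting rather than actual data.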
Hi David,
thanks for the link - I understand the problem now.
I don't think this can be solved cleanly within Ceph alone. There are
other storage systems that also use file system meta objects like .snap dirs and
automatic xattrs for administrative purposes. All these are non-copy objects