I have the same issue. My cluster is running version 14.2.11. What Ceph version are you on?
___
ceph-users mailing list -- ceph-users@ceph.io
To unsubscribe send an email to ceph-users-le...@ceph.io
Hi all.
My mon logs contain many scrub entries on the mon data, like these:
2020-08-20 13:12:16.393 7fe89becc700 0 log_channel(cluster) log [DBG] : scrub ok on 0,1,2,3: ScrubResult(keys {auth=100} crc {auth=3066031631})
2020-08-20 13:12:16.395 7fe89becc700 0 log_channel(cluster) log [DBG] : scrub ok on 0,1,2,3: Scrub
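If these [DBG] scrub-ok entries are just noise, one possible knob, assuming a 14.x cluster where this option is available, is the mon cluster log file level:

# keep [DBG] cluster-log entries out of the mon log file
# (mon_cluster_log_file_level is assumed available on 14.x)
ceph config set mon mon_cluster_log_file_level info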
Hi all.
I have a cluster on version 14.2.10.
First, I create the user hoannv:
radosgw-admin user create --uid=hoannv --display-name=hoannv
Then I create the subuser hoannv:subuser1 with this command:
radosgw-admin subuser create --uid=hoannv --subuser=subuser1 --key-type=swift --gen-secret --access=full
hoannv
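A quick way to confirm the subuser and its generated swift secret, using only the uid from above:

# prints the user's subusers, keys, and swift_keys sections
radosgw-admin user info --uid=hoannv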
You should check the disks on that server. Some of them may have bad sectors.
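A minimal sketch of such a check, assuming smartmontools is installed; /dev/sdX is a placeholder for the OSD's device:

# overall SMART health verdict
smartctl -H /dev/sdX
# attribute table; watch Reallocated_Sector_Ct and Current_Pending_Sector
smartctl -A /dev/sdX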
___
ceph-users mailing list -- ceph-users@ceph.io
To unsubscribe send an email to ceph-users-le...@ceph.io
Hi all.
My cluster has many OSDs down, and one mon's log shows many lines like:
2020-06-15 18:00:22.575 7fa2deffe700 0 log_channel(cluster) log [DBG] : scrub ok on 0,1,2,3,4: ScrubResult(keys {osd_snap=100} crc {osd_snap=2176495218})
2020-06-15 18:00:22.661 7fa2deffe700 0 log_channel(cluster) log [DBG] : scr
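Some standard first checks for the down OSDs (nothing cluster-specific assumed):

# which OSDs are down and why
ceph health detail
# down OSDs grouped by host
ceph osd tree down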
My cluster is on version 15.2.1.
I use ceph-volume raw to add a new OSD, but it errors out:
ceph-volume raw prepare --bluestore --data /dev/vdb
Running command: /usr/bin/ceph-authtool --gen-print-key
Running co
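The truncated output above usually comes with a full traceback in ceph-volume's own log; assuming the default log directory:

# ceph-volume writes its full log here by default
tail -n 100 /var/log/ceph/ceph-volume.log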
I disabled some modules in the mgr: influx, dashboard, and prometheus.
After I restart the mgr, its RAM usage climbs to 20 GB within a few seconds.
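For reference, disabling and verifying modules is done through the mgr module commands (module names as listed above):

ceph mgr module disable influx
ceph mgr module disable dashboard
ceph mgr module disable prometheus
# confirm which modules are still enabled
ceph mgr module ls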
___
ceph-users mailing list -- ceph-users@ceph.io
To unsubscribe send an email to ceph-users-le...@ceph.io
Hi all.
My cluster is on Ceph version 14.2.6.
The mgr process in top shows:
    PID USER  PR NI  VIRT  RES   SHR   S  %CPU %MEM    TIME+ COMMAND
3104786 ceph  20  0 20.2g 19.4g 18696 S 315.3 62.0 41:32.74 ceph-mgr
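One way to look deeper, as a sketch: query the mgr's admin socket (run on the mgr host; <id> is a placeholder for the mgr name). The heap command is an assumption, present only on builds with tcmalloc:

# list the commands this mgr's admin socket actually supports
ceph daemon mgr.<id> help
# if present (tcmalloc builds), show heap usage
ceph daemon mgr.<id> heap stats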