[ceph-users] Re: Monitors not starting, getting "e3 handle_auth_request failed to assign global_id"

2020-12-08 Thread hoannv46
I have the same issue. My cluster is running version 14.2.11. What is your Ceph version?
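If it helps, a couple of ways to check (a minimal sketch; the first two work even while the mons are down, the last needs a reachable cluster):

  ceph --version        # version of the locally installed binaries
  ceph-mon --version
  ceph versions         # running versions per daemon type, if the cluster answers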

[ceph-users] Ceph mon crash, many osd down

2020-08-20 Thread hoannv46
Hi all. My cluster's mon logs contain many scrub lines like these:

2020-08-20 13:12:16.393 7fe89becc700 0 log_channel(cluster) log [DBG] : scrub ok on 0,1,2,3: ScrubResult(keys {auth=100} crc {auth=3066031631})
2020-08-20 13:12:16.395 7fe89becc700 0 log_channel(cluster) log [DBG] : scrub ok on 0,1,2,3: Scrub

[ceph-users] How can I use bucket policy with a subuser

2020-08-07 Thread hoannv46
Hi all. I have a cluster on version 14.2.10. First, I create the user hoannv:

radosgw-admin user create --uid=hoannv --display-name=hoannv

Then I create the subuser hoannv:subuser1 with:

radosgw-admin subuser create --uid=hoannv --subuser=subuser1 --key-type=swift --gen-secret --access=full hoann
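In case it is useful, a minimal sketch of an S3 bucket policy naming the subuser as principal (bucket name and the exact ARN form are assumptions; note that bucket policies are only evaluated for S3 requests, so a Swift-keyed subuser may need Swift container ACLs instead):

  # policy.json -- mybucket and the principal ARN are placeholders
  {
    "Version": "2012-10-17",
    "Statement": [{
      "Effect": "Allow",
      "Principal": {"AWS": ["arn:aws:iam:::user/hoannv:subuser1"]},
      "Action": ["s3:GetObject", "s3:PutObject", "s3:ListBucket"],
      "Resource": ["arn:aws:s3:::mybucket", "arn:aws:s3:::mybucket/*"]
    }]
  }

  # applied as the bucket owner, e.g. with s3cmd
  s3cmd setpolicy policy.json s3://mybucket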

[ceph-users] Re: 1 pg inconsistent

2020-07-14 Thread hoannv46
You should check the disks on that server; some disks may have bad sectors.
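A minimal sketch of how to narrow it down (the pg id and device are placeholders):

  ceph health detail                                # shows which PG is inconsistent and its acting OSDs
  rados list-inconsistent-obj <pgid> --format=json-pretty
  smartctl -a /dev/sdX                              # on the host of the suspect OSD
  ceph pg repair <pgid>                             # after the bad disk has been dealt with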

[ceph-users] Many osds down, ceph mon has a lot of scrub logs

2020-06-15 Thread hoannv46
Hi all. My cluster has many OSDs down, and one mon log has many lines like:

2020-06-15 18:00:22.575 7fa2deffe700 0 log_channel(cluster) log [DBG] : scrub ok on 0,1,2,3,4: ScrubResult(keys {osd_snap=100} crc {osd_snap=2176495218})
2020-06-15 18:00:22.661 7fa2deffe700 0 log_channel(cluster) log [DBG] : scr
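To see which OSDs are affected while digging through the mon logs, a minimal sketch (assuming an admin keyring on the node):

  ceph -s
  ceph osd tree down      # down OSDs grouped by host
  ceph crash ls           # recent daemon crashes, if the crash module is enabled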

[ceph-users] How to use ceph-volume raw ?

2020-04-13 Thread hoannv46
My cluster is on version 15.2.1. I use ceph-volume raw to add a new OSD, but it gives an error:

ceph-volume raw prepare --bluestore --data /dev/vdb
Running command: /usr/bin/ceph-authtool --gen-print-key
Running co
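For reference, a minimal sketch of the raw workflow as I understand it (the activate flags are an assumption and vary between releases; check ceph-volume raw activate --help):

  ceph-volume raw prepare --bluestore --data /dev/vdb
  ceph-volume raw list                                      # shows the osd id and fsid written to the device
  ceph-volume raw activate --device /dev/vdb --no-systemd   # flags may differ on your release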

[ceph-users] Re: High memory ceph mgr 14.2.7

2020-03-04 Thread hoannv46
I disabled some mgr modules: influx, dashboard, prometheus. After I restart the mgr, its RAM usage increases to 20 GB within a few seconds.
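For reference, a minimal sketch of checking and changing module state (module names as above):

  ceph mgr module ls                 # enabled vs. available modules
  ceph mgr module disable influx
  ceph mgr module disable dashboard
  ceph mgr module disable prometheus
  ceph mgr services                  # confirms which module endpoints are still being served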

[ceph-users] High memory ceph mgr 14.2.7

2020-03-04 Thread hoannv46
Hi all. My cluster is on Ceph version 14.2.6. The mgr process in top:

3104786 ceph 20 0 20.2g 19.4g 18696 S 315.3 62.0 41:32.74 ceph-mgr