[ceph-users] Re: mon is stuck in leveldb and costs nearly 100% cpu

2017-02-12 Thread Chenyehua
My ceph version is 10.2.5 -Original Message- From: Shinobu Kinjo [mailto:ski...@redhat.com] Sent: February 12, 2017 13:12 To: chenyehua 11692 (RD) Cc: kc...@redhat.com; ceph-users@lists.ceph.com Subject: Re: [ceph-users] mon is stuck in leveldb and costs nearly 100% cpu Which Ceph version are you using? On Sat

[ceph-users] Re: mon is stuck in leveldb and costs nearly 100% cpu

2017-02-12 Thread Chenyehua
Sorry, I made a mistake, the ceph version is actually 0.94.5 -Original Message- From: chenyehua 11692 (RD) Sent: February 13, 2017 9:40 To: 'Shinobu Kinjo' Cc: kc...@redhat.com; ceph-users@lists.ceph.com Subject: Re: [ceph-users] mon is stuck in leveldb and costs nearly 100% cpu My ceph version is 10.2.5 -

Re: [ceph-users] Re: mon is stuck in leveldb and costs nearly 100% cpu

2017-02-12 Thread Shinobu Kinjo
O.k., that's a reasonable answer. Would you run the following on all hosts that the MONs are running on: #* ceph --admin-daemon /var/run/ceph/ceph-mon.`hostname -s`.asok config show | grep leveldb_log Anyway, you can compact the leveldb store at runtime: #* ceph tell mon.`hostname -s` compact And you should set
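For reference, a minimal shell sketch of the two commands quoted above, plus one commonly related monitor option. The socket path and the `hostname -s` expansion assume a default /var/run/ceph layout, and mon_compact_on_start is an added assumption on my part, not part of the truncated reply:

    # Show the leveldb-related options the running monitor was started with
    ceph --admin-daemon /var/run/ceph/ceph-mon.$(hostname -s).asok config show | grep leveldb_log

    # Compact the monitor's leveldb store online, without restarting the daemon
    ceph tell mon.$(hostname -s) compact

    # Assumed follow-up (ceph.conf, [mon] section): compact automatically at startup
    # mon_compact_on_start = true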

[ceph-users] Anyone using LVM or ZFS RAID1 for boot drives?

2017-02-12 Thread Alex Gorbachev
Hello, with the preference for IT mode HBAs for OSDs and journals, what redundancy method do you guys use for the boot drives? Some options beyond RAID1 at the hardware level that we can think of: - LVM - ZFS RAID1 mode - SATADOM with dual drives - Single SSD like the journal drives, since they'd fail
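Hedged sketches of the listed software options, for readers who want the concrete commands. The device names, pool name, and volume group below are placeholders, not taken from the thread, and making any of these actually bootable involves distro-specific bootloader steps not shown here:

    # ZFS RAID1: a mirrored pool across two whole disks
    zpool create -o ashift=12 bootpool mirror /dev/sda /dev/sdb

    # Linux software RAID1 with mdadm, the classic alternative
    mdadm --create /dev/md0 --level=1 --raid-devices=2 /dev/sda1 /dev/sdb1

    # LVM: convert an existing logical volume to a two-copy RAID1 LV
    lvconvert --type raid1 -m 1 vg0/root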

Re: [ceph-users] Anyone using LVM or ZFS RAID1 for boot drives?

2017-02-12 Thread Christian Balzer
Hello, On Sun, 12 Feb 2017 22:22:30 -0500 Alex Gorbachev wrote: > Hello, with the preference for IT mode HBAs for OSDs and journals, > what redundancy method do you guys use for the boot drives. Some > options beyond RAID1 at hardware level we can think of: > Not really that Ceph specific, but.

[ceph-users] OSDs cannot match up with fast OSD map changes (epochs) during recovery

2017-02-12 Thread Andreas Gerstmayr
Hi, Due to a faulty upgrade from Jewel 10.2.0 to Kraken 11.2.0, our test cluster has been unhealthy for about two weeks and can't recover by itself anymore (unfortunately I skipped the upgrade to 10.2.5 because I missed the ".z" in "All clusters must first be upgraded to Jewel 10.2.z"). Immediately after
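A small sketch of how one can see how far an OSD lags behind the monitors' latest osdmap epoch during a recovery like this; osd.0 is a placeholder, and the daemon command has to be run on the host that carries that OSD:

    # Latest osdmap epoch according to the monitors (first line reads "epoch N")
    ceph osd dump | head -1

    # Oldest and newest map epochs currently held by one OSD
    ceph daemon osd.0 status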

[ceph-users] Why does ceph-client.admin.asok disappear after some running time?

2017-02-12 Thread 许雪寒
Hi, everyone. I'm doing some stress testing with ceph, librbd and fio. During the test, I want to "perf dump" the client's perf data. However, each time I tried to do "perf dump" on the client, the "asok" file of librbd had disappeared. I'm sure that at the beginning of the fio run, the client's
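A sketch of the usual way to keep a per-process admin socket available for librbd clients, assuming the standard ceph.conf metavariables; the librbd admin socket typically exists only while the process that created it keeps the image open, and the socket filename below is illustrative rather than the poster's actual path:

    # ceph.conf on the client: give every librbd process its own socket name
    [client]
        admin socket = /var/run/ceph/$cluster-$name.$pid.$cctid.asok

    # While fio is still running, dump the live perf counters from that socket
    ceph --admin-daemon /var/run/ceph/ceph-<client-name>.<pid>.<cctid>.asok perf dump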