[ceph-users] Constant Compaction on one mimic node

2019-03-17 Thread Alex Litvak
Hello everyone, I am getting a huge number of messages on one out of three nodes showing "Manual compaction starting" all the time. I see no such log entries on the other nodes in the cluster. Mar 16 06:40:11 storage1n1-chi docker[24502]: debug 2019-03-16 06:40:11.441 7f6967af4700 4 rocksdb ...
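The "4 rocksdb" prefix suggests these lines come from the mon's RocksDB logging subsystem, whose default level is 4/5. A minimal sketch for checking and lowering it at runtime, assuming the daemon name mon.storage1n1-chi and a reachable admin socket (illustrative commands, not from the thread):

  # Show the current RocksDB debug level for this mon
  ceph daemon mon.storage1n1-chi config show | grep debug_rocksdb
  # Lower it so only warnings and errors are logged (runtime change, not persisted)
  ceph daemon mon.storage1n1-chi config set debug_rocksdb 1/5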

[ceph-users] How to lower log verbosity

2019-03-17 Thread Alex Litvak
Hello everyone, As I am troubleshooting an issue I see logs literally littered with messages such as below. I searched the documentation and couldn't find a specific debug knob to turn. I see some debugging is on by default, but I don't need to see the stuff below, especially mgr and client repeating. A ...
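For reference, a sketch of the usual ways to turn down a noisy debug subsystem on Mimic, using debug_mgrc and debug_ms as illustrative choices (pick whichever subsystems actually appear in the offending lines):

  # Persistently, via the centralized config database (Mimic and later)
  ceph config set global debug_mgrc 0/0
  ceph config set global debug_ms 0/0
  # Or per daemon at runtime, via the admin socket
  ceph daemon mon.$(hostname -s) config set debug_mgrc 0/0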

Re: [ceph-users] Constant Compaction on one mimic node

2019-03-17 Thread Alex Litvak
I did some additional cleanup and restarted the mon on all nodes. Manual compaction is now shown on all nodes. Is this normal operating mode? As it seems to be sporadic, could it have an effect on performance, i.e. cause slow ops? Is there a way to limit it, and is there a document that explains tho...

[ceph-users] Cephfs error

2019-03-17 Thread Marc Roos
2019-03-17 21:59:58.296394 7f97cbbe6700 0 -- 192.168.10.203:6800/1614422834 >> 192.168.10.43:0/1827964483 conn(0x55ba9614d000 :6800 s=STATE_OPEN pgs=8 cs=1 l=0).fault server, going to standby What does this mean?

Re: [ceph-users] How to lower log verbosity

2019-03-17 Thread Marc Roos
I am not sure if it is any help, but this gets you some debug settings: ceph daemon osd.0 config show | grep debug | grep "[0-9]/[0-9]" And e.g. with such a loop you can set them to 0/0: logarr[debug_compressor]="1/5" logarr[debug_bluestore]="1/5" logarr[debug_bluefs]="1/5" logarr[debug_bdev...
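A minimal sketch of the kind of loop described above, assuming a bash associative array and osd.0 as the target; this is a reconstruction, not the original script (the debug_bdev value is cut off in the preview, so its listed value is assumed):

  #!/bin/bash
  declare -A logarr
  logarr[debug_compressor]="1/5"
  logarr[debug_bluestore]="1/5"
  logarr[debug_bluefs]="1/5"
  logarr[debug_bdev]="1/5"   # value truncated in the archive preview; assumed
  # Set each listed subsystem to 0/0 on osd.0 at runtime via the admin socket
  for key in "${!logarr[@]}"; do
      ceph daemon osd.0 config set "$key" 0/0
  done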

Re: [ceph-users] [Bluestore] Some of my osd's uses BlueFS slow storage for db - why?

2019-03-17 Thread Konstantin Shalygin
Yes, I was in a similar situation initially, where I had deployed my OSDs with 25GB DB partitions and, after 3GB of DB was used, everything else was going into slowDB on disk. From memory 29GB was just enough to make the DB fit on flash, but 30GB is a safe round figure to aim for. With a 30GB DB partit...
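A sketch of one way to check whether an OSD's RocksDB has already spilled onto the slow device, using the BlueFS perf counters (assumes osd.0 and access to its admin socket):

  ceph daemon osd.0 perf dump bluefs | grep -E 'db_total_bytes|db_used_bytes|slow_used_bytes'
  # A non-zero slow_used_bytes means part of the DB currently lives on the slow (HDD) device.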

[ceph-users] Rebuild after upgrade

2019-03-17 Thread Brent Kennedy
I finally received approval to upgrade our old firefly (0.8.7) cluster to Luminous. I started the upgrade, upgrading to hammer (0.94.10), then jewel (10.2.11), but after jewel I ran the "ceph osd crush tunables optimal" command, and then "ceph -s" showed 60% of the objects were misplaced. Now th...
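A sketch of the read-only commands typically used to see what the tunables change did before deciding whether to wait out the rebalance (illustrative, not from the thread):

  ceph osd crush show-tunables   # which tunables profile/values are now in effect
  ceph -s                        # health summary, including the misplaced object percentage
  ceph pg stat                   # per-PG summary of the same data movement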

Re: [ceph-users] Rebuild after upgrade

2019-03-17 Thread Hector Martin
On 18/03/2019 13:24, Brent Kennedy wrote: I finally received approval to upgrade our old firefly (0.8.7) cluster to Luminous. I started the upgrade, upgrading to hammer (0.94.10), then jewel (10.2.11), but after jewel, I ran the "ceph osd crush tunables optimal" command, then the "ceph -s" command sh...

Re: [ceph-users] Constant Compaction on one mimic node

2019-03-17 Thread Konstantin Shalygin
I am getting a huge number of messages on one out of three nodes showing "Manual compaction starting" all the time. I see no such log entries on the other nodes in the cluster. Mar 16 06:40:11 storage1n1-chi docker[24502]: debug 2019-03-16 06:40:11.441 7f6967af4700 4 rocksdb: [/home/jenkins...