I have only this in the default section; I think it is related to not 
having any configuration for some of these OSDs. I 'forgot' to add the 
[osd.x] sections for the most recently added node. But in any case, 
nothing that afaik should make them behave differently.

[osd]
osd journal size = 1024
osd pool default size = 3
osd pool default min size = 2
osd pool default pg num = 8
osd pool default pgp num = 8
# osd objectstore = bluestore
# osd max object size = 134217728
# osd max object size = 26843545600
osd scrub min interval = 172800

And these in the custom [osd.x] sections:

[osd.x]
public addr = 192.168.10.x
cluster addr = 10.0.0.x
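
To double-check what a daemon actually loaded, I can query it over the 
admin socket on the OSD host. A quick sketch (osd.18 is just an example 
id, and this assumes the admin socket is reachable):

ceph daemon osd.18 config get osd_objectstore
ceph daemon osd.18 config show | grep -i objectstore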





-----Original Message-----
From: David Turner [mailto:drakonst...@gmail.com] 
Sent: 22 April 2019 22:34
To: Marc Roos
Cc: ceph-users
Subject: Re: [ceph-users] Osd update from 12.2.11 to 12.2.12

Do you perhaps have anything in the ceph.conf files on the servers with 
those OSDs that would tell the daemons they are filestore OSDs instead 
of bluestore? I'm sure you know that the second part [1] of the output 
in both cases only shows up after an OSD has been rebooted. I'm sure 
this too could be cleaned up by adding that line to the ceph.conf file.

[1] rocksdb_separate_wal_dir = 'false' (not observed, change may require restart)
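
(Roughly, I mean something like the following in ceph.conf on those 
hosts; treat it as a sketch, the section and exact option spelling are 
my assumption:)

[osd]
# pin what the daemons are already using, so the line stops showing up
rocksdb separate wal dir = false
osd objectstore = bluestore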

On Sun, Apr 21, 2019 at 8:32 AM  wrote:




        Just updated Luminous (12.2.11 -> 12.2.12) and set the max_scrubs 
        value back. Why do the OSDs report differently?
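        
        (For context, I set it back at runtime with something along the 
        lines of the command below; exact invocation from memory, so 
        treat it as a sketch:)
        
        ceph tell osd.* injectargs '--osd_max_scrubs=1'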
        
        
        I get these:
        osd.18: osd_max_scrubs = '1' (not observed, change may require restart)
        osd_objectstore = 'bluestore' (not observed, change may require restart)
        rocksdb_separate_wal_dir = 'false' (not observed, change may require restart)
        osd.19: osd_max_scrubs = '1' (not observed, change may require restart)
        osd_objectstore = 'bluestore' (not observed, change may require restart)
        rocksdb_separate_wal_dir = 'false' (not observed, change may require restart)
        osd.20: osd_max_scrubs = '1' (not observed, change may require restart)
        osd_objectstore = 'bluestore' (not observed, change may require restart)
        rocksdb_separate_wal_dir = 'false' (not observed, change may require restart)
        osd.21: osd_max_scrubs = '1' (not observed, change may require restart)
        osd_objectstore = 'bluestore' (not observed, change may require restart)
        rocksdb_separate_wal_dir = 'false' (not observed, change may require restart)
        osd.22: osd_max_scrubs = '1' (not observed, change may require restart)
        osd_objectstore = 'bluestore' (not observed, change may require restart)
        rocksdb_separate_wal_dir = 'false' (not observed, change may require restart)
        
        
        And I get OSDs reporting like this:
        osd.23: osd_max_scrubs = '1' (not observed, change may require restart)
        rocksdb_separate_wal_dir = 'false' (not observed, change may require restart)
        osd.24: osd_max_scrubs = '1' (not observed, change may require restart)
        rocksdb_separate_wal_dir = 'false' (not observed, change may require restart)
        osd.25: osd_max_scrubs = '1' (not observed, change may require restart)
        rocksdb_separate_wal_dir = 'false' (not observed, change may require restart)
        osd.26: osd_max_scrubs = '1' (not observed, change may require restart)
        rocksdb_separate_wal_dir = 'false' (not observed, change may require restart)
        osd.27: osd_max_scrubs = '1' (not observed, change may require restart)
        rocksdb_separate_wal_dir = 'false' (not observed, change may require restart)
        osd.28: osd_max_scrubs = '1' (not observed, change may require restart)
        rocksdb_separate_wal_dir = 'false' (not observed, change may require restart)
        
        
        
        
        
        
        


_______________________________________________
ceph-users mailing list
ceph-users@lists.ceph.com
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com