Hello,

On Tue, 19 Jul 2016 12:24:01 +0200 Oliver Dzombic wrote:

> Hi,
> 
> I have this in my ceph.conf under the [OSD] section:
> 
> osd_tier_promote_max_bytes_sec = 1610612736
> osd_tier_promote_max_objects_sec = 20000
> 
> #ceph --show-config shows:
> 
> osd_tier_promote_max_objects_sec = 5242880
> osd_tier_promote_max_bytes_sec = 25
> 
> But in fact it's working. Maybe it's some bug in showing the correct value.
> 
> I had problems too, with most of the I/O going to the cold storage.
> 
> After I changed these values (and restarted >every< node inside the
> cluster) the problem was gone.
> 
> So I assume that it's simply showing the wrong values when you call
> show-config. Or there is some other mystery going on.
> 
> I just checked:
> 
> #ceph --show-config | grep osd_tier
> 
> shows:
> 
> osd_tier_default_cache_hit_set_count = 4
> osd_tier_default_cache_hit_set_period = 1200
> 
> while
> 
> #ceph osd pool get ssd_cache hit_set_count
> #ceph osd pool get ssd_cache hit_set_period
> 
> show
> 
> hit_set_count: 1
> hit_set_period: 120
> 
Apples and oranges.

Your first query is about the config options (and thus defaults, as the
output says); the second one is about a specific pool.
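
To illustrate (the pool name ssd_cache is taken from your mail; AFAIK
the osd_tier_default_* options only seed a pool when it is set up as a
cache tier, they don't follow existing pools around afterwards):

#ceph --show-config | grep osd_tier_default
#ceph osd pool get ssd_cache hit_set_count
#ceph osd pool set ssd_cache hit_set_count 4

The first command shows the cluster-wide defaults, the other two read
and write the live value on that specific pool.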

There may still be all sorts of breakage with show-config, and having to
restart OSDs for changes to take effect is inelegant to say the least, but
the above is not a bug.
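
If you want to know what a running OSD actually uses (rather than what
--show-config prints), ask the daemon itself via its admin socket on the
node it runs on, and inject changes without a full restart (osd.0 is
just an example ID):

#ceph daemon osd.0 config get osd_tier_promote_max_bytes_sec
#ceph tell osd.* injectargs '--osd_tier_promote_max_bytes_sec 1610612736'

injectargs should warn you if an option can't be changed at runtime.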

Christian

> 
> So you can obviously ignore the ceph --show-config command. It's simply
> not working correctly.


-- 
Christian Balzer        Network/Systems Engineer                
ch...@gol.com           Global OnLine Japan/Rakuten Communications
http://www.gol.com/