This is what leads me to believe other settings are being referred to as
well:
https://ceph.com/community/new-luminous-rados-improvements/

*"There are dozens of documents floating around with long lists of Ceph
configurables that have been tuned for optimal performance on specific
hardware or for specific workloads.  In most cases these ceph.conf
fragments tend to induce funny looks on developers’ faces because the
settings being adjusted seem counter-intuitive, unrelated to the
performance of the system, and/or outright dangerous.  Our goal is to make
Ceph work as well as we can out of the box without requiring any tuning at
all, so we are always striving to choose sane defaults.  And generally, we
discourage tuning by users. "*

To me it's not just the bluestore settings / ssd vs. hdd distinction they're
talking about ("dozens of documents floating around"... "our goal...
without requiring any tuning at all").  Am I off base?

 Regards

On Thu, Jul 12, 2018 at 9:12 PM, Konstantin Shalygin <k0...@k0ste.ru> wrote:

>>   I saw this in the Luminous release notes:
>>
>>   "Each OSD now adjusts its default configuration based on whether the
>> backing device is an HDD or SSD. Manual tuning generally not required"
>>
>>   Which tuning in particular?  The ones in my configuration are
>> osd_op_threads, osd_disk_threads, osd_recovery_max_active,
>> osd_op_thread_suicide_timeout, and osd_crush_chooseleaf_type, among
>> others.  Can I rip these out when I upgrade to
>> Luminous?
>>
>
> This means that some "bluestore_*" settings are tuned for NVMe/HDD separately.
>
> Also with Luminous we have:
>
> osd_op_num_shards_(ssd|hdd)
>
> osd_op_num_threads_per_shard_(ssd|hdd)
>
> osd_recovery_sleep_(ssd|hdd)
>
>
>
>
> k
>
>
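
For what it's worth, a quick way to check this on a running cluster is the
OSD admin socket (osd.0 below is just a placeholder, substitute one of your
own OSD ids):

    # show only the options that differ from the built-in defaults
    ceph daemon osd.0 config diff

    # check the new hdd/ssd-specific values mentioned above
    ceph daemon osd.0 config get osd_op_num_shards_hdd
    ceph daemon osd.0 config get osd_op_num_shards_ssd
    ceph daemon osd.0 config get osd_recovery_sleep_hdd
    ceph daemon osd.0 config get osd_recovery_sleep_ssd

Comparing that diff against the Luminous defaults should make it easier to
decide which of the old [osd] tunables (osd_op_threads, osd_disk_threads,
osd_recovery_max_active, etc.) can be dropped from ceph.conf after the
upgrade.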
