--- Begin Message ---
On 19 September 2025 14:00:18 CEST, Aaron Lauterer <[email protected]> 
wrote:
>thanks for the patch! see inline for comments
>
>On  2025-09-18  18:45, Alwin Antreich wrote:
>> Since bluestore, OSDs adhere to the osd_memory_target and the
>> recommended amount of memory was increased.
>> 
>> See: https://docs.ceph.com/en/reef/start/hardware-recommendations/#ram
>> 
>> Signed-off-by: Alwin Antreich <[email protected]>
>> ---
>>   pveceph.adoc | 16 ++++++++--------
>>   1 file changed, 8 insertions(+), 8 deletions(-)
>> 
>> diff --git a/pveceph.adoc b/pveceph.adoc
>> index 17efa4d..a2d71e7 100644
>> --- a/pveceph.adoc
>> +++ b/pveceph.adoc
>> @@ -131,14 +131,14 @@ carefully planned out and monitored. In addition to 
>> the predicted memory usage
>>   of virtual machines and containers, you must also account for having enough
>>   memory available for Ceph to provide excellent and stable performance.
>>   -As a rule of thumb, for roughly **1 TiB of data, 1 GiB of memory** will be used
>> -by an OSD. While the usage might be less under normal conditions, it will use
>> -most during critical operations like recovery, re-balancing or backfilling.
>> -That means that you should avoid maxing out your available memory already on
>> -normal operation, but rather leave some headroom to cope with outages.
>> -
>> -The OSD service itself will use additional memory. The Ceph BlueStore backend of
>> -the daemon requires by default **3-5 GiB of memory** (adjustable).
>> +While usage may be less under normal conditions, it will consume more memory
>> +during critical operations, such as recovery, rebalancing, or backfilling. That
>> +means you should avoid maxing out your available memory already on regular
>> +operation, but rather leave some headroom to cope with outages.
>> +
>> +The current recommendation is to configure at least **8 GiB of memory per OSD
>> +daemon** for good performance. The OSD daemon requires, by default, 4 GiB of
>> +memory.
>
>given how the latest Ceph docs phrase it [0], I am not sure here. They sound
>like the default osd_memory_target of 4G is okay, but that OSDs might use more
>in recovery situations and one should calculate with ~8G.
>
>So unless I understand that wrong, maybe we could phrase it more like the 
>following?
>===
>The current recommendation is to calculate with at least 8 GiB of memory per 
>OSD daemon to give it enough memory if needed. By default, the OSD daemon is 
>set to use up to 4 GiB of memory in normal scenarios.
>===
>
>If I understand it wrong and users should change the osd_memory_target to 8 
>GiB, we should document how, or maybe even try to make it configurable in the 
>GUI/API/pveceph...

I didn't want to clutter the cluster sizing text with configuration details.

The OSD daemon will adhere to the osd_memory_target, but since it isn't a hard
limit, the OSD may overshoot it by 10-20%, as buffers (and probably other
things) aren't accounted for. Unless auto-tuning is enabled, the
osd_memory_target should be adjusted to 8 GiB. The experience we have gathered
also shows that 8 GiB is worth it, especially when the cluster is degraded.
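
For context, a rough sketch of how that could be done with the ceph CLI,
assuming the cluster config database is used and auto-tuning is off (the value
is in bytes, and osd.0 is just a placeholder ID for illustration):

    # raise the default for all OSDs to 8 GiB (8 * 1024^3 bytes)
    ceph config set osd osd_memory_target 8589934592

    # or override a single daemon, e.g. osd.0
    ceph config set osd.0 osd_memory_target 8589934592

    # check the effective value
    ceph config get osd osd_memory_target

Whether something like that fits into the sizing section or rather into a
separate configuration note is a different question, of course.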

See inline



--- End Message ---
