- On Jun 28, 2024, at 15:27, Anthony D'Atri anthony.da...@gmail.com wrote:
>> $ cephadm shell ceph-volume inventory /dev/sdc --format json | jq
>> .sys_api.human_readable_size
>>
>> But this in a spec doesn't match it:
>>
>> size: '7000G:'
>>
>> This does:
>>
>> size: '6950G:'
>
> There definitely is some rounding within Ceph, and base 2 vs base 10
> shenanigans.
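For reference, the size filter being discussed lives in an OSD service (drivegroup) spec and is compared against the size that ceph-volume reports for the device, so a threshold sitting just above the reported value will silently fail to match. A minimal sketch of such a spec, with a hypothetical service name and placement (the same size syntax also works under db_devices):

service_type: osd
service_id: big_hdds          # hypothetical name
placement:
  host_pattern: '*'           # hypothetical placement
spec:
  data_devices:
    rotational: 1
    size: '6950G:'            # lower bound set a little below the size the
                              # device reports, to absorb rounding and
                              # base 2 vs base 10 differences

Checking .sys_api.size (the raw byte count in the same inventory output) alongside the human-readable string makes it easier to see how much headroom a given threshold really has.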
- On Jun 26, 2024, at 10:50, Torkil Svensgaard tor...@drcmr.dk wrote:
> On 26/06/2024 08:48, Torkil Svensgaard wrote:
>> Hi
>>
>> We have a bunch of HDD OSD hosts with DB/WAL on PCI NVMe, either 2 x
>> 3.2TB or 1 x 6.4TB. We used to have 4 SSDs per node for journals before
>> bluestore and those have been repurposed for an SSD pool (wear level is
>> fine).
Hello Torkil,
I didn't want to suggest using multiple OSD services from the start as you were
trying to avoid adding more.
Here, we've been using per-host OSD specs (listing hosts explicitly rather than
using a wildcard pattern), because buying new hardware over time made our
cluster more heterogeneous than before.
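As a rough illustration of that approach (the host names, service_id and filters below are made up, not Frédéric's actual specs), each spec pins itself to an explicit host list instead of a wildcard pattern:

service_type: osd
service_id: osd_hosts_gen1    # hypothetical: one spec per hardware generation
placement:
  hosts:
    - host1
    - host2                   # explicit host list instead of host_pattern: '*'
spec:
  data_devices:
    rotational: 1
  db_devices:
    rotational: 0

Each spec then only has to describe one hardware generation, which keeps the device filters simple as the cluster grows more heterogeneous.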
On 27-06-2024 10:56, Frédéric Nass wrote:
> Hi Torkil, Ruben,

Hi Frédéric

> I see two theoretical ways to do this without an additional OSD service. One
> that probably doesn't work :-) and another one that could work depending on
> how the orchestrator prioritizes its actions based on service criteria.
>
> The one that probably doesn't work is by specifying mul[...]
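Frédéric's first idea is cut off above, so the following is only a guess at the general shape it might take, not his actual proposal: a single OSD service whose db_devices filter is written broadly enough to match both NVMe layouts (2 x 3.2TB or 1 x 6.4TB) at once. All names and thresholds are hypothetical:

service_type: osd
service_id: hdd_osds          # hypothetical
placement:
  host_pattern: '*'
spec:
  data_devices:
    rotational: 1             # the HDDs
  db_devices:
    rotational: 0
    size: '3000G:'            # hypothetical lower bound: catches both NVMe sizes
                              # while leaving the smaller repurposed SSDs alone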
On 26/06/2024 08:48, Torkil Svensgaard wrote:
Hi
We have a bunch of HDD OSD hosts with DB/WAL on PCI NVMe, either 2 x
3.2TB or 1 x 6.4TB. We used to have 4 SSDs per node for journals before
bluestore and those have been repurposed for an SSD pool (wear level is
fine).
We've been using the [...]
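Torkil's message is cut off here, so his actual spec isn't shown. For the SSD pool he mentions (the repurposed journal SSDs carrying their own OSDs), a separate spec along these lines is one common way to keep those drives out of the DB/WAL role; again, every value is illustrative:

service_type: osd
service_id: ssd_pool          # hypothetical
placement:
  host_pattern: '*'
spec:
  data_devices:
    rotational: 0
    size: ':3000G'            # hypothetical upper bound so the large DB/WAL
                              # NVMe devices are not claimed by this spec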