Common motivations for this strategy include the lure of unit economics and
rack units (RUs).
Often ultra-dense servers can't fill racks anyway due to power and weight
limits.
Here the osd_memory_target would have to be severely reduced to avoid
OOM-killing. Assuming the OSDs are top-load LFF HDDs with e
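If it comes to that, a sketch of reducing it via the config store (the 3 GiB
value and the host name are purely illustrative, not from this thread):

    # drop the per-OSD memory target from the 4 GiB default to ~3 GiB
    ceph config set osd osd_memory_target 3221225472
    # or only for the dense nodes, using a host mask (hypothetical hostname)
    ceph config set osd/host:dense-node-01 osd_memory_target 3221225472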
I was thinking the same thing. Very small OSDs can behave unexpectedly because
of the relatively high percentage of overhead.
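As a rough way to see that overhead per OSD (a sketch; osd.0 is just an
example id):

    # the OMAP and META columns show how much of each OSD is metadata/DB
    ceph osd df
    # per-OSD breakdown of BlueFS/RocksDB space, via the admin socket
    ceph daemon osd.0 perf dump bluefs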
> On Nov 18, 2023, at 3:08 AM, Eugen Block wrote:
>
> Do you have a large block.db size defined in the ceph.conf (or config store)?
>
> Quoting Debian:
>
>> th
Hello Albert,
5 vs 3 MON => you won't notice any difference
5 vs 3 MGR => by default, only 1 will be active
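For illustration, a couple of commands to verify that (a sketch, assuming a
cephadm-managed cluster for the last one):

    # shows the one active mgr and the number of standbys
    ceph mgr stat
    # current monitor membership and quorum
    ceph mon stat
    # scale the mon service to 5 daemons if you do want 5
    ceph orch apply mon 5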
On Sat, Nov 18, 2023 at 09:28, Albert Shih wrote:
> On 17/11/2023 at 11:23:49+0100, David C. wrote:
>
> Hi,
>
> >
> > 5 instead of 3 mon will allow you to limit the impact if you break
On 17/11/2023 at 11:23:49+0100, David C. wrote:
Hi,
>
> 5 instead of 3 mon will allow you to limit the impact if you break a mon (for
> example, with the file system full)
>
> > 5 instead of 3 MDS makes sense if the workload can be distributed over
> > several trees in your file system. Some
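A minimal sketch of that multi-MDS layout, assuming a file system named
cephfs mounted at /mnt/cephfs (both names hypothetical):

    # allow two active MDS ranks
    ceph fs set cephfs max_mds 2
    # pin each top-level tree to a rank so the load really is split
    setfattr -n ceph.dir.pin -v 0 /mnt/cephfs/tree-a
    setfattr -n ceph.dir.pin -v 1 /mnt/cephfs/tree-b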
On 18/11/2023 at 02:31:22+0100, Simon Kepp wrote:
Hi,
> I know that your question is regarding the service servers, but may I ask why
> you are planning to place so many OSDs (300) on so few OSD hosts (6) (= 50
> OSDs per node)?
> This is possible to do, but sounds like the nodes were designed
Do you have a large block.db size defined in the ceph.conf (or config store)?
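A few ways to check (a sketch; osd.0 and the config path are examples):

    # what the config store reports for the option
    ceph config get osd bluestore_block_db_size
    # anything matching in a local ceph.conf
    grep -i bluestore_block_db /etc/ceph/ceph.conf
    # what a running OSD actually uses
    ceph daemon osd.0 config show | grep bluestore_block_db_size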
Quoting Debian:
Thanks for your reply. It shows nothing; there are no PGs on the OSD.
best regards
On 17.11.23 23:09, Eugen Block wrote:
After you create the OSD, run ‚ceph pg ls-by-osd {OSD}‘; it should
show which PGs are mapped to that OSD.
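For reference, a usage sketch with a hypothetical OSD id:

    # list the PGs mapped to that OSD, with their pool and state
    ceph pg ls-by-osd osd.0
    # if the list is empty, checking the OSD's weight in the tree may help
    ceph osd tree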