> On 23.02.24 16:18, Christian Rohmann wrote:
> > I just noticed issues with ceph-crash using the Debian /Ubuntu
> > packages (package: ceph-base):
> >
> > While the /var/lib/ceph/crash/posted folder is created by the package
> > install,
> > it's not properly chowned to ceph:ceph by the postinst script.
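
A minimal sketch of the manual workaround, assuming the default
/var/lib/ceph/crash layout and that the ceph user and group exist:

#!/usr/bin/env python3
# Sketch: recursively hand ownership of /var/lib/ceph/crash (and its
# posted/ subdirectory) to ceph:ceph, which is what the postinst script
# is expected to do. Run as root; the path and names are packaging defaults.
import os
import pwd
import grp

CRASH_DIR = "/var/lib/ceph/crash"
uid = pwd.getpwnam("ceph").pw_uid
gid = grp.getgrnam("ceph").gr_gid

os.chown(CRASH_DIR, uid, gid)
for root, dirs, files in os.walk(CRASH_DIR):
    for name in dirs + files:
        os.chown(os.path.join(root, name), uid, gid)

Equivalently, a plain "chown -R ceph:ceph /var/lib/ceph/crash" from a root
shell does the same thing.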

On Mar 2, 2024, at 10:37 AM, Erich Weiler wrote:

Hi Y'all,

We have a new ceph cluster online that looks like this:

md-01 : monitor, manager, mds
md-02 : monitor, manager, mds
md-03 : monitor, manager
store-01 : twenty 30TB NVMe OSDs
store-02 : twenty 30TB NVMe OSDs

The cephfs storage is using erasure coding at 4:2. The crush domain is
set t
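
For reference, a 4:2 erasure-coded CephFS data pool is normally built from
an erasure-code profile roughly like the sketch below. The profile and pool
names are placeholders, and the crush-failure-domain value has to match
whatever crush domain the cluster actually uses.

#!/usr/bin/env python3
# Sketch: create a k=4, m=2 erasure-code profile and a CephFS data pool on
# it. Names and the failure domain are illustrative, not taken from the
# cluster described above.
import subprocess

def ceph(*args):
    subprocess.run(["ceph", *args], check=True)

ceph("osd", "erasure-code-profile", "set", "ec42",
     "k=4", "m=2", "crush-failure-domain=host")
ceph("osd", "pool", "create", "cephfs_data_ec", "erasure", "ec42")
# EC overwrites must be enabled before CephFS (or RBD) can use the pool.
ceph("osd", "pool", "set", "cephfs_data_ec", "allow_ec_overwrites", "true")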
Periodic discard was actually attempted in the past:
https://github.com/ceph/ceph/pull/20723
A proper implementation would probably need appropriate
scheduling/throttling that can be tuned so as to balance against
client I/O impact.
Josh
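
To illustrate the kind of throttling that would need tuning, here is a
standalone sketch of the concept only (not Ceph code; the rate constants
and the issue_discard() stand-in are invented for the example): queued
discard extents are released to the device at a bounded average rate so
client I/O is not starved.

#!/usr/bin/env python3
# Illustrative sketch: drain queued (offset, length) discard extents at a
# bounded average rate. Constants and issue_discard() are made up.
import collections
import time

MAX_BYTES_PER_SEC = 64 * 1024 * 1024   # tunable throttle
BATCH_BYTES = 4 * 1024 * 1024          # bytes released per wakeup

def issue_discard(offset, length):
    # Stand-in for whatever actually sends the discard to the device.
    print(f"discard {length} bytes @ {offset}")

def drain(queue):
    while queue:
        released = 0
        while queue and released < BATCH_BYTES:
            offset, length = queue.popleft()
            issue_discard(offset, length)
            released += length
        # Sleep long enough that the average rate stays under the cap.
        time.sleep(released / MAX_BYTES_PER_SEC)

if __name__ == "__main__":
    pending = collections.deque((i * 8 << 20, 8 << 20) for i in range(16))
    drain(pending)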
On Sat, Mar 2, 2024 at 6:20 AM David C. wrote:
Could we not consider setting up a “bluefstrim” that could be orchestrated?
This would avoid a continuous stream of (D)iscard instructions hitting the
disks during activity.
A weekly (or even monthly) bluefstrim would probably be enough for the
platforms that really need it.
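
Since nothing like a bluefstrim primitive exists yet, the closest
orchestration with the existing knobs might look like the sketch below.
The window length is arbitrary, and whether the options apply without an
OSD restart depends on the release, so treat this purely as a concept.

#!/usr/bin/env python3
# Sketch: open a discard "window" by enabling the BlueStore options
# discussed in this thread, then close it again. Not a tested procedure.
import subprocess
import time

WINDOW_SECONDS = 2 * 60 * 60   # e.g. a two-hour weekly maintenance slot

def osd_config_set(option, value):
    subprocess.run(["ceph", "config", "set", "osd", option, value], check=True)

osd_config_set("bdev_enable_discard", "true")
osd_config_set("bdev_async_discard", "true")
try:
    time.sleep(WINDOW_SECONDS)
finally:
    osd_config_set("bdev_enable_discard", "false")
    osd_config_set("bdev_async_discard", "false")

Note that this only discards extents freed while the window is open; it
does not bulk-trim space that was already free, which is exactly the gap a
real bluefstrim would fill.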
We have a specific set of drives for which we've had to enable
bdev_enable_discard and bdev_async_discard in order to maintain acceptable
performance on block clusters. I wrote the patch that Igor mentioned in
order to try to send more parallel discards to the devices, but these ones
in parti
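
For the record, scoping those two options to just the OSDs backed by the
affected drives can be done through the centralized config. A sketch (the
OSD IDs are placeholders, and a restart may still be needed for the change
to take effect):

#!/usr/bin/env python3
# Sketch: turn on discard only for the OSDs that sit on the drives that
# need it. The OSD IDs are placeholders; adjust to the actual mapping.
import subprocess

OSDS_NEEDING_DISCARD = ["osd.12", "osd.13"]
OPTIONS = ("bdev_enable_discard", "bdev_async_discard")

for osd in OSDS_NEEDING_DISCARD:
    for option in OPTIONS:
        subprocess.run(["ceph", "config", "set", osd, option, "true"], check=True)
        # Read the value back so the change is visible.
        subprocess.run(["ceph", "config", "get", osd, option], check=True)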
I came across an enterprise NVMe used for the BlueFS DB whose performance
dropped sharply a few months after delivery (I won't mention the brand
here, but it was not one of these three: Intel, Samsung, Micron).
It is clear that enabling bdev_enable_discard impacted performance, but
this option also saved