I have a number of drives in my fleet with old firmware that seems to have
discard / TRIM bugs, as in the drives get bricked.
Much worse: since they're behind legacy RAID HBAs, many of them can't have
their firmware updated.
ymmv.
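
For anyone checking their own hardware before enabling discard: a quick way to
see whether a device (and whatever HBA sits in front of it) even advertises
discard support on Linux is something like the following; /dev/sdX is just a
placeholder here.

  # non-zero DISC-GRAN / DISC-MAX means the kernel will pass discards through
  lsblk --discard /dev/sdX
  # firmware version, to compare against the vendor's known TRIM errata
  smartctl -i /dev/sdX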
> On Mar 1, 2024, at 13:15, Igor Fedotov wrote:
>
I played with this feature a while ago and recall it had a visible
negative impact on user operations due to the need to submit tons of
discard operations - effectively each data overwrite triggers the
submission of one or more discard operations to disk.
And I doubt this has been widely used.
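
For context, the option being discussed is BlueStore's bdev_enable_discard
(together with bdev_async_discard, which moves discard submission off the write
path). A minimal sketch of turning it on, assuming a release where these option
names still apply (they have changed between versions), with osd.0 just as an
example daemon:

  # enable discard submission on OSD block devices (off by default)
  ceph config set osd bdev_enable_discard true
  # queue discards asynchronously instead of inline with writes
  ceph config set osd bdev_async_discard true
  # check what a given OSD actually picked up (may need an OSD restart)
  ceph config show osd.0 bdev_enable_discard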
Hi,
I'll try the 'ceph mgr fail' and report back.
In the meantime, here is my problem with the images.
I am trying to use my local registry to deploy the different services, but I
don't know how to use 'apply' in a way that forces my cluster to use my local
registry.
So basically, what I am doing so far is:
1 -
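
For what it's worth, here is a sketch of how I understand pointing cephadm at a
local registry; the registry URL, credentials and image tags below are
placeholders, so adjust them before applying.

  # log the hosts in to the local registry, if it requires auth
  ceph cephadm registry-login myregistry.local:5000 myuser mypassword
  # pull the main ceph image from the local registry from now on
  ceph config set global container_image myregistry.local:5000/ceph/ceph:v18.2.2
  # monitoring images have their own settings, for example:
  ceph config set mgr mgr/cephadm/container_image_prometheus myregistry.local:5000/prometheus/prometheus:v2.43.0
  # then (re)apply the service specs as usual
  ceph orch apply -i myservice.yml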
There have been bugs in the past where things have gotten "stuck". Usually
I'd say check the REFRESHED column in the output of `ceph orch ps`. It
should refresh the daemons on each host roughly every 10 minutes, so if you
see some value much larger than that, things are probably actually stuck.
If
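
In case it helps, these are the checks I would run; nothing below is specific
to one cluster:

  # look at the REFRESHED column; values well past ~10 minutes are suspect
  ceph orch ps
  # if it does look stuck, fail over to a standby mgr so cephadm starts fresh
  ceph mgr fail
  # force a refresh of the host/device inventory
  ceph orch device ls --refresh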
Not really; unfortunately the cache eviction fails for some rbd
objects that still have some "lock" on them. Right now we need to
understand why the eviction fails on these objects and find a solution
to get the cache eviction fully working. I will provide more
information later on.
If you have any p
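
As a starting point for that investigation, this is how one could inspect the
watchers/locks on the objects that refuse to evict; pool, object and image
names below are placeholders.

  # see which clients still watch the RADOS object in the cache pool
  rados -p cache-pool listwatchers rbd_header.123456789abc
  # look at it from the RBD side
  rbd lock ls images/myimage
  # a stale lock can be removed once the lock id and locker are known
  rbd lock rm images/myimage <lock-id> <locker>
  # then retry flushing/evicting the cache tier
  rados -p cache-pool cache-flush-evict-all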
Is there any update on this? Has anyone tested the option and obtained
performance numbers from before and after?
Is there any good documentation regarding this option?
Hi,
I finished the conversion from ceph-ansible to cephadm yesterday.
Everything seemed to be working until this morning, when I wanted to redeploy
the rgw service to specify the network to be used.
So I deleted the rgw services with ceph orch rm, then I prepared a yml file
with the new conf. I appli
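
For reference, a minimal rgw spec that pins the network could look like the
following; the service_id, hosts, CIDR and port are placeholders, applied with
'ceph orch apply -i rgw.yml'.

  service_type: rgw
  service_id: myrealm.myzone
  placement:
    hosts:
      - host1
      - host2
  networks:
    - 192.168.10.0/24
  spec:
    rgw_frontend_port: 8080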