Hi Stefan,
thanks for that hint. We use xfs on a dedicated RAID array for the MON stores.
I'm not sure if I have seen elections caused by trimming, I will keep an eye on
it.
Best regards,
Frank Schilder
AIT Risø Campus
Bygning 109, rum S14
Hi,
Yes, you can rename a node without massive rebalancing.
I tested the following with Pacific, but I think it should work with
older versions as well.
You need to rename the node in the CRUSH map between shutting down the
node under its old name and starting it with the new name.
You only must k
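A minimal sketch of the renaming step described above. The hostnames are placeholders, and the systemd target name may differ depending on how the cluster was deployed; `ceph osd crush rename-bucket` is the command that renames the host bucket in the CRUSH map without moving data:

```shell
# 1. On the node itself: stop all Ceph daemons before renaming
systemctl stop ceph.target

# 2. From an admin node: rename the host bucket in the CRUSH map
#    ("oldhost" and "newhost" are placeholder hostnames)
ceph osd crush rename-bucket oldhost newhost

# 3. Change the OS hostname to "newhost", then start the daemons again
systemctl start ceph.target
```

Because the bucket keeps its position and weight in the CRUSH hierarchy, only its name changes, so no rebalancing is triggered.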
Hi Mike,
I have two questions about Cephalocon 2023.
1. Will this event be held on-site only (no virtual platform)?
2. Will the session recordings be available on YouTube, as with other Ceph events?
Thanks,
Satoru
___
ceph-users mailing list -- ceph-users@c
Hello,
After upgrading a lot of iDRAC9 modules to version 6.10 in servers that are
part of a Ceph cluster, we noticed that iDRAC9 shows the write endurance
as 0% on any non-certified disk.
OMSA still shows the correct remaining write endurance, but I am assuming that
they are working fev
On Tue, Feb 14, 2023 at 04:00:30PM +, Drew Weaver wrote:
> What are you folks using to monitor your write endurance on your SSDs that
> you couldn't buy from Dell because they had a 16 week lead time while the MFG
> could deliver the drives in 3 days?
Our Ceph servers are SuperMicro, not Dell
A bug was reported recently where, if a put object occurs while bucket resharding
is finishing up, the write goes to the old bucket shard rather than the new
one. Your logs show evidence that resharding was underway alongside the
put object.
A fix for that bug is on main and pacific, and
That is pretty awesome; I will look into doing it that way. All of our
monitoring is integrated with the very expensive iDRAC Enterprise license
we pay for (my fault for trusting Dell).
We are looking for a new hardware vendor, but this will likely work around the
mistake we already made.
Th
We are happy to announce another release of the go-ceph API library. This is a
regular release following our every-two-months release cadence.
https://github.com/ceph/go-ceph/releases/tag/v0.20.0
Changes include additions to the rbd, rgw and cephfs packages. More details
are available at the link.
Hi,
You can use smartctl_exporter [1] for all your media, not only the SSD
k
[1] https://github.com/prometheus-community/smartctl_exporter
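smartctl_exporter scrapes this data continuously for Prometheus; for an illustrative one-off check of NVMe wear, the same information is available from smartctl's JSON output (assuming smartctl 7.x with JSON support and jq installed; the device path is a placeholder):

```shell
# Print the NVMe "percentage used" wear indicator (0 = new, 100 = rated
# endurance consumed) from smartctl's JSON output.
smartctl --json -a /dev/nvme0n1 \
  | jq '.nvme_smart_health_information_log.percentage_used'
```

For SATA SSDs the relevant vendor attribute names vary (e.g. wear-leveling counters), which is part of why an exporter that normalizes them is convenient.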
Sent from my iPhone
> On 14 Feb 2023, at 23:01, Drew Weaver wrote:
> Hello,
>
> After upgrading a lot of iDRAC9 modules to version 6.10 in servers tha