Kinda what he said, but I use Zabbix.
https://docs.ceph.com/en/latest/mgr/zabbix/
On Tue, Jun 18, 2024 at 11:53 AM Anthony D'Atri
wrote:
> I don't, I have the fleetwide monitoring / observability systems query
> ceph_exporter and a fleetwide node_exporter instance on 9101. ymmv.
>
>
> > On Jun
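If you want to sanity-check that those exporters are actually answering, something along these lines works; the hostname is a placeholder, and 9101 is the node_exporter port mentioned above (the ceph_exporter port depends on how it was deployed):

# count node_exporter metrics coming back from the node
curl -s http://ceph-node.example.com:9101/metrics | grep -c '^node_'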
retrofitting the guts of a Dell PE R7xx server is not straightforward. You
could be looking into replacing the motherboard, the backplane, and so
forth.
You can probably convert the H755N card to present the drives to the OS, so
you can use them for Ceph. This may be called AHCI mode, pass-through mode,
or non-RAID mode, depending on the controller and firmware.
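Once the controller is presenting the drives to the OS, a rough check that Ceph can use them (assuming ceph-volume is installed on the node; device output will obviously differ):

lsblk -d -o NAME,MODEL,SIZE,ROTA,TYPE   # drives should show up as plain disks, not RAID virtual disks
ceph-volume inventory                   # lists devices ceph-volume considers available for OSDs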
the documentation's guidelines and
> then something goes terribly wrong.
>
> Thanks again,
> -Drew
>
>
> -----Original Message-----
> From: Anthony D'Atri
> Sent: Thursday, July 11, 2024 7:24 PM
> To: Drew Weaver
> Cc: John Jasen ; ceph-users@ceph.io
> Subject:
cluster_network is an optional add-on to handle some of the internal ceph
traffic. Your mon address needs to be accessible/routable for anything
outside your ceph cluster that wants to consume it. That should also be in
your public_network range.
I stumbled over this a few times in figuring out how to configure it.
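As a sketch of how the split usually ends up looking in ceph.conf (the subnets here are made up), with the mons sitting in the public range:

[global]
public_network  = 192.0.2.0/24       # clients, mons, mgrs -- must be reachable by consumers
cluster_network = 198.51.100.0/24    # optional; OSD replication/heartbeat traffic only
mon_host        = 192.0.2.11, 192.0.2.12, 192.0.2.13   # mon addresses sit in the public_network range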
If the documentation is to be believed, it's just: install the zabbix
sender, then:
ceph mgr module enable zabbix
ceph zabbix config-set zabbix_host my-zabbix-server
(Optional) Set the identifier to the fsid.
And poof. I should now have a discovered entity on my zabbix server to add
templates to.
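Spelled out, including the optional identifier step (the server name is a placeholder, and this assumes zabbix_sender is already installed on the active mgr host):

ceph mgr module enable zabbix
ceph zabbix config-set zabbix_host my-zabbix-server
ceph zabbix config-set identifier $(ceph fsid)   # optional: use the cluster fsid as the host identifier
ceph zabbix config-show                          # verify what the module will use
ceph zabbix send                                 # push data now rather than waiting for the interval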
/ceph-container/issues/1651
As such, may I recommend marking the Ceph documentation to this effect?
Possibly by referring readers to the Zabbix Agent 2 instructions?
On Fri, Mar 22, 2024 at 7:04 PM John Jasen wrote:
> If the documentation is to be believed, it's just install the zabbix
>
Ceph version 17.2.6
After a power loss event affecting my ceph cluster, I've been putting
humpty dumpty back together since.
One problem I face is that with objects degraded, rebalancing doesn't run
-- and this has resulted in several of my fast OSDs filling up.
I have 8 OSDs currently down, 100% full.
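For anyone following along, the usual first look at where the space went:

ceph -s              # overall health, degraded/misplaced object counts
ceph health detail   # which OSDs are nearfull/backfillfull/full and which PGs are stuck
ceph osd df tree     # per-OSD utilization grouped by host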
that, see
>
> https://docs.ceph.com/en/quincy/ceph-volume/lvm/newdb/
>
>
> Thanks,
>
> Igor
>
> On 25.11.2024 21:37, John Jasen wrote:
> > Ceph version 17.2.6
> >
> > After a power loss event affecting my ceph cluster, I've been putting
> > humpty dumpty back together since.
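Per the ceph-volume doc linked above, attaching a fresh LV as a new DB device looks roughly like this (OSD id, fsid, and VG/LV names are placeholders, and the OSD needs to be stopped first):

systemctl stop ceph-osd@1
ceph-volume lvm new-db --osd-id 1 --osd-fsid <osd-fsid> --target new_vg/new_db
systemctl start ceph-osd@1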
attach new LV (provided by
> user) to specific OSD.
>
>
> Thanks,
>
> Igor
>
>
> On 26.11.2024 19:16, John Jasen wrote:
>
> They're all bluefs_single_shared_device, if I understand your question.
> There's no room left on the devices to expand.
>
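For reference, whether an OSD was deployed with DB and data sharing one device shows up in its metadata; a quick check (the OSD id is an example):

ceph osd metadata 7 | grep -i bluefs   # should include bluefs_single_shared_device and related fields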
reached or not? If so, then increasing
> these thresholds by 1-2% may help avoid the crash, no?
>
> Also, if BlueFS is aware of these thresholds, shouldn't an OSD be able to
> start and live without crashing even when it's full, and simply (maybe
> easier said than done...) refuse writes?
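If the thresholds in question are the cluster-wide full ratios, nudging them up a point or two looks like this (values are examples only, and this is a temporary escape hatch rather than a fix):

ceph osd dump | grep -i ratio      # current nearfull/backfillfull/full ratios
ceph osd set-nearfull-ratio 0.87
ceph osd set-backfillfull-ratio 0.92
ceph osd set-full-ratio 0.97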