Hello,
On this topic, I was trying to use Zabbix for alerting. Is there a way
to keep the API key used by the dashboard from expiring after a set period?
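For what it's worth, rather than trying to make the token permanent, a poller can simply re-authenticate when the token lapses. Below is a minimal Python sketch, assuming the dashboard REST API's /api/auth login endpoint; the host, port, and credentials are placeholders, and I believe the default lifetime can also be raised with `ceph dashboard set-jwt-token-ttl <seconds>` (please verify on your release).

import requests

DASHBOARD = "https://ceph-mgr.example:8443"             # placeholder URL
CREDS = {"username": "zabbix", "password": "secret"}     # placeholder creds
ACCEPT = {"Accept": "application/vnd.ceph.api.v1.0+json"}

def login() -> str:
    # POST /api/auth returns a JWT that later requests send as a Bearer token.
    r = requests.post(f"{DASHBOARD}/api/auth", json=CREDS,
                      headers=ACCEPT, verify=False)
    r.raise_for_status()
    return r.json()["token"]

def get(path: str, token: str):
    # Fetch an endpoint; on 401 (expired token) log in again and retry once.
    hdrs = {**ACCEPT, "Authorization": f"Bearer {token}"}
    r = requests.get(f"{DASHBOARD}{path}", headers=hdrs, verify=False)
    if r.status_code == 401:
        hdrs["Authorization"] = f"Bearer {login()}"
        r = requests.get(f"{DASHBOARD}{path}", headers=hdrs, verify=False)
    r.raise_for_status()
    return r.json()

if __name__ == "__main__":
    print(get("/api/health/minimal", login()))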
Regards,
Adam
On 6/18/24 09:12, Anthony D'Atri wrote:
I don't; I have the fleetwide monitoring / observability systems query
ceph_exporter and
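A minimal sketch of that polling pattern, for what it's worth: scrape the Prometheus text endpoint (the mgr prometheus module serves one on port 9283 by default; a standalone exporter works the same way) and feed the values to Zabbix or whatever you run. The host below is a placeholder.

import urllib.request

METRICS_URL = "http://ceph-mgr.example:9283/metrics"     # placeholder host

def scrape(url: str = METRICS_URL) -> dict:
    # Return {metric_name: value} for the simple, unlabelled series.
    metrics = {}
    with urllib.request.urlopen(url, timeout=10) as resp:
        for line in resp.read().decode().splitlines():
            if line.startswith("#") or " " not in line:
                continue                       # skip comments / malformed lines
            name, value = line.rsplit(" ", 1)
            if "{" not in name:                # ignore labelled series for brevity
                try:
                    metrics[name] = float(value)
                except ValueError:
                    pass
    return metrics

if __name__ == "__main__":
    # ceph_health_status is 0 for HEALTH_OK, 1 for WARN, 2 for ERR (mgr module).
    print("health:", scrape().get("ceph_health_status"))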
Hello,
I have a single-node host with a VM as a backup MON, MGR, etc.
This has caused all OSDs to be pending as 'deleting'; can I safely
cancel this deletion request?
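In case it helps anyone reading later: assuming the removals were queued through the cephadm orchestrator, `ceph orch osd rm status` should list them and `ceph orch osd rm stop <id>` should cancel one. A rough Python wrapper follows; the command names are as I remember them, so verify against your release before relying on it.

import json
import subprocess

def ceph(*args: str) -> str:
    # Run a ceph CLI command and return its stdout.
    return subprocess.run(["ceph", *args], check=True,
                          capture_output=True, text=True).stdout

def pending_removals():
    # Removals the orchestrator has queued (the ones showing as 'deleting').
    return json.loads(ceph("orch", "osd", "rm", "status", "--format", "json"))

def cancel_removal(osd_id: int) -> None:
    # Ask the orchestrator to stop removing the given OSD.
    print(ceph("orch", "osd", "rm", "stop", str(osd_id)))

if __name__ == "__main__":
    for entry in pending_removals():
        print("queued for removal:", entry)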
Regards,
Adam
Hello,
Do slow ops impact data integrity, or can I generally ignore them? I'm
loading 3 hosts over a 10GB link and it's saturating the disks or the OSDs.
2024-04-05T15:33:10.625922+ mon.CEPHADM-1 [WRN] Health check
update: 3 slow ops, oldest one blocked for 117 sec, daemons
[osd.0,osd.
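Not an answer to the integrity question, but a small sketch of how I'd keep an eye on that warning while the load runs: poll `ceph health detail` as JSON and print the SLOW_OPS summary. The field names follow the health-check JSON as I understand it, so check them on your release.

import json
import subprocess
import time

def health_checks() -> dict:
    out = subprocess.run(["ceph", "health", "detail", "--format", "json"],
                         check=True, capture_output=True, text=True).stdout
    return json.loads(out).get("checks", {})

if __name__ == "__main__":
    while True:
        slow = health_checks().get("SLOW_OPS")
        if slow:
            # e.g. "3 slow ops, oldest one blocked for 117 sec, ..."
            print(slow["summary"]["message"])
        else:
            print("no SLOW_OPS health check active")
        time.sleep(30)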
Hello,
You will want to do this over WireGuard. From experience, though, IOPS will
be brutal, like 200 IOPS.
WireGuard has a few benefits, notably:
- Higher rate of transfer per CPU load.
- State-of-the-art protocols, as opposed to some of the more legacy
systems.
- Extremely
Hello,
To save on power in my home lab, can I have a single-node Ceph cluster
sit idle and powered off for 3 months at a time, then boot it only to
refresh backups? Or will this cause issues I'm unaware of? I'm aware
deep-scrubbing will not happen while it's off; it would be done during the
boot-up period ever
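A rough sketch of that scrub-during-the-boot-window idea, assuming the stock CLI: queue deep scrubs on all OSDs after power-up with `ceph osd deep-scrub all`, then poll PG states until nothing is still scrubbing. The commands and JSON fields are as I recall them, so double-check against your release.

import json
import subprocess
import time

def ceph_json(*args: str):
    out = subprocess.run(["ceph", *args, "--format", "json"],
                         check=True, capture_output=True, text=True).stdout
    return json.loads(out)

def scrubbing_pgs() -> int:
    # Count PGs whose state still mentions scrubbing.
    states = ceph_json("status")["pgmap"].get("pgs_by_state", [])
    return sum(s["count"] for s in states if "scrub" in s["state_name"])

if __name__ == "__main__":
    subprocess.run(["ceph", "osd", "deep-scrub", "all"], check=True)
    time.sleep(30)                       # scrubs are queued per OSD; let them start
    while (n := scrubbing_pgs()) > 0:
        print(f"{n} PGs still scrubbing ...")
        time.sleep(60)
    print("no PGs scrubbing; backups can resume / node can power back down")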
Hello,
It's all non-corporate data; I'm just trying to cut back on wattage
(removing around 450 W of the 2.4 kW) by powering down backup servers that
house 208 TB while they're not backing up or restoring.
ZFS sounds interesting; however, does it play nice with a mix of drive
sizes? That's primarily
However, all the drives are identical.
I'm curious about this application of Ceph in home-lab use, though.
Performance likely isn't a top concern, just a durable persistent
storage target, so this is an interesting use case.
On 2024-05-21 17:02, adam.ther wrote: