Hi folks,
We are using Ceph as the storage backend for our 6-node Proxmox VM cluster. To
monitor our systems we use Zabbix, and I would like to get some Ceph data into
Zabbix so we get alarms when something goes wrong.
Ceph mgr has a module, "zabbix", that uses "zabbix_sender" to actively send
data to a Zabbix server.
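(For reference, enabling and pointing the mgr zabbix module at a Zabbix
server looks roughly like this; the hostname and identifier below are
placeholders, not values from this thread:)

  ceph mgr module enable zabbix
  ceph zabbix config-set zabbix_host zabbix.example.com   # placeholder server
  ceph zabbix config-set identifier ceph-cluster          # placeholder identifier
  ceph zabbix config-show                                 # verify the settings
  ceph zabbix send                                        # push data once as a test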
Hi Konstantin,
we noticed that the slow_used_bytes values were slowly decreasing for
each OSD without any interaction, but today I decided to speed things
up. Issuing 'ceph daemon osd.<id> compact' reduced the value to 0 for
all OSDs.
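(As a rough sketch of how to check this, assuming access to the admin
socket on the OSD host; osd.0 is a placeholder id:)

  # show the perf counters, including bluefs slow_used_bytes, for one OSD
  ceph daemon osd.0 perf dump | grep slow_used_bytes
  # trigger an online RocksDB compaction on that OSD
  ceph daemon osd.0 compact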
Thank you!
Regards,
Eugen
Quoting Konstantin Shalygin:
Hi all,
I'm evaluating CephFS to serve our business as a file share that spans
our 3 datacenters. One concern I have is that when using CephFS with
OpenStack Manila, all guest VMs need access to the public storage
network. This feels like a security concern to me. I've seen one
suggestion
SELinux problem perhaps?
On Mon, Oct 7, 2019, 3:16 AM wrote:
> Hi folks,
>
> We are using Ceph as the storage backend for our 6-node Proxmox VM
> cluster. To monitor our systems we use Zabbix, and I would like to get
> some Ceph data into Zabbix so we get alarms when something goes wrong.
>
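(If SELinux turns out to be the culprit, a quick generic check, not
specific to this thread, would be something like:)

  getenforce                    # is SELinux enforcing?
  ausearch -m AVC -ts recent    # look for denials mentioning zabbix_sender or ceph-mgr
  setenforce 0                  # switch to permissive temporarily, for testing only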
Hi Yordan,
this is Mimic documentation and those snippets aren't valid for Nautilus
any more. They are still present in the Nautilus pages, though.
Going to create a corresponding ticket to fix that.
The relevant Nautilus changes for the 'ceph df [detail]' command can be
found in the Nautilus release notes.
From the logs, it sounds like the Ceph side is all working but zabbix_sender
is failing for some reason. Try running zabbix_sender manually and see whether
it works. See https://www.zabbix.com/documentation/4.2/manual/concepts/sender
for an example of how to do that. Also, make sure you
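(A manual test could look like the following; the server, host name and
item key are placeholders and must match what is configured in Zabbix:)

  zabbix_sender -vv -z zabbix.example.com -p 10051 \
      -s "ceph-cluster" -k ceph.test -o 1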
> If the journal is no longer readable: the safe variant is to
> completely re-create the OSDs after replacing the journal disk. (The
> unsafe way to go is to just skip the --flush-journal part, not
> recommended.)
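(Not from the thread, but the safe variant quoted above would roughly be,
per affected OSD, with placeholder ids and devices:)

  ceph osd out 12                            # drain data off the OSD
  # wait for the cluster to rebalance and return to HEALTH_OK
  systemctl stop ceph-osd@12
  ceph osd purge 12 --yes-i-really-mean-it   # remove it from the cluster
  # replace the journal disk, then re-create the OSD, e.g. with ceph-volume:
  ceph-volume lvm create --filestore --data /dev/sdX --journal /dev/sdY1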
Hello Paul,
thanks for your reply. We have replaced the journal disk.
Last week we were