Hi,
I'm sorry, I was searching the wrong way, but now things are working well.
Thank you.
Best regards,
Paulo Carvalho
- Original message -
From: Paulo Carvalho
To: ceph-users@ceph.io
Subject: iSCSI HA (ALUA): Single disk image shared by multiple iSCSI gateways
Date: Thu, 29 Jul 2021
Hi,
we have enabled Cluster → Monitoring in the Dashboard. Some of the
regularly shown messages are not really useful for us (packet drops
in OVS) and we want to suppress them. Creating a silence does not
help, because the messages still appear, only in blue instead of red.
Is there a way to suppress them?
Hi,
you can disable or modify the configured alerts in:
/var/lib/ceph//etc/prometheus/alerting/ceph_alerts.yml
After restarting the container, those changes should be applied.
Regards,
Eugen
Quoting E Taka <0eta...@gmail.com>:
Hi,
we have enabled Cluster → Monitoring in the Dashboard. S
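A minimal sketch of the step described above, assuming PyYAML is installed and that the file follows the usual Prometheus rule-group layout; the alert name and the <fsid> path component are placeholders that would need to be adapted to the cluster:

#!/usr/bin/env python3
# Sketch: drop unwanted alert rules from the cephadm-managed Prometheus
# rules file, then restart the prometheus container so the change takes effect.
# The alert name below is only an example; check the file for the exact names.
import yaml  # assumes PyYAML is installed

RULES_FILE = "/var/lib/ceph/<fsid>/etc/prometheus/alerting/ceph_alerts.yml"
UNWANTED = {"CephNodeNetworkPacketDrops"}  # example name, adjust as needed

with open(RULES_FILE) as f:
    doc = yaml.safe_load(f)

# Standard Prometheus layout: groups -> rules -> one entry per alert.
for group in doc.get("groups", []):
    group["rules"] = [r for r in group.get("rules", [])
                      if r.get("alert") not in UNWANTED]

with open(RULES_FILE, "w") as f:
    yaml.safe_dump(doc, f, default_flow_style=False)

Editing the file by hand works just as well; the point is that a silence only changes how the dashboard displays the alert, while changing or removing the rule in that file stops it from firing at all.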
Hi Shain,
Thanks for the update. I didn't find any screenshot in your previous email
(maybe the list server removed it). Just for tracking purposes, and for
other users hitting this same issue, would you mind creating a tracker
here (https://tracker.ceph.com/projects/dashboard/issues/new) an
Hi all!
We are facing strange behavior on two clusters we have at work (both v15.2.9
/ CentOS 7.9):
* In the 1st cluster we are getting errors about multiple degraded pgs, and
all of them are linked to a "rogue" osd whose ID is very large (such as
"osd.2147483647"). This osd doesn't show wi
On Fri, 30 Jul 2021 at 15:22, Thierry MARTIN wrote:
> Hi all!
> We are facing strange behavior on two clusters we have at work (both
> v15.2.9 / CentOS 7.9):
> * In the 1st cluster we are getting errors about multiple degraded pgs,
> and all of them are linked to a "rogue" osd whose ID
Hi people,
I am trying to create a multi-zone-group setup (as described here:
https://docs.ceph.com/en/latest/radosgw/multisite/),
but I simply fail.
I just created a test cluster to experiment with it, and nothing I try works.
Is there a howto available?
I don't want to get a multi-zone setup,
Hello,
I'm seeking some community opinions about the stability of Cephadm on a
recent Ceph release, like v16.2.5.
Cephadm looks like a more streamlined and quicker initial deployment
process, but I'd like to hear thoughts from someone who has lived with it
for some time. Additionally, I see less
Hi,
I have two sites with OpenStack Victoria deployed by Kolla and Ceph Octopus
deployed by cephadm. As far as I know, either Swift (implemented by RADOSGW)
or RBD can be used as the backend for cinder-backup. My intention is to use
one of those options to replicate Cinder volumes from one site to
Hi Mark,
Thanks for your response. I did manual compaction on all OSDs using
ceph-kvstore-tool. It reduced the number of slow ops, but it didn't solve
the problem completely.
On Mon, Jul 26, 2021 at 8:06 PM Mark Nelson wrote:
> Yeah, I suspect that regular manual compaction might be the necessary
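For reference, a minimal sketch of that kind of per-OSD offline compaction loop, assuming a package-based (non-cephadm) deployment with the usual /var/lib/ceph/osd/ceph-<id> layout; OSD IDs, paths and unit names are illustrative and would need adjusting, and under cephadm the tool would have to be run inside the OSD container:

#!/usr/bin/env python3
# Sketch: offline compaction of each OSD's RocksDB with ceph-kvstore-tool.
# Each OSD is stopped, compacted, and started again; IDs and paths are
# illustrative and assume a non-containerized deployment.
import subprocess

OSD_IDS = [0, 1, 2]  # the OSDs hosted on this node

for osd_id in OSD_IDS:
    osd_path = f"/var/lib/ceph/osd/ceph-{osd_id}"
    subprocess.run(["systemctl", "stop", f"ceph-osd@{osd_id}"], check=True)
    subprocess.run(["ceph-kvstore-tool", "bluestore-kv", osd_path, "compact"],
                   check=True)
    subprocess.run(["systemctl", "start", f"ceph-osd@{osd_id}"], check=True)

If the release supports it, triggering compaction online with "ceph tell osd.<id> compact" is an alternative that avoids stopping the daemons.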