On 04.07.2017 at 17:58, Etienne Menguy wrote:
> rados list-inconsistent-obj
Hello,
Sorry for my late reply. We installed some new servers and now we have
osd pool default size = 3.
At this point I tried again to repair the PG with ceph pg repair and ceph pg
deep-scrub. I also tried again to delete the rados o
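For reference, the repair attempts were along these lines; pg 1.129 is taken
from the log further down, and the exact output depends on the Ceph version,
so take this only as a rough sketch:

  # show which PGs are flagged inconsistent
  ceph health detail

  # list the inconsistent objects inside the affected PG (Jewel and later)
  rados list-inconsistent-obj 1.129 --format=json-pretty

  # ask the primary OSD to deep-scrub and then repair the PG
  ceph pg deep-scrub 1.129
  ceph pg repair 1.129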
You should not use Ceph with raid6.
Your data should already be safe with Ceph.
Etienne
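As a side note on the replication point: the effective protection comes from
the pool's size/min_size, not from osd pool default size alone, which only
applies to newly created pools. Assuming the pool here is the default rbd
pool, something like this would show the replica count and, if needed,
raise it:

  ceph osd pool get rbd size
  ceph osd pool get rbd min_size
  # raise an existing pool to 3 replicas if necessary
  ceph osd pool set rbd size 3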
From: ceph-users on behalf of Hauke Homburg
Sent: Tuesday, July 4, 2017 17:41
To: ceph-users@lists.ceph.com
Subject: Re: [ceph-users] Ceph Cluster with Deep Scrub Error
On 02.07.2017 at 13:23, Hauke Homburg wrote:
> Hello,
>
> I have a Ceph cluster with 5 Ceph servers, running under CentOS 7.2
> and Ceph 10.0.2.5. All OSDs are running on a RAID6.
> In this cluster I have a deep scrub error:
> /var/log/ceph/ceph-osd.6.log-20170629.gz:389 .356391 7f1ac4c57700 -1
> log_channel(cluster) log [ERR] : 1.129 deep-scrub 1 errors
> This L
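A rough way to get from such a log line to the affected PG and its OSDs,
assuming the pool in question is rbd, would be something like:

  # find scrub errors in the (rotated) OSD logs
  zgrep 'ERR' /var/log/ceph/ceph-osd.*.log*

  # list inconsistent PGs per pool and inspect the one from the log
  rados list-inconsistent-pg rbd
  ceph pg 1.129 query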