Note that I still have scrub errors, but rados doesn't see those objects:
root@brontes:~# rados -p hdd3copies ls | grep '^rb.0.15c26.238e1f29'
root@brontes:~#
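(For what it's worth, a quick way to re-check after any cleanup - only a sketch, using the four PG ids reported further down in this thread; ceph pg scrub and ceph health detail are the stock commands:)

for pg in 3.7c 3.6b 3.d 3.1; do
    ceph pg scrub "$pg"                       # queue a fresh scrub of each inconsistent PG
done
sleep 60                                      # give the scrubs a little time to run
ceph health detail | grep -i inconsistent     # are the errors still reported?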
On Friday, 31 May 2013 at 15:36 +0200, Olivier Bonvalet wrote:
Hi,
sorry for the late answer: trying to fix that, I tried to delete the
image (rbd rm XXX); the "rbd rm" completed without errors, but "rbd ls"
still displays this image.
What should I do?
Here are the files for PG 3.6b:
# find /var/lib/ceph/osd/ceph-28/current/3.6b_head/ -name 'rb.0.15c26.
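(On the "rbd rm" vs "rbd ls" point above: as far as I know, a format 1 image like this one keeps a "<name>.rbd" header object in the same pool and is listed through the rbd_directory object, so a rough check - assuming the image really was named "XXX", the placeholder used above - would be:)

rados -p hdd3copies ls | grep '\.rbd$'        # which image header objects still exist
rados -p hdd3copies stat XXX.rbd              # XXX = the placeholder image name from above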
Can you send the filenames in the pg directories for those 4 pgs?
-Sam
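(A sketch of how to collect those, run on each node that holds a replica of one of the four PGs; the PG ids and image prefix are the ones from this thread, and the paths assume the default /var/lib/ceph layout:)

PREFIX='rb.0.15c26.238e1f29'
for pg in 3.7c 3.6b 3.d 3.1; do
    for dir in /var/lib/ceph/osd/ceph-*/current/${pg}_head; do
        [ -d "$dir" ] || continue             # skip PGs this node does not hold
        echo "== $dir =="
        find "$dir" -name "${PREFIX}*"        # head and clone object filenames
    done
done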
On Thu, May 23, 2013 at 3:27 PM, Olivier Bonvalet wrote:
No:
pg 3.7c is active+clean+inconsistent, acting [24,13,39]
pg 3.6b is active+clean+inconsistent, acting [28,23,5]
pg 3.d is active+clean+inconsistent, acting [29,4,11]
pg 3.1 is active+clean+inconsistent, acting [28,19,5]
But I suppose that all of these PGs *were* having osd.25 as the primary (on the same
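(To double-check which OSD is the primary for each of these PGs - the first OSD in the acting set - something like this should be enough; ceph pg map is a stock command:)

for pg in 3.7c 3.6b 3.d 3.1; do
    ceph pg map "$pg"       # prints the up and acting sets; the first acting OSD is the primary
done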
Do all of the affected PGs share osd.28 as the primary? I think the
only recovery is probably to manually remove the orphaned clones.
-Sam
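(Since "manually remove the orphaned clones" is the delicate part, here is only a sketch of the idea, not a tested procedure: the clone path below is a placeholder for whatever the find commands elsewhere in this thread turn up, and the step has to be repeated on every OSD holding a replica of the PG:)

ceph osd set noout                            # avoid rebalancing while the OSD is down
/etc/init.d/ceph stop osd.28                  # or however the OSD daemon is normally stopped
mkdir -p /root/orphaned-clones
# placeholder path: the orphaned clone file(s) identified with find
mv '/var/lib/ceph/osd/ceph-28/current/3.6b_head/<orphaned clone file>' /root/orphaned-clones/
/etc/init.d/ceph start osd.28
ceph osd unset noout
ceph pg repair 3.6b                           # then scrub again to confirm the error is gone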
On Thu, May 23, 2013 at 5:00 AM, Olivier Bonvalet wrote:
Not yet. I keep it for now.
On Wednesday, 22 May 2013 at 15:50 -0700, Samuel Just wrote:
rb.0.15c26.238e1f29
Has that rbd volume been removed?
-Sam
On Wed, May 22, 2013 at 12:18 PM, Olivier Bonvalet wrote:
0.61-11-g3b94f03 (0.61-1.1), but the bug occurred with bobtail.
On Wednesday, 22 May 2013 at 12:00 -0700, Samuel Just wrote:
What version are you running?
-Sam
On Wed, May 22, 2013 at 11:25 AM, Olivier Bonvalet wrote:
Is it enough?
# tail -n500 -f /var/log/ceph/osd.28.log | grep -A5 -B5 'found clone without head'
2013-05-22 15:43:09.308352 7f707dd64700 0 log [INF] : 9.105 scrub ok
2013-05-22 15:44:21.054893 7f707dd64700 0 log [INF] : 9.451 scrub ok
2013-05-22 15:44:52.898784 7f707cd62700 0 log [INF] : 9.78
Can you post your ceph.log with the period including all of these errors?
-Sam
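(Assuming the central cluster log is in its default place on a monitor host, something along these lines should pull out the relevant window; the second pattern is only a rough filter for the four PG ids discussed here:)

grep -B5 -A5 'found clone without head' /var/log/ceph/ceph.log
egrep ' (3\.7c|3\.6b|3\.d|3\.1) .*scrub' /var/log/ceph/ceph.log   # scrub lines for the affected PGs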
On Wed, May 22, 2013 at 5:39 AM, Dzianis Kahanovich wrote:
Olivier Bonvalet writes:
On Monday, 20 May 2013 at 00:06 +0200, Olivier Bonvalet wrote:
Great, thanks. I will follow this issue and add information if needed.
On Monday, 20 May 2013 at 17:22 +0300, Dzianis Kahanovich wrote:
http://tracker.ceph.com/issues/4937
For me it went as far as a ceph reinstall, with the data restored from backup (I
helped ceph die, but IMHO that was self-provocation to force a reinstall). Now (at
least until my summer outdoors) I keep v0.62 (3 nodes) with every pool at size=3
min_size=2 (was size=2 min_size=1
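(For reference, that per-pool change is just the two settings below; "rbd" is only an example pool name, and it has to be repeated for every pool:)

POOL=rbd                                      # example pool name
ceph osd pool set "$POOL" size 3
ceph osd pool set "$POOL" min_size 2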
On Tuesday, 7 May 2013 at 15:51 +0300, Dzianis Kahanovich wrote:
Dzianis Kahanovich writes:
OOPS! After re-creat
I have 4 scrub errors (3 PGs - "found clone without head") on one OSD, and they are
not repairing. How can I repair this without re-creating the OSD?
For now it is "easy" to clean and re-create the OSD, but in theory - in case multiple
OSDs are affected - that could cause data loss.
--
WBR, Dzianis Kahanovich AKA Denis Kaganovich, http:/