Hi
Brilliant, I recovered my data.
Gregory, Joao, John, Samuel: thanks a lot for all the help and for
responding every time.
It's my fault for moving to 0.82. And it's good if that helped you
to find some bugs ;)
After this scare, we will recreate our cluster on firefly.
Hi Pierre,
Unfortunately it looks like we had a bug in 0.82 that could lead to
journal corruption of the sort you're seeing here. A new journal
format was added, and on the first start after an update the MDS would
re-write the journal to the new format. This should only have been
happening on t
On 16/07/2014 22:40, Gregory Farnum wrote:
On Wed, Jul 16, 2014 at 6:21 AM, Pierre BLONDEAU
wrote:
Hi,
After the repair process, I have:
1926 active+clean
2 active+clean+inconsistent
These two PGs seem to be on the same OSD (#34):
# ceph pg dump | grep inconsistent
dumped all in form
On Wed, Jul 16, 2014 at 6:21 AM, Pierre BLONDEAU
wrote:
> Hi,
>
> After the repair process, I have:
> 1926 active+clean
> 2 active+clean+inconsistent
>
> These two PGs seem to be on the same OSD (#34):
> # ceph pg dump | grep inconsistent
> dumped all in format plain
> 0.2e4 0
Hi,
After the repair process, I have:
1926 active+clean
2 active+clean+inconsistent
These two PGs seem to be on the same OSD (#34):
# ceph pg dump | grep inconsistent
dumped all in format plain
0.2e4 0 0 0 8388660 4 4 active+clean+inconsistent 2014-0
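For reference, an active+clean+inconsistent PG is usually re-checked and repaired per PG id; a rough sketch using the PG id from the dump above (an illustration, not a record of what was actually run here):
ceph pg deep-scrub 0.2e4
ceph pg repair 0.2e4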
Hi,
Great.
All my OSDs restarted:
osdmap e438044: 36 osds: 36 up, 36 in
All PGs are active and some are in recovery:
1604040/49575206 objects degraded (3.236%)
1780 active+clean
17 active+degraded+remapped+backfilling
61 active+degraded+remapped+wait_backfill
11 active+clean+scrubbing+deep
On 07/09/2014 02:22 PM, Pierre BLONDEAU wrote:
Hi,
Is there any chance to restore my data?
Okay, I talked to Sam and here's what you could try before anything else:
- Make sure you have everything running on the same version.
- unset the chooseleaf_vary_r flag -- this can be accomplished
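One common way this is accomplished is to export, decompile, edit, and re-inject the crushmap; a rough sketch of that workflow (an assumption, not necessarily the exact steps Sam had in mind):
ceph osd getcrushmap -o crush.map
crushtool -d crush.map -o crush.txt
(edit crush.txt so that it reads: tunable chooseleaf_vary_r 0)
crushtool -c crush.txt -o crush.new
ceph osd setcrushmap -i crush.new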
On 07/09/2014 02:22 PM, Pierre BLONDEAU wrote:
Hi,
Is there any chance to restore my data?
Hello Pierre,
I've been giving this some thought and my guess is that yes, it should
be possible. However, it may not be a simple fix.
So, first of all, you got bit by http://tracker.ceph.com/issue
Hi,
Is there any chance to restore my data?
Regards
Pierre
On 07/07/2014 15:42, Pierre BLONDEAU wrote:
There is no chance to get those logs, even less in debug mode. I made
this change 3 weeks ago.
I put all my logs here in case it can help:
https://blondeau.users.greyc.fr/cephlog/all/
Is there a chance to recover my +/- 20 TB of data?
There is no chance to get those logs, even less in debug mode. I made
this change 3 weeks ago.
I put all my logs here in case it can help:
https://blondeau.users.greyc.fr/cephlog/all/
Is there a chance to recover my +/- 20 TB of data?
Regards
On 03/07/2014 21:48, Joao Luis wrote:
Do those logs have
Do those logs have a higher debugging level than the default? If not,
never mind, as they will not have enough information. If they do, however,
we'd be interested in the portion around the moment you set the tunables.
Say, before the upgrade and a bit after you set the tunable. If you want to
be finer
On 03/07/2014 13:49, Joao Eduardo Luis wrote:
On 07/03/2014 12:15 AM, Pierre BLONDEAU wrote:
On 03/07/2014 00:55, Samuel Just wrote:
Ah,
~/logs » for i in 20 23; do ../ceph/src/osdmaptool --export-crush
/tmp/crush$i osd-$i*; ../ceph/src/crushtool -d /tmp/crush$i >
/tmp/crush$i.d; done; diff /tmp/crush20.d /tmp/crush23.d
On 07/03/2014 12:15 AM, Pierre BLONDEAU wrote:
On 03/07/2014 00:55, Samuel Just wrote:
Ah,
~/logs » for i in 20 23; do ../ceph/src/osdmaptool --export-crush
/tmp/crush$i osd-$i*; ../ceph/src/crushtool -d /tmp/crush$i >
/tmp/crush$i.d; done; diff /tmp/crush20.d /tmp/crush23.d
../ceph/src/osdm
Yes, thanks.
-Sam
On Wed, Jul 2, 2014 at 4:21 PM, Pierre BLONDEAU
wrote:
> Like that?
>
> # ceph --admin-daemon /var/run/ceph/ceph-mon.william.asok version
> {"version":"0.82"}
> # ceph --admin-daemon /var/run/ceph/ceph-mon.jack.asok version
> {"version":"0.82"}
> # ceph --admin-daemon /var/run/
Like that?
# ceph --admin-daemon /var/run/ceph/ceph-mon.william.asok version
{"version":"0.82"}
# ceph --admin-daemon /var/run/ceph/ceph-mon.jack.asok version
{"version":"0.82"}
# ceph --admin-daemon /var/run/ceph/ceph-mon.joe.asok version
{"version":"0.82"}
Pierre
On 03/07/2014 01:17, Samuel
Can you confirm from the admin socket that all monitors are running
the same version?
-Sam
On Wed, Jul 2, 2014 at 4:15 PM, Pierre BLONDEAU
wrote:
> On 03/07/2014 00:55, Samuel Just wrote:
>
>> Ah,
>>
>> ~/logs » for i in 20 23; do ../ceph/src/osdmaptool --export-crush
>> /tmp/crush$i osd-$i*;
On 03/07/2014 00:55, Samuel Just wrote:
Ah,
~/logs » for i in 20 23; do ../ceph/src/osdmaptool --export-crush
/tmp/crush$i osd-$i*; ../ceph/src/crushtool -d /tmp/crush$i >
/tmp/crush$i.d; done; diff /tmp/crush20.d /tmp/crush23.d
../ceph/src/osdmaptool: osdmap file 'osd-20_osdmap.13258__0_4E62
Ah,
~/logs » for i in 20 23; do ../ceph/src/osdmaptool --export-crush
/tmp/crush$i osd-$i*; ../ceph/src/crushtool -d /tmp/crush$i >
/tmp/crush$i.d; done; diff /tmp/crush20.d /tmp/crush23.d
../ceph/src/osdmaptool: osdmap file 'osd-20_osdmap.13258__0_4E62BB79__none'
../ceph/src/osdmaptool: exported
Yeah, divergent osdmaps:
555ed048e73024687fc8b106a570db4f osd-20_osdmap.13258__0_4E62BB79__none
6037911f31dc3c18b05499d24dcdbe5c osd-23_osdmap.13258__0_4E62BB79__none
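For context, checksums like these can be reproduced with a plain md5sum of the two exported osdmap files, e.g. (assuming both files are in the working directory):
md5sum osd-20_osdmap.13258__0_4E62BB79__none osd-23_osdmap.13258__0_4E62BB79__none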
Joao: thoughts?
-Sam
On Wed, Jul 2, 2014 at 3:39 PM, Pierre BLONDEAU
wrote:
> The files
>
> When I upgrade :
> ceph-deploy ins
Joao: this looks like divergent osdmaps, osd 20 and osd 23 have
differing ideas of the acting set for pg 2.11. Did we add hashes to
the incremental maps? What would you want to know from the mons?
-Sam
On Wed, Jul 2, 2014 at 3:10 PM, Samuel Just wrote:
> Also, what version did you upgrade from,
Also, what version did you upgrade from, and how did you upgrade?
-Sam
On Wed, Jul 2, 2014 at 3:09 PM, Samuel Just wrote:
> Ok, in current/meta on osd 20 and osd 23, please attach all files matching
>
> ^osdmap.13258.*
>
> There should be one such file on each osd. (should look something like
> o
Ok, in current/meta on osd 20 and osd 23, please attach all files matching
^osdmap.13258.*
There should be one such file on each osd. (should look something like
osdmap.6__0_FD6E4C01__none, probably hashed into a subdirectory,
you'll want to use find).
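A possible way to locate them with find, assuming the default OSD data path /var/lib/ceph/osd/ceph-<id> (adjust if your layout differs):
find /var/lib/ceph/osd/ceph-20/current/meta -name 'osdmap.13258*'
find /var/lib/ceph/osd/ceph-23/current/meta -name 'osdmap.13258*'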
What version of ceph is running on your mon
Hi,
I did it; the log files are available here:
https://blondeau.users.greyc.fr/cephlog/debug20/
The OSD log files are really big, +/- 80M.
After starting osd.20, some other OSDs crash. I went from 31 OSDs up to
16. I noticed that after this the number of down+peering PGs decreased from
367 to
You should add
debug osd = 20
debug filestore = 20
debug ms = 1
to the [osd] section of the ceph.conf and restart the osds. I'd like
all three logs if possible.
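A minimal sketch of what that section could look like, assuming a standard /etc/ceph/ceph.conf:
[osd]
    debug osd = 20
    debug filestore = 20
    debug ms = 1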
Thanks
-Sam
On Wed, Jul 2, 2014 at 5:03 AM, Pierre BLONDEAU
wrote:
> Yes, but how do I do that?
>
> With a command like this?
>
> cep
Yes, but how do I do that?
With a command like this?
ceph tell osd.20 injectargs '--debug-osd 20 --debug-filestore 20
--debug-ms 1'
Or by modifying /etc/ceph/ceph.conf? That file is almost empty because I
use udev detection.
Once I have made these changes, do you want the three log files or only
Can you reproduce with
debug osd = 20
debug filestore = 20
debug ms = 1
?
-Sam
On Tue, Jul 1, 2014 at 1:21 AM, Pierre BLONDEAU
wrote:
> Hi,
>
> I attach:
> - osd.20 is one of the OSDs that I found makes other OSDs crash.
> - osd.23 is one of the OSDs that crashes when I start osd.20.
> - mds is one
Hi,
I attach:
- osd.20 is one of the OSDs that I found makes other OSDs crash.
- osd.23 is one of the OSDs that crashes when I start osd.20.
- mds is one of my MDSs.
I cut the log files because they are too big, but everything is here:
https://blondeau.users.greyc.fr/cephlog/
Regards
On 30/06/2014 17:35, Gre
What's the backtrace from the crashing OSDs?
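One hedged way to pull a backtrace out of an OSD log, assuming the default log path and an assert-style crash (adjust the pattern if the crash looks different):
grep -B 5 -A 30 'FAILED assert' /var/log/ceph/ceph-osd.23.log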
Keep in mind that as a dev release, it's generally best not to upgrade
to unnamed versions like 0.82 (but it's probably too late to go back
now).
-Greg
Software Engineer #42 @ http://inktank.com | http://ceph.com
On Mon, Jun 30, 2014 at 8:06 AM, Pierr
Hi,
After the upgrade to firefly, I have some PGs stuck in the peering state.
I saw that 0.82 had been released, so I tried to upgrade to solve my problem.
My three MDSs crash, and some OSDs trigger a chain reaction that kills
other OSDs.
I think my MDSs will not start because their metadata are on the OSDs.
I have