The guide on migrating from filestore to bluestore was perfect. I was
able to get that OSD back up and running quickly. Thanks.
As for my PGs, I tried force-create-pg and it said it was working on it
for a while, and I saw some deep scrubs happening, but when they were
done it didn't help the in
On Thu, Jun 27, 2019 at 10:36 AM ☣Adam wrote:
Well that caused some excitement (either that or the small power
disruption did)! One of my OSDs is now down because it keeps crashing
due to a failed assert (stacktraces attached, also I'm apparently
running mimic, not luminous).
In the past a failed assert on an OSD has meant removing the disk,
Have you tried: ceph osd force-create-pg ?
If that doesn't work: use ceph-objectstore-tool on the OSD (while it's not
running) to force-mark the PG as complete. (Don't know the exact command
off the top of my head.)
Caution: these are obviously really dangerous commands
Paul
--
Paul E
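Before reaching for either of Paul's commands, it helps to list exactly which PGs are stuck. A minimal sketch of that step, assuming JSON shaped roughly like `ceph pg dump --format json` output (the pgids and the trimmed field set here are hypothetical, not from this thread):

```python
import json

# Sample data shaped like `ceph pg dump --format json`; the real dump
# carries many more fields per PG.
pg_dump = json.loads("""
{
  "pg_stats": [
    {"pgid": "4.7d", "state": "incomplete"},
    {"pgid": "4.12", "state": "active+clean"},
    {"pgid": "2.3",  "state": "incomplete"}
  ]
}
""")

def incomplete_pgs(dump):
    """Return pgids whose state includes 'incomplete'."""
    return [pg["pgid"] for pg in dump["pg_stats"]
            if "incomplete" in pg["state"].split("+")]

# Print the (dangerous!) recovery command for each stuck PG, so it can be
# reviewed before anyone actually runs it.
for pgid in incomplete_pgs(pg_dump):
    print(f"ceph osd force-create-pg {pgid}")
```

Generating the commands for review rather than executing them keeps a human in the loop, which matters given the data-loss potential of force-create-pg.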
The full ratio was ignored; that's most likely why that happened. I
can't delete PGs, because only KBs' worth of space are free: the OSD is
40 GB and 39.8 GB is taken up by omap. That's why I can't move/extract.
Any clue on how to compress or move away the omap dir?
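The numbers above show how far past the safety thresholds this OSD got. A quick sketch of the arithmetic, using Ceph's default ratios (nearfull 0.85, backfillfull 0.90, full 0.95; the sizes are the ones quoted in this message):

```python
# A 40 GB OSD with 39.8 GB consumed by omap, checked against Ceph's
# default capacity thresholds.
OSD_SIZE_GB = 40.0
OMAP_GB = 39.8

RATIOS = {"nearfull": 0.85, "backfillfull": 0.90, "full": 0.95}

usage = OMAP_GB / OSD_SIZE_GB
# Every threshold at or below the current usage has been blown through.
exceeded = [name for name, r in sorted(RATIOS.items(), key=lambda kv: kv[1])
            if usage >= r]

print(f"usage: {usage:.1%}")          # 99.5%
print("thresholds exceeded:", exceeded)
```

At 99.5% usage the OSD is well past the full ratio, which is why normal deletion and recovery operations refuse to run.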
On 27/08/18 12:34, Paul Emmerich w
Don't ever let an OSD run 100% full; that's usually bad news.
Two ways to salvage this:
1. You can try to extract the PGs with ceph-objectstore-tool and
inject them into another OSD; Ceph will find them and recover
2. You seem to be using Filestore, so you should easily be able to
just delete a wh
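Option 1 above (extract and inject PGs) boils down to an export/import pair per PG. A hedged sketch that only generates the ceph-objectstore-tool command lines for review; the pgid and OSD paths are illustrative, not from this thread:

```python
# Hypothetical helper for Paul's option 1: export each PG from the full
# OSD, then import it into a healthy one. Both OSD daemons must be
# stopped while ceph-objectstore-tool runs against their data paths.
def salvage_commands(pgids, src_osd_path, dst_osd_path):
    cmds = []
    for pgid in pgids:
        dump = f"/tmp/pg-{pgid}.export"
        cmds.append(f"ceph-objectstore-tool --data-path {src_osd_path} "
                    f"--pgid {pgid} --op export --file {dump}")
        cmds.append(f"ceph-objectstore-tool --data-path {dst_osd_path} "
                    f"--op import --file {dump}")
    return cmds

for cmd in salvage_commands(["4.7d"],
                            "/var/lib/ceph/osd/ceph-3",
                            "/var/lib/ceph/osd/ceph-5"):
    print(cmd)
```

Once the PGs are imported and the destination OSD restarted, Ceph's peering should find the objects and recover, as described above.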
7071 192.168.252.196:6802/7071
192.168.252.196:6803/7071 exists,up 8b1c2bbb-b2f0-4974-b0f5-266c558cc732
From: ceph-users [mailto:ceph-users-boun...@lists.ceph.com] On Behalf Of
jan.zel...@id.unibe.ch
Sent: Friday, May 23, 2014 6:31 AM
To: mich...@onlinefusion.co.uk; ceph-users@lists.ceph.com
Subject: Re: [c
Sent: Friday, 23 May 2014 12:36
To: ceph-users@lists.ceph.com
Subject: Re: [ceph-users] pgs incomplete; pgs stuck inactive; pgs stuck unclean
64 PGs per pool shouldn't cause any issues while there are only 3 OSDs. It'll
be something to pay attention to if a lot more get
> -----Original Message-----
> From: Alexandre DERUMIER [mailto:aderum...@odiso.com]
> Sent: Friday, 23 May 2014 13:20
> To: Zeller, Jan (ID)
> Cc: ceph-users@lists.ceph.com
> Subject: Re: [ceph-users] pgs incomplete; pgs stuck inactive; pgs stuck
> unclean
>
>
Hi,
if you use Debian, try a recent kernel from backports (>3.10).
Also check your libleveldb1 version; it should be 1.9.0-1~bpo70+1 (the
Debian wheezy version is too old).
I don't see it in the Ceph repo:
http://ceph.com/debian-firefly/pool/main/l/leveldb/
(only for squeeze, ~bpo60+1)
but you c
64 PGs per pool /shouldn't/ cause any issues while there are only 3
OSDs. It'll be something to pay attention to if a lot more get added,
though.
Your replication setup is probably set to something other than host.
You'll want to extract your crush map, then decompile it and see if your
"step" is set t
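The crush map check described above can be automated once the map is decompiled (via `ceph osd getcrushmap -o map.bin` followed by `crushtool -d map.bin -o map.txt`). A minimal sketch; the rule text below is a hypothetical excerpt, not this user's actual map:

```python
import re

# Hypothetical excerpt of a decompiled crush map.
crush_text = """
rule replicated_ruleset {
    ruleset 0
    type replicated
    step take default
    step chooseleaf firstn 0 type host
    step emit
}
"""

def chooseleaf_type(text):
    """Return the failure-domain type of the first chooseleaf step, if any."""
    m = re.search(r"step chooseleaf firstn \S+ type (\w+)", text)
    return m.group(1) if m else None

fd = chooseleaf_type(crush_text)
print("failure domain:", fd)
# With a pool size of 3 but fewer than 3 hosts, a 'host' failure domain
# can never be satisfied, so PGs stay stuck inactive/unclean.
if fd == "host":
    print("check that you have at least as many hosts as replicas")
```

This is exactly the mismatch that leaves PGs stuck on small test clusters: the rule demands replicas on distinct hosts that don't exist.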
Try increasing the placement groups for the pools:
ceph osd pool set data pg_num 128
ceph osd pool set data pgp_num 128
and similarly for the other 2 pools as well.
- karan -
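The commonly cited rule of thumb behind pg_num sizing (a rough approximation, not something stated in this thread) is roughly 100 PGs per OSD in total, divided by the replica count and the number of pools, rounded up to a power of two. A sketch:

```python
# Rule-of-thumb PG sizing: total PGs ~= OSDs * 100 / replicas, split
# across pools, rounded up to the next power of two. This is a starting
# point, not a hard rule.
def suggested_pg_num(osds, replicas, pools, target_per_osd=100):
    per_pool = osds * target_per_osd / replicas / pools
    n = 1
    while n < per_pool:
        n *= 2
    return n

# 3 OSDs, size 3, and the 3 default pools of that era (data/metadata/rbd):
print(suggested_pg_num(osds=3, replicas=3, pools=3))  # -> 64
```

Note that pg_num controls how many PGs exist, while pgp_num controls how many are considered for placement; both are bumped in the commands above so the new PGs actually get rebalanced.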
On 23 May 2014, at 11:50, jan.zel...@id.unibe.ch wrote:
> Dear ceph,
>
> I am trying to setup ceph 0.80.1 with the following com
Hi Sage,
I uploaded the query to http://yadi.sk/d/XoyLElnCDrc6Q
Last time, after I saw "slow request" in osd.4, I removed, formatted,
and re-added osd.4, but after I saw this query I found many "acting 4"
entries, and I think that indicates this PG was on osd.4 before.
Currently pg 4.7d is acting on 0,6.
ceph pg 4.7d query
will tell you which OSDs it wants to talk to in order to make the PG
complete (or what other information it needs).
sag
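Pulling the interesting fields out of that query output can be scripted. A minimal sketch, assuming JSON shaped like a (heavily trimmed, hypothetical) `ceph pg 4.7d query` result:

```python
import json

# Hypothetical fragment of `ceph pg <pgid> query` output; the real JSON
# is much larger. The peering section lists down OSDs the PG still wants
# to hear from before it can go complete.
query = json.loads("""
{
  "state": "incomplete",
  "up": [0, 6],
  "acting": [0, 6],
  "recovery_state": [
    {"name": "Started/Primary/Peering",
     "down_osds_we_would_probe": [4],
     "peering_blocked_by": []}
  ]
}
""")

def osds_to_probe(q):
    """Collect every down OSD the peering process would still probe."""
    osds = set()
    for state in q.get("recovery_state", []):
        osds.update(state.get("down_osds_we_would_probe", []))
    return sorted(osds)

print("acting:", query["acting"])
print("needs to probe:", osds_to_probe(query))
```

In this example the PG is acting on 0,6 but still wants to probe osd.4, which matches the situation described above: the PG lived on the removed-and-reformatted osd.4 and peering can't find its data.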
On Thu, 5 Dec 2013, Rzk wrote:
> Hi All,
>
> I found 6 pgs incomplete in "ceph health detail" after 3 OSDs went down,
> but after I managed to start again al