Hi,
I have a cluster in a "stale" state: a lot of RBDs have been blocked for ~10
hours. In the status I see PGs in stale or down state, but those PGs
don't seem to exist anymore:
root@stor00-sbg:~# ceph health detail | egrep '(stale|down)'
HEALTH_ERR noout,noscrub,nodeep-scrub flag(s) set; 1 nearf
Some more information: the cluster was just upgraded from Jewel to
Luminous.
# ceph pg dump | egrep '(stale|creating)'
dumped all
15.32    10947    0    0    0    0    45870301184    3067    3067    stale+active+clean    2018-06-04 09:20:42.5943
Hi,
looks like you are running into the PG overdose protection of Luminous (you
have > 200 PGs per OSD): try increasing mon_max_pg_per_osd on the monitors
to 300 or so to temporarily resolve this.
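For example, a rough sketch of how this could be applied on Luminous (assuming
the default ceph.conf location; the setting should also be made persistent so
it survives daemon restarts):

ceph tell mon.\* injectargs '--mon_max_pg_per_osd 300'
# and, to keep it across restarts, add to [global] in ceph.conf on the mons:
#   mon_max_pg_per_osd = 300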
Paul
2018-06-05 9:40 GMT+02:00 Olivier Bonvalet :
> Some more informations : the cluster was just
On Tue, Jun 5, 2018 at 4:07 AM, 李昊华 wrote:
> Thanks for reading my questions!
>
> I want to run MySQL on Ceph using KRBD because KRBD is faster than librbd.
> And I know KRBD is a kernel module and we can use KRBD to mount the RBD
> device on the operating system.
>
> It is easy to use command li
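A minimal sketch of the usual KRBD workflow (pool name, image name, size and
the MySQL data directory below are made-up placeholders):

rbd create rbd/mysql-data --size 102400   # 100 GiB image
rbd map rbd/mysql-data                    # prints the device, e.g. /dev/rbd0
mkfs.xfs /dev/rbd0
mount /dev/rbd0 /var/lib/mysql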
Hi,
Good point! Changing this value *and* restarting ceph-mgr fixed the
issue. Now we have to find a way to reduce the PG count.
Thanks Paul !
Olivier
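In case it helps anyone else, one way to check how many PGs each OSD currently
holds (on Luminous the PGS column of ceph osd df shows this, as far as I can
tell):

ceph osd df   # last column (PGS) = placement groups per OSD

Note that pg_num cannot be decreased on Luminous, so getting back under the
limit generally means adding OSDs or recreating pools with fewer PGs.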
On Tuesday, 5 June 2018 at 10:39 +0200, Paul Emmerich wrote:
> Hi,
>
> looks like you are running into the PG overdose protection of
> Luminous
Hi,
I have created a cluster, and when I run ceph status it is showing me the
wrong number of OSDs.
cluster 6571de66-75e1-4da7-b1ed-15a8bfed0944
health HEALTH_WARN
2112 pgs stuck inactive
2112 pgs stuck unclean
monmap e1: 1 mons at {0=10.38.32.245:16789/0}
The missing OSD was either created incorrectly (no auth key?) or it can't contact the
monitor for some reason. The OSD log file should tell you more.
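For example, a rough sketch of where to look (osd.0 and the default log path
are just assumptions):

ceph auth list | grep -A3 'osd\.0'          # does osd.0 have a key registered?
tail -n 100 /var/log/ceph/ceph-osd.0.log    # why it fails to reach the monitor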
Paul
2018-06-05 13:20 GMT+02:00 Muneendra Kumar M :
> Hi,
>
> I have created a cluster and when I run ceph status it is showing me the
> wrong number of osds.
>
>
On 05/06/18 05:58, kefu chai wrote:
> On Tue, Jun 5, 2018 at 6:13 AM, Paul Emmerich wrote:
>> Hi,
>>
>> 2018-06-04 20:39 GMT+02:00 Sage Weil :
>>> We'd love to build for stretch, but until there is a newer gcc for that
>>> distro it's not possible. We could build packages for 'testing', but I'
Hello,
I run Proxmox 5.2 with Ceph 12.2 (BlueStore).
I've created an OSD on a hard drive (/dev/sda) and tried to put both the WAL
and the journal on an SSD partition (/dev/sde1) like this:
pveceph createosd /dev/sda --wal_dev /dev/sde1 --journal_dev /dev/sde1
It automatically creates two partitions on the hard dr
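For reference, a rough sketch of what the equivalent plain ceph-volume call
might look like (assuming a BlueStore OSD; when no separate --block.wal is
given, the WAL is kept together with the DB on the same partition):

ceph-volume lvm create --bluestore --data /dev/sda --block.db /dev/sde1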
On 05/06/18 14:49, rafael.diazmau...@univ-rennes1.fr wrote:
> Hello,
>
> I run proxmox 5.2 with ceph 12.2 (bluestore).
>
> I've created an OSD on a Hard Drive (/dev/sda) and tried to put both WAL and
> Journal on a SSD part (/dev/sde1) like this :
> pveceph createosd /dev/sda --wal_dev /dev/sde1
2018-06-05 6:58 GMT+02:00 kefu chai :
>
> thanks for sharing this, Paul! Does the built binary require any
> runtime dependency offered by the testing repo? If the answer is no, I
> think we should offer the pre-built package for Debian stable then.
>
It will by default produce binaries linking aga
Hey Cephers,
Sorry for the short notice, but the Ceph on ARM meeting scheduled for
today (Jun 5) has been canceled.
Kindest regards,
Leo
--
Leonardo Vaz
Ceph Community Manager
Open Source and Standards Team
Hi,
On 27.05.2018 01:48, c...@elchaka.de wrote:
>
> Very interested in the slides/vids.
Slides are now available:
https://www.meetup.com/Ceph-Berlin/events/qbpxrhyxhblc/
Regards
--
Robert Sander
Heinlein Support GmbH
Schwedter Str. 8/9b, 10119 Berlin
http://www.heinlein-support.de
Tel: 030 /
On 04.06.2018 21:08, Joao Eduardo Luis wrote:
On 06/04/2018 07:39 PM, Sage Weil wrote:
[1]
http://lists.ceph.com/private.cgi/ceph-maintainers-ceph.com/2018-April/000603.html
[2]
http://lists.ceph.com/private.cgi/ceph-maintainers-ceph.com/2018-April/000611.html
Just a heads up, seems the ceph-
Hi,
I just saw this announcement and wanted to "advertise" our Check_MK
plugin for Ceph:
https://github.com/HeinleinSupport/check_mk/tree/master/ceph
Regards
--
Robert Sander
Heinlein Support GmbH
Schwedter Str. 8/9b, 10119 Berlin
http://www.heinlein-support.de
Tel: 030 / 405051-43
Fax:
Hi,
After an RBD snapshot was removed, I seem to be having OSDs assert when they
try to recover pg 1.2ca. The issue seems to follow the
PG around as OSDs fail. I've seen this bug tracker entry and associated mailing list
post, but would appreciate it if anyone can give any
pointers. https://tracker.cep
2018-06-05 17:42 GMT+02:00 Nick Fisk :
> Hi,
>
> After a RBD snapshot was removed, I seem to be having OSD's assert when
> they try and recover pg 1.2ca. The issue seems to follow the
> PG around as OSD's fail. I've seen this bug tracker and associated mailing
> list post, but would appreciate if
So, from what I can see, I believe this issue is being caused by one of the
remaining OSDs acting for this PG containing a snapshot
file of the object
/var/lib/ceph/osd/ceph-46/current/1.2ca_head/DIR_A/DIR_C/DIR_2/DIR_D/DIR_0/rbd\udata.0c4c14238e1f29.000bf479__head_F930D2CA__1
/var/lib/c
From: ceph-users On Behalf Of Paul Emmerich
Sent: 05 June 2018 17:02
To: n...@fisk.me.uk
Cc: ceph-users
Subject: Re: [ceph-users] FAILED assert(p != recovery_info.ss.clone_snaps.end())
2018-06-05 17:42 GMT+02:00 Nick Fisk <n...@fisk.me.uk>:
Hi,
After a RBD snapshot was
On Tue, 5 Jun 2018, Paul Emmerich wrote:
> 2018-06-05 17:42 GMT+02:00 Nick Fisk :
>
> > Hi,
> >
> > After a RBD snapshot was removed, I seem to be having OSD's assert when
> > they try and recover pg 1.2ca. The issue seems to follow the
> > PG around as OSD's fail. I've seen this bug tracker and a
Hi,
If anyone wants to play around with Ceph on Debian: I just made our mirror
for our
dev/test image builds public:
wget -q -O- 'https://static.croit.io/keys/release.asc' | apt-key add -
echo 'deb https://static.croit.io/debian-mimic/ stretch main' >>
/etc/apt/sources.list
apt update
apt install
Is it possible to stop the currently running scrubs/deep-scrubs?
http://tracker.ceph.com/issues/11202
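A sketch of the usual workaround (the flags prevent new scrubs from being
scheduled; scrubs already in progress typically run to completion):

ceph osd set noscrub
ceph osd set nodeep-scrub
# and later, once things have settled:
ceph osd unset noscrub
ceph osd unset nodeep-scrub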
On Tue, Jun 5, 2018 at 4:46 PM, shrey chauhan wrote:
> I am consistently getting whiteout mismatches due to which PGs are going into
> an inconsistent state, and I am not able to figure out why this is happening.
> Though, as it was explained before, whiteouts don't exist and it's nothing,
> it's still
Hi Paul,
Thanks for your reply.
Looks like it is contacting the monitor properly, as it shows the below output
from ceph status. Correct me if I am wrong:
monmap e1: 1 mons at {0=10.38.32.245:16789/0}
election epoch 1, quorum 0 0
The reason could be that the OSDs are created incorre
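A quick sketch of how to compare what the monitor knows about with what is
actually running (the OSD id below is just an example):

ceph osd stat                   # e.g. "N osds: X up, Y in"
ceph osd tree                   # which OSDs are registered, and up/in
systemctl status ceph-osd@0     # on the OSD host, for a given id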