Hello!
Unfortunately, our single-node "cluster" with 11 OSDs is broken because some
OSDs crash when they start peering.
I'm on Ubuntu 18.04 with Ceph Mimic (13.2.2).
The problem was induced when RAM filled up and OSD processes then crashed
because of memory allocation failures.
No weird
Thanks, I did that now: https://tracker.ceph.com/issues/36337
On 05/10/2018 19.12, Neha Ojha wrote:
> Hi JJ,
>
> In this case, the condition olog.head >= log.tail is not true, and
> therefore it crashes. Could you please open a tracker
> issue (https://tracker.ceph.com/) and attach the osd logs and the
You need to add or generate a certificate; without one, the dashboard doesn't
start.
The procedure is described in the documentation.
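If I remember correctly, on Mimic it is roughly this (please verify against
the docs for your version):

  ceph mgr module enable dashboard
  ceph dashboard create-self-signed-cert   # generate and install a self-signed cert
  ceph mgr services                        # should now show the dashboard URL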
-- JJ
On 09/10/2018 00.05, solarflow99 wrote:
> seems like it did, yet I don't see anything listening on the port the
> dashboard should be using.
>
> # ceph mgr mod
script to automate the seek-and-destroy of the broken PGs.
https://gist.github.com/TheJJ/c6be62e612ac4782bd0aa279d8c82197
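For reference, removing a single broken PG from one OSD by hand looks roughly
like this (a sketch with placeholder OSD id and PG id; the remove step is
destructive, so export a backup first):

  systemctl stop ceph-osd@3                 # the OSD must be offline
  ceph-objectstore-tool --data-path /var/lib/ceph/osd/ceph-3 \
      --pgid 6.b1 --op export --file /root/pg-6.b1.export
  ceph-objectstore-tool --data-path /var/lib/ceph/osd/ceph-3 \
      --pgid 6.b1 --op remove --force
  systemctl start ceph-osd@3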
Cheers
JJ
On 04/10/2018 18.29, Jonas Jelten wrote:
> Hello!
>
> Unfortunately, our single-node "cluster" with 11 OSDs is broken because some
> OS
Hello!
My cluster currently has this health state:
2018-10-31 21:20:13.694633 mon.lol [WRN] Health check update: 39010709/192173470 objects misplaced (20.300%) (OBJECT_MISPLACED)
2018-10-31 21:20:13.694684 mon.lol [WRN] Health check update: Degraded data redundancy: 1624786/192173470 objects de
tal with all the changes
> that came in while waiting or while finishing the first backfill, then
> become active+clean.
> Nothing to worry about, that is how recovery looks on all clusters.
>
> On Wed, 31 Oct 2018 at 22:29, Jonas Jelten wrote:
>>
>> Hello!
>>
Maybe you are hitting the kernel bug worked around by
https://github.com/ceph/ceph/pull/23273
-- Jonas
On 12/11/2018 16.39, Ashley Merrick wrote:
> Is anyone else seeing this?
>
> I have just set up another cluster to check, on completely different
> hardware, and everything is still running EC.
>
Hi!
I've created a mount.ceph.c replacement in Python which also uses the
kernel keyring and performs name resolution.
You can mount a CephFS without installing Ceph that way (and without using the
legacy secret= mount option).
https://github.com/SFTtech/ceph-mount
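Done by hand, the keyring approach looks roughly like this (a sketch with
placeholder monitor, client name and secret; the key description has to
match what the kernel's ceph module looks up via the key= option):

  # put the base64 secret into the kernel keyring instead of using secret=
  keyctl add ceph client.lol AQB...base64... @u
  # mount referencing the keyring entry
  mount -t ceph mon1.example.com:6789:/ /mnt/cephfs -o name=lol,key=client.lol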
When you place the script (
On 02/04/2019 15.05, Yan, Zheng wrote:
> I don't use this feature. We don't have plans to mark this feature
> stable. (Probably we will remove this feature in the future.)
Oh no! We have activated inline_data since our cluster has lots of small
files (but also big ones), and
performance i
Hi!
I'm also affected by this:
HEALTH_WARN 13 pgs not deep-scrubbed in time; 13 pgs not scrubbed in time
PG_NOT_DEEP_SCRUBBED 13 pgs not deep-scrubbed in time
pg 6.b1 not deep-scrubbed since 0.00
pg 7.ac not deep-scrubbed since 0.00
pg 7.a0 not deep-scrubbed since 0.00
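As a workaround, the scrubs can be triggered by hand for each PG from the
warning, e.g.:

  ceph pg deep-scrub 6.b1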
When I run:
rbd map --name client.lol poolname/somenamespace/imagename
The image is mapped to /dev/rbd0 and
/dev/rbd/poolname/imagename
I would expect the rbd to be mapped to (the rbdmap tool tries this name):
/dev/rbd/poolname/somenamespace/imagename
The current map point would not all
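The /dev/rbd/... symlinks are created by a udev rule that calls
ceph-rbdnamer; a namespace-aware variant could look roughly like this
(a sketch only: it assumes ceph-rbdnamer prints pool, namespace and image
as separate fields, which the shipped version may not do):

  # hypothetical namespace-aware variant of 50-rbd.rules
  KERNEL=="rbd*", ENV{DEVTYPE}=="disk", PROGRAM="/usr/bin/ceph-rbdnamer %k", \
      SYMLINK+="rbd/%c{1}/%c{2}/%c{3}"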
Hi!
I've also noticed that behavior and submitted a patch some time ago that
should fix (2):
https://github.com/ceph/ceph/pull/27288
But it may well be that there are more cases where PGs are not discovered on
devices that do have them. Just recently a lot of my data was degraded and
then re