Hi all,
I have a cluster running cephfs on Luminous 12.2.4, using 2 active MDSes + 1
standby. I have 3 shares: /projects, /home and /scratch, and I've decided to
try manual pinning as described here:
http://docs.ceph.com/docs/master/cephfs/multimds/
/projects is pinned to mds.0 (rank 0)
/ho
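For reference, manual pinning of this sort is done by setting an extended attribute on each directory — a minimal sketch, assuming the shares are mounted under /mnt/cephfs (the mount path is illustrative):

```shell
# Pin each top-level share to a specific MDS rank via the
# ceph.dir.pin extended attribute (paths are illustrative).
setfattr -n ceph.dir.pin -v 0 /mnt/cephfs/projects   # pin to rank 0
setfattr -n ceph.dir.pin -v 1 /mnt/cephfs/home       # pin to rank 1

# A value of -1 removes the pin and restores default
# (dynamic) subtree migration for that directory.
setfattr -n ceph.dir.pin -v -1 /mnt/cephfs/scratch
```
These commands require a live CephFS mount, so they are shown as an administrative fragment only.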
That "nicely exporting" thing is a logging issue that was apparently
fixed in https://github.com/ceph/ceph/pull/19220. I'm not sure if that
will be backported to luminous.
Otherwise the slow requests could be due to either slow trimming (see
previous discussions about mds log max expiring and mds
Hi all,
We have a Ceph Kraken cluster. Last week we lost an OSD server, and we then
added one more OSD server with the same configuration. We let the cluster
recover, but I don't think that happened: most of the PGs are still stuck in
the remapped and degraded states. When I restart all OSD daemons, it itse
Hi Dan,
Thanks! Ah so the "nicely exporting" thing is just a distraction, that's good
to know.
I did bump mds log max segments and max expiring to 240 after reading the
previous discussion. It seemed to help when there was just 1 active MDS. It
doesn't really do much at the moment, although
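For anyone following along, those trimming limits can be bumped at runtime — a sketch, using the value 240 mentioned above:

```shell
# Raise the MDS journal trimming limits on all running MDS daemons.
# Runtime injection only; to persist across restarts, also set these
# under [mds] in ceph.conf.
ceph tell mds.* injectargs '--mds_log_max_segments=240 --mds_log_max_expiring=240'
```
This needs a live cluster with admin credentials, so it is shown as an administrative fragment only.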
On Fri, Apr 20, 2018 at 11:29 AM, Charles Alva wrote:
> Marc,
>
> Thanks.
>
> The mgr log spam occurs even without dashboard module enabled. I never
> checked the ceph mgr log before because the ceph cluster is always healthy.
> Based on the ceph mgr logs in syslog, the spam occurred long before a
2018-04-24 3:24 GMT+02:00 Christian Balzer :
> Hello,
>
Hi Christian, and thanks for your detailed answer.
> On Mon, 23 Apr 2018 17:43:03 +0200 Florian Florensa wrote:
>
>> Hello everyone,
>>
>> I am in the process of designing a Ceph cluster, that will contain
>> only SSD OSDs, and I was wonderi
Hi,
We are currently using Jewel 10.2.7, and recently we have been experiencing
some issues with objects being deleted by the GC. After a bucket was
unsuccessfully deleted using --purge-objects (which is when the first of the
errors discussed below occurred), all of the RGWs occasionally become
unresponsive and requ
On 04/24/2018 05:01 AM, Mohamad Gebai wrote:
>
>
> On 04/23/2018 09:24 PM, Christian Balzer wrote:
>>
>>> If anyone has some ideas/thoughts/pointers, I would be glad to hear them.
>>>
>> RAM, you'll need a lot of it, even more with Bluestore given the current
>> caching.
>> I'd say 1GB per TB
Hi Sean,
Could you create an issue in tracker.ceph.com with this info? That
would make it easier to iterate on.
thanks and regards,
Matt
On Tue, Apr 24, 2018 at 10:45 AM, Sean Redmond wrote:
> Hi,
> We are currently using Jewel 10.2.7 and recently, we have been experiencing
> some issues with
Hi,
it's been a while, but we are still fighting with this issue.
As suggested we deleted all snapshots, but the errors still occur.
We were able to gather some more information:
The reason why they are crashing is this assert:
https://github.com/ceph/ceph/blob/luminous/src/osd/PrimaryLogPG.cc#
Hi, friends.
We use RGW user stats in our billing.
Example on Luminous:
radosgw-admin usage show --uid 5300c830-82e2-4dce-ac6d-1d97a65def33
{
    "entries": [
        {
            "user": "5300c830-82e2-4dce-ac6d-1d97a65def33",
            "buckets": [
                {
                    "bu
Hi,
Last night I posted the Cephalocon 2018 conference report on the Ceph
blog[1], published the video recordings from the sessions on
YouTube[2] and the slide decks on Slideshare[3].
[1] https://ceph.com/community/cephalocon-apac-2018-report/
[2] https://www.youtube.com/playlist?list=PLrBUGiINAa
Hi,
sure no problem, I posted it here
http://tracker.ceph.com/issues/23839
On Tue, 24 Apr 2018, 16:04 Matt Benjamin, wrote:
> Hi Sean,
>
> Could you create an issue in tracker.ceph.com with this info? That
> would make it easier to iterate on.
>
> thanks and regards,
>
> Matt
>
> On Tue, Apr
On Tue, Apr 24, 2018 at 11:30 PM, Leonardo Vaz wrote:
> Hi,
>
> Last night I posted the Cephalocon 2018 conference report on the Ceph
> blog[1], published the video recordings from the sessions on
> YouTube[2] and the slide decks on Slideshare[3].
>
> [1] https://ceph.com/community/cephalocon-apac
In examples I see that each host has a section in ceph.conf, on every host
(host-a has a section in its conf on host-a, but there's also a host-a
section in the ceph.conf on host-b, etc.) Is this really necessary? I've
been using just generic osd and monitor sections, and that has worked out
fin
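A minimal sketch of the generic approach described above — daemon-type sections only, no per-host sections (all values illustrative):

```ini
# /etc/ceph/ceph.conf -- the same file distributed to every host
[global]
fsid = <cluster-fsid>
mon_host = mon-a,mon-b,mon-c

[osd]
# settings shared by every OSD, on every host
osd_journal_size = 10240

[mon]
# settings shared by every monitor
mon_allow_pool_delete = false
```
Per-host sections like [osd.3] or [host-a] are only needed to override a setting for that one daemon or host; the generic sections are sufficient otherwise.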
On 24.04.2018 18:24, Robert Stanford wrote:
In examples I see that each host has a section in ceph.conf, on every
host (host-a has a section in its conf on host-a, but there's also a
host-a section in the ceph.conf on host-b, etc.) Is this really
necessary? I've been using just generic osd
On 24.04.2018 17:30, Leonardo Vaz wrote:
Hi,
Last night I posted the Cephalocon 2018 conference report on the Ceph
blog[1], published the video recordings from the sessions on
YouTube[2] and the slide decks on Slideshare[3].
[1] https://ceph.com/community/cephalocon-apac-2018-report/
[2] https:
Hi All,
I seem to be seeing consistently poor read performance on my cluster,
relative to both write performance and the read performance of a single
backend disk, by quite a lot.
The cluster is Luminous with 174 7.2k SAS drives across 12 storage servers
with 10G Ethernet and jumbo frames. Drives are a mix of 4
Hello Linh,
On Tue, Apr 24, 2018 at 12:34 AM, Linh Vu wrote:
> However, on our production cluster, with more powerful MDSes (10 cores
> 3.4GHz, 256GB RAM, much faster networking), I get this in the logs
> constantly:
>
> 2018-04-24 16:29:21.998261 7f02d1af9700 0 mds.1.migrator nicely exporting
>
Hello cephers,
We're glad to announce the fifth bugfix release of the Luminous v12.2.x
long term stable release series. This release contains a range of bug fixes
across all components of Ceph. We recommend that all users of the 12.2.x
series update.
Notable Changes
---
* MGR
The ce
Neither the issue I created nor Michael's [1] ticket that it was rolled
into are getting any traction. How are y'all faring with your clusters?
I've had 3 PGs inconsistent with 5 scrub errors for a few weeks now. I
assumed that the third PG was just like the first 2 in that it couldn't be
scrubb
Thanks Patrick! Good to know that it's nothing and will be fixed soon :)
From: Patrick Donnelly
Sent: Wednesday, 25 April 2018 5:17:57 AM
To: Linh Vu
Cc: ceph-users
Subject: Re: [ceph-users] cephfs luminous 12.2.4 - multi-active MDSes with
manual pinning
Hello L
Hello,
On Tue, 24 Apr 2018 11:39:33 +0200 Florian Florensa wrote:
> 2018-04-24 3:24 GMT+02:00 Christian Balzer :
> > Hello,
> >
>
> Hi Christian, and thanks for your detailed answer.
>
> > On Mon, 23 Apr 2018 17:43:03 +0200 Florian Florensa wrote:
> >
> >> Hello everyone,
> >>
> >> I am in