Hi,
on upgrading from 12.2.4 to 12.2.5, the balancer module broke (the mgr
crashes a few minutes after the service starts).
The only fix was to disable the balancer (the service has been running
fine since).
Is this fixed in 12.2.7?
I was unable to locate the bug in the bug tracker.
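For reference, turning it off looked roughly like this (from memory, so
treat it as a sketch rather than the exact commands we ran):

  # stop automatic balancing, then unload the mgr balancer module
  ceph balancer off
  ceph mgr module disable balancer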
Kevin
2018-07-17 18:28 GMT+02:00 Abhishek
Hi all,
This is just to report that I upgraded smoothly from 12.2.6 to
12.2.7 (bluestore only, bitten by the "damaged mds" consequence of the
bad checksum on mds journal 0x200).
This was a really bad problem for CephFS. Fortunately, that cluster was
not in production yet (that's why I didn't as
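For anyone hitting the same "damaged" state: once the journal itself has
been repaired per the CephFS disaster-recovery docs, clearing the rank
looked like this here (a sketch, assuming a single filesystem with rank 0
marked damaged):

  # check the filesystem and rank state
  ceph fs status
  # tell the monitors the rank is repaired so an MDS can take it again
  ceph mds repaired 0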
On Wed, 18 Jul 2018, Linh Vu wrote:
> Thanks for all your hard work in putting out the fixes so quickly! :)
>
> We have a cluster on 12.2.5 with Bluestore and EC pool but for CephFS,
> not RGW. In the release notes, it says RGW is a risk especially the
> garbage collection, and the recommendatio
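One way to see what is actually exposed is to list the pools with their
type and application tag (assuming Luminous application tags are set; the
pool name below is just an example):

  # shows which pools are erasure coded and with what profile
  ceph osd pool ls detail
  # shows the application (cephfs, rgw, rbd) a pool is tagged with
  ceph osd pool application get cephfs_data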
---
> *From:* ceph-users on behalf of Sage Weil
> *Sent:* Wednesday, 18 July 2018 4:42:41 AM
> *To:* Stefan Kooman
> *Cc:* ceph-annou...@ceph.com; ceph-de...@vger.kernel.org;
> ceph-maintain...@ceph.com; ceph-us...@ceph.com
> *Subject:* Re: [ceph-users] v12.2.7 Luminous released
FYI,
I have updated some osds from 12.2.6 that were suffering from the CRC
error, and 12.2.7 fixed the issue!
I installed some new osds on 12/07 without being aware of the issue,
and in my small clusters, I only noticed the problem when I was trying
to copy some RBD images to another po
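For anyone else checking their clusters, the bad reads show up in the
OSD logs, so something like this finds them (the exact log text may vary
by version, and the path assumes a default install):

  # look for the mismatched full-object read CRCs reported by the OSDs
  grep "full-object read crc" /var/log/ceph/ceph-osd.*.log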
On Tue, 17 Jul 2018, Stefan Kooman wrote:
> Quoting Abhishek Lekshmanan (abhis...@suse.com):
>
> > *NOTE* The v12.2.5 release has a potential data corruption issue with
> > erasure coded pools. If you ran v12.2.5 with erasure coding, please see
^^^
> > below.
>
> < snip >
>
>
Quoting Abhishek Lekshmanan (abhis...@suse.com):
> *NOTE* The v12.2.5 release has a potential data corruption issue with
> erasure coded pools. If you ran v12.2.5 with erasure coding, please see
> below.
< snip >
> Upgrading from v12.2.5 or v12.2.6
> ---------------------------------
>
> If you
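For clusters that did not run v12.2.5/v12.2.6 with erasure coding, the
generic rolling-upgrade steps apply (a sketch, not the special
instructions the snipped section goes on to give):

  # keep CRUSH from rebalancing while daemons restart
  ceph osd set noout
  # on each host: upgrade the packages, then restart the Ceph daemons
  systemctl restart ceph.target
  # once every daemon is back up on 12.2.7
  ceph osd unset noout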
This is the seventh bugfix release of the Luminous v12.2.x long-term
stable release series. This release contains several fixes for
regressions in the v12.2.6 and v12.2.5 releases. We recommend that
all users upgrade.
*NOTE* The v12.2.6 release has serious known regressions, while 12.2.6
wasn't for
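After upgrading, a quick way to confirm that every daemon is actually
running 12.2.7 (the command is available since Luminous):

  # per-daemon breakdown of the versions currently running
  ceph versions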