To Zheng Yan:
I'm wondering whether 'session reset' implies the below?
> On 18.10.2018, at 02:18, Alfredo Daniel Rezinovsky wrote:
>
> rados -p cephfs_metadata rm mds0_openfiles.0
In my case I was able to bring up the fs successfully after resetting
sessions+journal and scanning links using cephfs-data-scan tool.
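For reference, "resetting sessions+journal and scanning links" corresponds roughly to
the disaster-recovery sequence sketched below. This is only a rough sketch, assuming a
single filesystem named cephfs with rank 0, the MDS daemons already stopped, and a backup
of the metadata pool; a journal reset discards un-flushed metadata updates, and the exact
tool syntax varies between releases, so check the CephFS disaster-recovery docs before
running any of it:

  # assumes fs name "cephfs", rank 0, MDS stopped
  cephfs-table-tool cephfs:all reset session
  cephfs-journal-tool --rank=cephfs:0 journal reset
  cephfs-data-scan scan_links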
> On 18.10.2018, at 02:18, Alfredo Daniel Rezinovsky wrote:
>
> [...]
Didn't work for me. Downgraded and the mds won't start.
I also needed to:
rados -p cephfs_metadata rm mds0_openfiles.0
or else the mds daemon crashed.
The crash info didn't show any useful information (for me). I couldn't
have figured this out without Zheng Yan's help.
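For anyone hitting the same crash: the open file table objects live in the metadata
pool and are named per MDS rank (mds0_openfiles.0, mds0_openfiles.1, mds1_openfiles.0,
and so on). A quick way to see which ones exist before deleting anything, assuming the
default pool name cephfs_metadata used above:

  rados -p cephfs_metadata ls | grep openfiles
  # then, as above, remove the object for the affected rank
  rados -p cephfs_metadata rm mds0_openfiles.0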
On 17/10/18 17:36, Paul Emmerich wrote:
> [...]
The problem is caused by an unintentional change of the on-disk format of the MDS
purge queue. If you have upgraded and didn't hit the bug -- that probably means your
MDS daemon was deployed after the upgrade, otherwise it wouldn't start.
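A quick way to check whether a given cluster is exposed is to confirm which version the
MDS daemons are actually running and whether any rank is currently damaged; a short
sketch using standard status commands, nothing specific to this bug:

  ceph versions      # per-release counts of running daemons, including mds
  ceph fs status     # per-rank MDS state
  ceph -s            # a damaged filesystem shows up as "damaged" here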
> On 18.10.2018, at 02:08, Виталий Филиппов wrote:
>
> [...]
I mean, does every upgraded installation hit this bug, or do some upgrade
without any problem?
> The problem occurs after upgrade, fresh 13.2.2 installs are not affected.
--
With best regards,
Vitaliy Filippov
The problem occurs after upgrade, fresh 13.2.2 installs are not affected.
> On 17.10.2018, at 23:42, Виталий Филиппов wrote:
>
> By the way, does it happen with all installations or only under some
> conditions?
>
>> [...]
By the way, does it happen with all installations or only under some
conditions?
> CephFS will be offline and show up as "damaged" in ceph -s
> The fix is to downgrade to 13.2.1 and issue a "ceph fs repaired "
> command.
> Paul
--
With best regards,
Vitaliy Filippov
CephFS will be offline and show up as "damaged" in ceph -s
The fix is to downgrade to 13.2.1 and issue a "ceph fs repaired " command.
Paul
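Spelled out as commands, the recovery described above is roughly the following; a
sketch only, assuming one filesystem named cephfs with rank 0 damaged and 13.2.1
packages still available (Zheng gives the full form of the repaired command elsewhere
in the thread):

  # 1. downgrade ceph-mds to 13.2.1 on the MDS hosts and restart the daemons
  # 2. clear the damaged flag for the affected rank:
  ceph mds repaired cephfs:0
  # 3. watch the rank come back:
  ceph -s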
On Wed, 17 Oct 2018 at 21:53, Michael Sudnick wrote:
>
> [...]
What exactly are the symptoms of the problem? I use cephfs with 13.2.2 with
two active MDS daemons and at least on the surface everything looks fine.
Is there anything I should avoid doing until 13.2.3?
On Wed, Oct 17, 2018, 14:10 Patrick Donnelly wrote:
> [...]
On Wed, Oct 17, 2018 at 11:05 AM Alexandre DERUMIER wrote:
>
> Hi,
>
> Is it possible to have more infos or announce about this problem ?
>
> I'm currently waiting to migrate from luminous to mimic (I need the new quota
> feature for cephfs)
>
> is it safe to upgrade to 13.2.2 ?
>
> or better to wait [...]
----- Original Message -----
From: "Patrick Donnelly"
To: "Zheng Yan"
Cc: "ceph-devel" , "ceph-users" , ceph-annou...@lists.ceph.com
Sent: Monday, 8 October 2018 18:50:59
Subject: Re: [ceph-users] Don't upgrade to 13.2.2 if you use cephfs
+ceph-announce
[...]
+ceph-announce
On Sun, Oct 7, 2018 at 7:30 PM Yan, Zheng wrote:
> There is a bug in v13.2.2 mds, which causes decoding purge queue to
> fail. If mds is already in damaged state, please downgrade mds to
> 13.2.1, then run 'ceph mds repaired fs_name:damaged_rank' .
>
> Sorry for all the trouble I caused.
This would be a question I had since Zheng posted the problem. I
recently purged a brand new cluster because I needed to change the default
WAL/DB settings on all OSDs in a collocated scenario. I decided to jump
to 13.2.2 rather than upgrade from 13.2.1. Now I wonder if I am still
in trouble.
Does this only affect upgraded CephFS deployments? A fresh 13.2.2
should work fine if I'm interpreting this bug correctly?
Paul
On Mon, 8 Oct 2018 at 11:53, Daniel Carrasco wrote:
> [...]
On Mon, 8 Oct 2018 at 5:44, Yan, Zheng wrote:
> [...]
On Mon, Oct 8, 2018 at 11:34 AM Daniel Carrasco wrote:
>
> I've got several problems on 12.2.8 too. All my standby MDS uses a lot of
> memory (while active uses normal memory), and I'm receiving a lot of slow MDS
> messages (causing the webpage to freeze and fail until MDS are restarted)...
> [...]
I've got several problems on 12.2.8 too. All my standby MDSes use a lot of
memory (while the active one uses normal memory), and I'm receiving a lot of slow
MDS messages (causing the webpage to freeze and fail until the MDSes are
restarted)... Finally I had to copy the entire site to DRBD and use NFS to
solve all [...]
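The standby memory growth described here is a separate issue from the 13.2.2 purge
queue bug. On 12.2.x, one hedged starting point (an assumption about a likely cause,
not a confirmed diagnosis) is to compare the configured MDS cache limit with what the
daemon reports over its admin socket; "mds.a" below is a placeholder for the real
daemon name:

  ceph daemon mds.a config get mds_cache_memory_limit
  ceph daemon mds.a cache status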
How is this not an emergency announcement? Also, I wonder if I can
downgrade at all? I am using Ceph with Docker, deployed with
ceph-ansible. I wonder if I should push a downgrade or basically wait for
the fix. I believe a fix needs to be provided.
Thank you,
On 10/7/2018 9:30 PM, Yan, Zheng wrote:
> [...]
There is a bug in the v13.2.2 mds which causes decoding of the purge queue to
fail. If the mds is already in the damaged state, please downgrade the mds to
13.2.1, then run 'ceph mds repaired fs_name:damaged_rank'.
Sorry for all the trouble I caused.
Yan, Zheng
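After downgrading, the purge queue itself can be inspected with cephfs-journal-tool,
which can operate on either the MDS log or the purge queue; a hedged sketch, assuming
fs name cephfs, rank 0, and a stopped MDS (option spelling may differ between releases):

  cephfs-journal-tool --rank=cephfs:0 --journal=purge_queue journal inspect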