Hi all,
Just wondering if the original issue has been resolved. I have the same
problems with inconsistent nfs and samba directory listings. I'm running
Hammer.
Is it a confirmed seekdir bug in the kernel client?
On 01/14/2016 04:05 AM, Yan, Zheng wrote:
> On Thu, Jan 14, 2016 at 3:37 AM, Mike Ca
Thanks for this thread. We just made the same mistake (rmfailed) on our
hammer cluster which broke it similarly. The addfailed patch worked
for us too.
-- Dan
On Fri, Jan 15, 2016 at 6:30 AM, Mike Carlson wrote:
> Hey ceph-users,
>
> I wanted to follow up: Zheng's patch did the trick. We re-added
Hey ceph-users,
I wanted to follow up: Zheng's patch did the trick. We re-added the removed
mds, and it all came back. We're syncing our data off to a backup server.
Thanks for all of the help, Ceph has a great community to work with!
Mike C
On Thu, Jan 14, 2016 at 4:46 PM, Yan, Zheng wrote:
Do I apply this against the v9.2.0 git tag?
On Thu, Jan 14, 2016 at 4:48 PM, Dyweni - Ceph-Users <
6exbab4fy...@dyweni.com> wrote:
> Your patch lists the command as "addfailed" but the email lists the
> command as "add failed". (Note the space).
>
> On 2016-01-14 18:46, Yan, Zheng wrote:
Your patch lists the command as "addfailed" but the email lists the
command as "add failed". (Note the space).
On 2016-01-14 18:46, Yan, Zheng wrote:
Here is a patch for v9.2.0. After installing the modified version of
ceph-mon, run “ceph mds add failed 1”
On Jan 15, 2016, at 08:20, Mike
Here is a patch for v9.2.0. After installing the modified version of ceph-mon, run
“ceph mds add failed 1”
mds_addfailed.patch
Description: Binary data
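For anyone wanting to reproduce the build: applying a patch like this against the
v9.2.0 tag and rebuilding only ceph-mon would look roughly like the sketch below.
The patch path and the per-binary make target are assumptions, and 9.2.0 used the
autotools build (you may also need ./install-deps.sh for build dependencies):

$ git clone --recursive https://github.com/ceph/ceph.git && cd ceph
$ git checkout v9.2.0
$ git submodule update --init --recursive
$ git apply ../mds_addfailed.patch      # the attachment above, saved locally (path assumed)
$ ./autogen.sh && ./configure
$ make -C src ceph-mon                  # build just the monitor binary

The rebuilt ceph-mon then has to be installed on the monitor hosts and the monitors
restarted before the new command is accepted.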
> On Jan 15, 2016, at 08:20, Mike Carlson wrote:
>
> okay, that sounds really good.
>
> Would it help if you had access to our cluster?
>
>
On Fri, Jan 15, 2016 at 12:23 AM, Sage Weil wrote:
> On Fri, 15 Jan 2016, Yan, Zheng wrote:
>> > On Jan 15, 2016, at 08:16, Mike Carlson wrote:
>> >
>> > Did I just lose all of my data?
>> >
>> > If we were able to export the journal, could we create a brand new mds out
>> > of that and retriev
On Fri, 15 Jan 2016, Yan, Zheng wrote:
> > On Jan 15, 2016, at 08:16, Mike Carlson wrote:
> >
> > Did I just lose all of my data?
> >
> > If we were able to export the journal, could we create a brand new mds out
> > of that and retrieve our data?
>
> No. It's easy to fix, but you need to re
okay, that sounds really good.
Would it help if you had access to our cluster?
On Thu, Jan 14, 2016 at 4:19 PM, Yan, Zheng wrote:
>
> > On Jan 15, 2016, at 08:16, Mike Carlson wrote:
> >
> > Did I just lose all of my data?
> >
> > If we were able to export the journal, could we create a brand
> On Jan 15, 2016, at 08:16, Mike Carlson wrote:
>
> Did I just lose all of my data?
>
> If we were able to export the journal, could we create a brand new mds out of
> that and retrieve our data?
No. It's easy to fix, but you need to re-compile ceph-mon from source code.
I'm writing the patch.
Did I just lose all of my data?
If we were able to export the journal, could we create a brand new mds out
of that and retrieve our data?
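(For anyone in a similar spot: the journal export mentioned here can be done with
cephfs-journal-tool, as in the sketch below. The output filename is arbitrary and the
tool needs to reach the cluster from wherever it runs.)

$ cephfs-journal-tool journal export backup.bin   # dump the MDS journal to a local file
$ cephfs-journal-tool journal inspect             # sanity-check the journal for damage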
On Thu, Jan 14, 2016 at 4:15 PM, Yan, Zheng wrote:
>
> > On Jan 15, 2016, at 08:01, Gregory Farnum wrote:
> >
> > On Thu, Jan 14, 2016 at 3:46 PM, Mike Car
> On Jan 15, 2016, at 08:01, Gregory Farnum wrote:
>
> On Thu, Jan 14, 2016 at 3:46 PM, Mike Carlson wrote:
>> Hey Zheng,
>>
>> I've been in the #ceph irc channel all day about this.
>>
>> We did that, we set max_mds back to 1, but, instead of stopping mds 1, we
>> did a "ceph mds rmfailed 1"
On Thu, Jan 14, 2016 at 3:46 PM, Mike Carlson wrote:
> Hey Zheng,
>
> I've been in the #ceph irc channel all day about this.
>
> We did that, we set max_mds back to 1, but, instead of stopping mds 1, we
> did a "ceph mds rmfailed 1". Running ceph mds stop 1 produces:
>
> # ceph mds stop 1
> Error
Hey Zheng,
I've been in the #ceph irc channel all day about this.
We did that, we set max_mds back to 1, but, instead of stopping mds 1, we
did a "ceph mds rmfailed 1". Running ceph mds stop 1 produces:
# ceph mds stop 1
Error EEXIST: mds.1 not active (???)
Our mds is in a state of resolve, and w
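For reference, the usual sequence for dropping back from two active MDS ranks to one
on releases of that era was roughly the sketch below (command forms as they existed
around hammer/infernalis; rmfailed is not part of it):

$ ceph mds set_max_mds 1   # lower the number of active ranks (older form of max_mds)
$ ceph mds stop 1          # cleanly deactivate rank 1 so rank 0 takes over its state

rmfailed only edits the MDS map (it removes a rank from the failed set) and migrates
nothing, which appears to be why the surviving MDS ended up stuck in resolve here and
why re-adding the failed rank was the eventual fix.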
On Fri, Jan 15, 2016 at 3:28 AM, Mike Carlson wrote:
> Thank you for the reply Zheng
>
> We tried setting mds bal frag to true, but the end result was less than
> desirable. All nfs and smb clients could no longer browse the share; they
> would hang on a directory with anything more than a few hundred
Thank you for the reply, Zheng.
We tried setting mds bal frag to true, but the end result was less than
desirable. All nfs and smb clients could no longer browse the share; they
would hang on a directory with anything more than a few hundred files.
We then tried to back out the active/active mds change
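For later readers: enabling directory fragmentation at that time meant turning on the
mds bal frag option for the MDS daemons, for example via ceph.conf as sketched below
(how it was actually set on this cluster isn't shown in the thread), typically
followed by restarting the MDS daemons:

[mds]
mds bal frag = true    # allow the MDS to fragment large directories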
On Thu, Jan 14, 2016 at 3:37 AM, Mike Carlson wrote:
> Hey Greg,
>
> The inconsistent view is only over nfs/smb on top of our /ceph mount.
>
> When I look directly on the /ceph mount (which is using the cephfs kernel
> module), everything looks fine
>
> It is possible that this issue just went unn
Hey Greg,
The inconsistent view is only over nfs/smb on top of our /ceph mount.
When I look directly on the /ceph mount (which is using the cephfs kernel
module), everything looks fine.
It is possible that this issue just went unnoticed, and it only being an
infernalis problem is just a red herring.
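A quick way to confirm that split, in the spirit of what is described above: list the
same directory through the kernel mount and through the NFS/Samba re-export and
compare the counts (the paths below are the ones mentioned elsewhere in the thread
and are only illustrative):

$ ls /ceph/BD/xmlExport/ | wc -l      # directly on the cephfs kernel mount
$ ls /lts-mon/BD/xmlExport/ | wc -l   # via the nfs/smb re-export of the same tree

If only the re-exported path comes up short, the problem sits in the re-export layer
on top of cephfs rather than in the filesystem's data itself.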
On Wed, Jan 13, 2016 at 11:24 AM, Mike Carlson wrote:
> Hello.
>
> Since we upgraded to Infernalis last, we have noticed a severe problem with
> cephfs when we have it shared over Samba and NFS
>
> Directory listings are showing an inconsistent view of the files:
>
>
> $ ls /lts-mon/BD/xmlExport/
Hello.
Since we upgraded to Infernalis last, we have noticed a severe problem with
cephfs when we have it shared over Samba and NFS.
Directory listings are showing an inconsistent view of the files:
$ ls /lts-mon/BD/xmlExport/ | wc -l
100
$ sudo umount /lts-mon
$ sudo mount /lts-mon
$ ls /l