Hi,

> Did you edit the code before trying Luminous?

Yes, I'm still on Jewel.

> I also noticed from your original mail that it appears you're using multiple
> active metadata servers? If so, that's not stable in Jewel. You may have tripped
> on one of many bugs fixed in Luminous for that configuration.
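For reference, dropping back to a single active MDS would look roughly like the
following; the filesystem name "cephfs" is only a placeholder and the exact
commands vary between releases:

  # check how many active MDS ranks the filesystem is configured for
  ceph fs get cephfs | grep max_mds

  # reduce to a single active rank; on Jewel the surplus rank then has
  # to be stopped explicitly
  ceph fs set cephfs max_mds 1
  ceph mds deactivate 1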
On Thu, Sep 28, 2017 at 5:16 AM, Micha Krause wrote:
> Hi,
>
> I had a chance to catch John Spray at the Ceph Day, and he suggested that I
> try to reproduce this bug in Luminous.
Hi,
I had a chance to catch John Spray at the Ceph Day, and he suggested that I try
to reproduce this bug in Luminous.
To fix my immediate problem we discussed 2 ideas:
1. Manually edit the metadata; unfortunately, I was not able to find any
information on how the metadata is structured :-(
A serious problem of the MDS, I think.
Is anyone going to fix it?
Regards.
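For what it's worth, the directory metadata (including the stray directories)
is stored as objects in the metadata pool, with one omap key per dentry, so it
can at least be inspected read-only with rados. The pool name "cephfs_metadata"
is a placeholder, and the object name below assumes the usual stray numbering
(0x600 + rank * 10 + index, so stray7 of mds.0 would be 607.00000000); please
double-check both against your own cluster before touching anything:

  # list the dentries (omap keys) of mds.0's stray7 directory object
  rados -p cephfs_metadata listomapkeys 607.00000000

  # dump the raw value of the dentry mentioned in the log further down
  rados -p cephfs_metadata getomapval 607.00000000 17aa2f6_head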
On Thu, Sep 14, 2017 at 19:55 Micha Krause wrote:
> Hi,
>
> looking at the code, and running with debug mds = 10 it looks like I have
> an inode with negative link count.
>
> -2> 2017-09-14 13:28:39.249399 7f3919616700 10 mds.0.c
Hi,
looking at the code, and running with debug mds = 10 it looks like I have an
inode with negative link count.
-2> 2017-09-14 13:28:39.249399 7f3919616700 10 mds.0.cache.strays
eval_stray [dentry #100/stray7/17aa2f6 [2,head] auth (dversion lock) pv=0
v=23058565 inode=0x7f394b7e0730
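In case it is useful to anyone trying to reproduce this, the MDS debug level
can also be raised at runtime instead of editing ceph.conf; "mds.0" below is
just the name of the affected daemon:

  # bump the live debug level on the running daemon
  ceph tell mds.0 injectargs '--debug_mds 10'

  # or persistently, in ceph.conf on the MDS host:
  # [mds]
  #     debug mds = 10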
Hi,
I was deleting a lot of hard-linked files when "something" happened.
Now my MDS starts for a few seconds and writes a lot of these lines:
-43> 2017-09-06 13:51:43.396588 7f9047b21700 10 log_client will send
2017-09-06 13:51:40.531563 mds.0 10.210.32.12:6802/2735447218 4963 : cluster [ERR
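Since the excerpt above is cut off, the full [ERR] lines should also be
available elsewhere: they are sent to the cluster log as well as the local MDS
log. Assuming the default log locations, something like this will show them in
full:

  # on a monitor host: the cluster log collects the mds.0 ERR messages
  grep -F 'cluster [ERR' /var/log/ceph/ceph.log | tail -n 50

  # or on the MDS host itself
  grep -F '[ERR' /var/log/ceph/ceph-mds.*.log | tail -n 50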