I jumped the gun there; dirlisting is still hanging even though there are
no entries in `ceph osd blacklist ls`.

But when I restart the active MDS and the standby takes over, the hanging
dirlisting finishes, and I get 2 entries in the blacklist with the IP
address of the previously active MDS.
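For reference, this is roughly how I have been inspecting and clearing the
blacklist (the address/nonce below is just a placeholder, not a real entry
from my cluster):

    # list current blacklist entries
    ceph osd blacklist ls
    # remove one entry, using the address exactly as printed by "ls"
    ceph osd blacklist rm 10.0.0.1:6800/1234567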

On Tue, Oct 29, 2019 at 1:03 PM Kári Bertilsson <karibert...@gmail.com>
wrote:

> I am noticing I have many entries in `ceph osd blacklist ls`, and
> dirlisting works again after I removed all of them.
> What can cause this, and is there any way to disable blacklisting?
>
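> As far as I can tell from the docs, blacklisting of timed-out or evicted
> sessions can be toggled with the options below; I have not tried this
> myself, so treat it as a sketch:
>
>     # Nautilus centralized config; both options default to true
>     ceph config set mds mds_session_blacklist_on_timeout false
>     ceph config set mds mds_session_blacklist_on_evict false
>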
> On Tue, Oct 29, 2019 at 11:56 AM Kári Bertilsson <karibert...@gmail.com>
> wrote:
>
>> The file system was created on Luminous, and the problems started after
>> upgrading from Luminous to Nautilus.
>> All CephFS configuration should be pretty much default, except that I
>> enabled snapshots, which were disabled by default on Luminous.
>>
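>> For completeness, snapshots were enabled with the usual flag (<fsname>
>> being a placeholder for the actual file system name):
>>
>>     ceph fs set <fsname> allow_new_snaps true
>>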
>> On Tue, Oct 29, 2019 at 11:48 AM Kári Bertilsson <karibert...@gmail.com>
>> wrote:
>>
>>> All clients are using the kernel client on the Proxmox kernel,
>>> version 5.0.21-3-pve.
>>>
>>> The MDS logs are not showing anything interesting and have very little
>>> in them apart from the restarts; maybe I need to increase the debug level?
>>>
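>>> If it helps, I assume raising the verbosity would look something like
>>> the following (Nautilus centralized config; the levels are just the
>>> commonly suggested ones):
>>>
>>>     # verbose MDS logging plus basic messenger logging
>>>     ceph config set mds debug_mds 20
>>>     ceph config set mds debug_ms 1
>>>     # revert to the defaults afterwards
>>>     ceph config rm mds debug_mds
>>>     ceph config rm mds debug_ms
>>>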
>>> On Tue, Oct 29, 2019 at 6:33 AM Lars Täuber <taeu...@bbaw.de> wrote:
>>>
>>>> Hi!
>>>>
>>>> What kind of client (kernel vs. FUSE) do you use?
>>>> I frequently see the following problem with the most recent
>>>> Ubuntu 18.04.3 kernel 4.15.0-66-generic:
>>>> kernel: [260144.644232] cache_from_obj: Wrong slab cache. inode_cache
>>>> but object is from ceph_inode_info
>>>>
>>>> Other clients with older kernels (e.g. 4.15.0-47-generic) work without
>>>> interruption on the same CephFS.
>>>>
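>>>> If you are not sure which client a machine uses, the mount table tells
>>>> you: kernel mounts show up as type "ceph", ceph-fuse mounts as
>>>> "fuse.ceph-fuse".
>>>>
>>>>     mount | grep ceph
>>>>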
>>>>
>>>> Lars
>>>>
>>>>
>>>> Mon, 28 Oct 2019 22:10:25 +0000
>>>> Kári Bertilsson <karibert...@gmail.com> ==> Patrick Donnelly
>>>> <pdonn...@redhat.com>:
>>>> > Any ideas or tips on how to debug further?
>>>> >
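>>>> > While an "ls" is hanging I can also grab more state if that would
>>>> > help, e.g. (assuming the MDS admin socket is reachable and debugfs is
>>>> > mounted on the client):
>>>> >
>>>> >     # on the active MDS host: in-flight operations and client sessions
>>>> >     ceph daemon mds.<name> dump_ops_in_flight
>>>> >     ceph daemon mds.<name> session ls
>>>> >     # on a kernel client: requests still waiting on the MDS
>>>> >     cat /sys/kernel/debug/ceph/*/mdsc
>>>> >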
>>>> > On Mon, Oct 28, 2019 at 7:17 PM Kári Bertilsson <karibert...@gmail.com>
>>>> > wrote:
>>>> >
>>>> > > Hello Patrick,
>>>> > >
>>>> > > Here is the output from those commands:
>>>> > > https://pastebin.com/yUmuQuYj
>>>> > >
>>>> > > 5 clients have the file system mounted, but only 2 of them have most
>>>> > > of the activity.
>>>> > >
>>>> > >
>>>> > >
>>>> > > On Mon, Oct 28, 2019 at 6:54 PM Patrick Donnelly <pdonn...@redhat.com>
>>>> > > wrote:
>>>> > >
>>>> > >> Hello Kári,
>>>> > >>
>>>> > >> On Mon, Oct 28, 2019 at 11:14 AM Kári Bertilsson
>>>> > >> <karibert...@gmail.com> wrote:
>>>> > >> > This seems to happen mostly when listing folders containing 10k+
>>>> > >> > folders.
>>>> > >> >
>>>> > >> > The dirlisting hangs indefinitely or until I restart the active
>>>> > >> > MDS, and then the hanging "ls" command will finish running.
>>>> > >> >
>>>> > >> > Every time, restarting the active MDS fixes the problem for a
>>>> > >> > while.
>>>> > >>
>>>> > >> Please share details about your cluster: `fs dump`, `ceph status`,
>>>> > >> and `ceph versions`. How many clients are using the file system?
>>>> > >>
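>>>> > >> I.e. something along these lines; the session listing is one way to
>>>> > >> see how many clients are connected (<name> being the active MDS):
>>>> > >>
>>>> > >>     ceph fs dump
>>>> > >>     ceph status
>>>> > >>     ceph versions
>>>> > >>     ceph daemon mds.<name> session ls
>>>> > >>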
>>>> > >> --
>>>> > >> Patrick Donnelly, Ph.D.
>>>> > >> He / Him / His
>>>> > >> Senior Software Engineer
>>>> > >> Red Hat Sunnyvale, CA
>>>> > >> GPG: 19F28A586F808C2402351B93C3301A3E258DD79D
>>>> > >>
>>>> > >>
>>>>
>>>>
>>>> --
>>>>                             Informationstechnologie
>>>> Berlin-Brandenburgische Akademie der Wissenschaften
>>>> Jägerstraße 22-23                      10117 Berlin
>>>> Tel.: +49 30 20370-352           http://www.bbaw.de
>>>>
>>>
_______________________________________________
ceph-users mailing list -- ceph-users@ceph.io
To unsubscribe send an email to ceph-users-le...@ceph.io
