Not sure if this is related to the dirlisting issue, since the deep-scrubs
have always been way behind schedule.
But let's see if clearing this warning has any effect. It seems I can
only deep-scrub 5 PGs at a time, though. How can I increase this?
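For reference: the scrub concurrency is normally capped per OSD by osd_max_scrubs (default 1), so one hedged way to raise it on Nautilus, reusing the "first" cluster name from the commands quoted below, would be:
$ ceph --cluster first config set osd osd_max_scrubs 2   # assumption: bump per-OSD scrub slots from the default of 1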
On Wed, Oct 30, 2019 at 6:53 AM Lars Täuber wrote:
Hi Kári,
what about this:
health: HEALTH_WARN
854 pgs not deep-scrubbed in time
maybe you should run
$ ceph --cluster first pg scrub XX.YY
or
$ ceph --cluster first pg deep-scrub XX.YY
for all the PGs.
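If you want to kick off a deep-scrub of every PG rather than one at a time, a small shell loop works; this is only a sketch and assumes the PG ID is the first column of the plain-text `ceph pg dump pgs` output:
for pg in $(ceph --cluster first pg dump pgs 2>/dev/null | awk '$1 ~ /^[0-9]+\.[0-9a-f]+$/ {print $1}'); do
    ceph --cluster first pg deep-scrub "$pg"    # issue one deep-scrub request per PG
done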
Tue, 29 Oct 2019 22:43:28 +
Kári Bertilsson ==> Nathan Fish :
> I am encounter
I am encountering the dirlist hanging issue on multiple clients and none of
them are Ubuntu.
Debian buster running kernel 4.19.0-2-amd64. This one was working fine
until after Ceph was upgraded to Nautilus.
Proxmox running kernels 5.0.21-1-pve and 5.0.18-1-pve
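To cross-check which kernels the affected clients run from the cluster side, the MDS session list includes each client's reported metadata (hostname and, for kernel clients, kernel_version); a hedged sketch, with <name> standing in for the active MDS daemon name:
$ ceph tell mds.<name> session ls   # inspect client_metadata for each session; exact fields may vary by client type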
On Tue, Oct 29, 2019 at 9:04 PM Nath
Ubuntu's 4.15.0-66 has this bug, yes. -65 is safe and -67 will have the fix.
On Tue, Oct 29, 2019 at 4:54 PM Patrick Donnelly wrote:
>
> On Mon, Oct 28, 2019 at 11:33 PM Lars Täuber wrote:
> >
> > Hi!
> >
> > What kind of client (kernel vs. FUSE) do you use?
> > I experience a lot of the followi
On Mon, Oct 28, 2019 at 11:33 PM Lars Täuber wrote:
>
> Hi!
>
> What kind of client (kernel vs. FUSE) do you use?
> I experience a lot of the following problems with the most recent ubuntu
> 18.04.3 kernel 4.15.0-66-generic :
> kernel: [260144.644232] cache_from_obj: Wrong slab cache. inode_cache
I jumped the gun; dirlisting is still hanging with no entries
in `ceph osd blacklist ls`.
But when I restart the active MDS and the standby goes active, dirlisting
finishes and I get 2 entries in the blacklist with the IP address of the
previously active MDS.
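For reference, the failover described above does not require a full daemon restart; marking the active MDS as failed makes a standby take over. A hedged sketch, assuming rank 0 is the active rank:
$ ceph mds fail 0   # mark rank 0 failed so a standby is promoted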
On Tue, Oct 29, 2019 at 1:03 P
I noticed I have many entries in `ceph osd blacklist ls`, and dirlisting
works again after I removed all of the entries.
What can cause this, and is there any way to disable blacklisting?
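If it helps while debugging: blacklist entries can be listed and removed individually, and the MDS options that (as far as I know) control automatic blacklisting of evicted clients can be turned off. A hedged sketch; the address below is a placeholder taken from `ceph osd blacklist ls`:
$ ceph osd blacklist ls
$ ceph osd blacklist rm <addr:port/nonce>
$ ceph config set mds mds_session_blacklist_on_timeout false   # assumption: stop blacklisting clients evicted on timeout
$ ceph config set mds mds_session_blacklist_on_evict false     # assumption: stop blacklisting manually evicted clients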
On Tue, Oct 29, 2019 at 11:56 AM Kári Bertilsson
wrote:
> The file system was created on luminous and the proble
The file system was created on Luminous and the problems started after
upgrading from Luminous to Nautilus.
All CephFS configuration should be pretty much default, except that I enabled
snapshots, which were disabled by default on Luminous.
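For reference, the snapshot switch mentioned above is the per-filesystem allow_new_snaps flag; a hedged sketch, with "cephfs" as a placeholder for the actual file system name:
$ ceph fs set cephfs allow_new_snaps true   # enable snapshots on this file system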
On Tue, Oct 29, 2019 at 11:48 AM Kári Bertilsson
wrote:
> All c
All clients are using the kernel client on Proxmox kernel
version 5.0.21-3-pve.
The MDS logs are not showing anything interesting and have very little in
them except for the restarts; maybe I need to increase the debug level?
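A common way to get more detail out of the MDS is to bump its debug levels temporarily; a hedged sketch (remember to lower them again afterwards, the logs grow quickly):
$ ceph tell mds.* injectargs '--debug_mds 20 --debug_ms 1'   # verbose MDS and messenger logging
$ ceph tell mds.* injectargs '--debug_mds 1 --debug_ms 0'    # restore quieter levels when done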
On Tue, Oct 29, 2019 at 6:33 AM Lars Täuber wrote:
> Hi!
>
> What kind o
Hi!
What kind of client (kernel vs. FUSE) do you use?
I experience a lot of the following problems with the most recent Ubuntu
18.04.3 kernel 4.15.0-66-generic:
kernel: [260144.644232] cache_from_obj: Wrong slab cache. inode_cache but
object is from ceph_inode_info
Other clients with older ker
On Mon, Oct 28, 2019 at 12:17 PM Kári Bertilsson wrote:
>
> Hello Patrick,
>
> Here is output from those commands
> https://pastebin.com/yUmuQuYj
>
> 5 clients have the file system mounted, but only 2 of them have most of the
> activity.
Have you modified any CephFS configurations?
A copy of th
Any ideas or tips on how to debug further?
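One thing that sometimes helps with hanging dirlistings is to look at what the MDS thinks it is waiting on; these admin-socket dumps are run on the MDS host, with <id> as a placeholder for the daemon id:
$ ceph daemon mds.<id> dump_ops_in_flight   # requests the MDS is currently processing
$ ceph daemon mds.<id> dump_blocked_ops     # requests stuck waiting, e.g. on client capabilities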
On Mon, Oct 28, 2019 at 7:17 PM Kári Bertilsson
wrote:
> Hello Patrick,
>
> Here is output from those commands
> https://pastebin.com/yUmuQuYj
>
> 5 clients have the file system mounted, but only 2 of them have most of
> the activity.
>
>
>
> On Mon, O
Hello Patrick,
Here is output from those commands
https://pastebin.com/yUmuQuYj
5 clients have the file system mounted, but only 2 of them have most of the
activity.
On Mon, Oct 28, 2019 at 6:54 PM Patrick Donnelly
wrote:
> Hello Kári,
>
> On Mon, Oct 28, 2019 at 11:14 AM Kári Bertilsson
>
Hello Kári,
On Mon, Oct 28, 2019 at 11:14 AM Kári Bertilsson wrote:
> This seems to happen mostly when listing folders containing 10k+ folders.
>
> The dirlisting hangs indefinitely or until I restart the active MDS, and then
> the hanging "ls" command will finish running.
>
> Every time restarti