[ceph-users] Re: MDS: cache pressure warnings with Ganesha exports

2020-04-13 Thread Stolte, Felix
Hi Jeff, thank you for the hint. I set Entries_HWMark = 100 in the MDCACHE section of ganesha.conf and upgraded ganesha to 3.2 this weekend. Cache pressure warnings still keep occurring, but not as frequently as before. Is there another suggestion I missed? Regards Felix --
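For reference, a minimal sketch of how that setting sits in ganesha.conf, assuming the standard Ganesha block syntax and the value quoted above:

    MDCACHE {
        # upper bound on cached directory entries/handles; value taken from the message above
        Entries_HWMark = 100;
    }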

[ceph-users] How to use ceph-volume raw ?

2020-04-13 Thread hoannv46
My cluster is on version 15.2.1. I use ceph-volume raw to add a new OSD, but it fails with an error: ceph-volume raw prepare --bluestore --data /dev/vdb Running command: /usr/bin/ceph-authtool --gen-print-key Running co
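For context, a hedged sketch of the raw workflow on 15.2.x (device name taken from the message; the raw subcommand is still new in Octopus, so check --help on your build):

    ceph-volume raw prepare --bluestore --data /dev/vdb   # prepare the block device as a BlueStore OSD
    ceph-volume raw list                                  # JSON listing of raw-prepared OSDs on this host
    # activation is a separate step; see 'ceph-volume raw activate --help' for the flags on your release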

[ceph-users] Re: Nautilus upgrade causes spike in MDS latency

2020-04-13 Thread Josh Haft
On Mon, Apr 13, 2020 at 4:14 PM Gregory Farnum wrote: > > On Mon, Apr 13, 2020 at 1:33 PM Josh Haft wrote: > > > > Hi, > > > > I upgraded from 13.2.5 to 14.2.6 last week and am now seeing > > significantly higher latency on various MDS operations. For example, > > the 2min rate of ceph_mds_server

[ceph-users] Re: Nautilus upgrade causes spike in MDS latency

2020-04-13 Thread Gregory Farnum
On Mon, Apr 13, 2020 at 1:33 PM Josh Haft wrote: > > Hi, > > I upgraded from 13.2.5 to 14.2.6 last week and am now seeing > significantly higher latency on various MDS operations. For example, > the 2min rate of ceph_mds_server_req_create_latency_sum / > ceph_mds_server_req_create_latency_count fo

[ceph-users] Nautilus upgrade causes spike in MDS latency

2020-04-13 Thread Josh Haft
Hi, I upgraded from 13.2.5 to 14.2.6 last week and am now seeing significantly higher latency on various MDS operations. For example, the 2min rate of ceph_mds_server_req_create_latency_sum / ceph_mds_server_req_create_latency_count for an 8hr window last Monday prior to the upgrade was an average
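For context, the latency figure described here is the ratio of the two exporter counters named above; as a PromQL sketch using the 2m rate window from the message:

    rate(ceph_mds_server_req_create_latency_sum[2m])
      / rate(ceph_mds_server_req_create_latency_count[2m])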

[ceph-users] Re: [Octopus] OSD overloading

2020-04-13 Thread Xiaoxi Chen
I am not sure if any change in Octopus makes this worse, but on Nautilus we are also seeing that the RocksDB overhead during snaptrim is huge. We work around it by throttling the snaptrim speed to a minimum as well as throttling deep-scrub; see https://www.spinics.net/lists/dev-ceph/msg01277.html for details
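The throttling described above maps to OSD options along these lines (a sketch; the option names exist in Nautilus, but the values are illustrative only):

    ceph config set osd osd_snap_trim_sleep 3               # sleep between snap trim operations
    ceph config set osd osd_pg_max_concurrent_snap_trims 1  # limit concurrent snap trims per PG
    ceph config set osd osd_max_scrubs 1                    # at most one scrub per OSD
    ceph config set osd osd_scrub_sleep 0.5                 # pause between scrub chunks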

[ceph-users] Re: Check if upmap is supported by client?

2020-04-13 Thread Frank Schilder
Hi Paul, thanks for the fast reply. When you say "bit 21", do you mean "(feature_map & 2^21) == true" (i.e., counting from 0 starting at the right-hand end)? Then it would be set for all connected ceph fs clients, which would mean upmap is supported by all clients. If I understand correctly, to use th
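For illustration, checking a hex feature mask against bit 21 under that reading (counting from 0 at the least-significant end, so the mask is 1<<21 = 0x200000) could look like this, with <mask> as a placeholder for a value reported by 'ceph features':

    ceph features                            # per-daemon/client feature masks and release names
    echo $(( ( <mask> & 0x200000 ) != 0 ))   # prints 1 if bit 21 (upmap support, per Paul's reply) is set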

[ceph-users] Re: Check if upmap is supported by client?

2020-04-13 Thread Paul Emmerich
Bit 21 in the features bitmap is upmap support. Paul -- Paul Emmerich Looking for help with your Ceph cluster? Contact us at https://croit.io croit GmbH Freseniusstr. 31h 81247 München www.croit.io Tel: +49 89 1896585 90 On Mon, Apr 13, 2020 at 11:53 AM Frank Schilder wrote: > > Dear all, > >

[ceph-users] Re: Fwd: Question on rbd maps

2020-04-13 Thread Ilya Dryomov
Tying this with your other thread, if you always take a lock before mapping an image, you could just list the lockers. Unlike a watch, a lock will never disappear behind your back ;) Thanks, Ilya On Thu, Apr 9, 2020 at 9:24 PM Void Star Nill wrote: > > Thanks Ilya. Is there a m
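A minimal sketch of that lock-before-map pattern with the rbd CLI (pool, image, and lock names are placeholders):

    rbd lock add mypool/myimage mylockid   # take an advisory lock before mapping
    rbd lock list mypool/myimage           # list current lockers (lock id plus client locker)
    rbd map mypool/myimage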

[ceph-users] Re: Fwd: question on rbd locks

2020-04-13 Thread Ilya Dryomov
As Paul said, a lock is typically broken by a new client trying to grab it. As part of that the existing lock holder needs to be blacklisted, unless you fence using some type of STONITH. The question of whether the existing lock holder is dead can't be answered in isolation. For example, the exc
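For illustration, breaking a lock held by a presumed-dead client typically combines fencing with removing the lock, roughly as follows (addresses and ids are placeholders; 'blacklist' was renamed 'blocklist' in later releases):

    rbd lock list mypool/myimage                         # note the locker (e.g. client.4123) and lock id
    ceph osd blacklist add 192.168.0.10:0/123456789      # fence the old holder so it can no longer write
    rbd lock remove mypool/myimage mylockid client.4123  # release the stale lock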

[ceph-users] Re: [Octopus] OSD overloading

2020-04-13 Thread Igor Fedotov
Given the symptoms, the high CPU usage within RocksDB and the corresponding slowdown were presumably caused by RocksDB fragmentation. A temporary workaround would be to do a manual DB compaction using ceph-kvstore-tool's compact command. Thanks, Igor On 4/13/2020 1:01 AM, Jack wrote: Yep I am Th
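The offline compaction mentioned above looks roughly like this (a sketch; the OSD must be stopped first, and <id> is a placeholder):

    systemctl stop ceph-osd@<id>
    ceph-kvstore-tool bluestore-kv /var/lib/ceph/osd/ceph-<id> compact
    systemctl start ceph-osd@<id>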

[ceph-users] Check if upmap is supported by client?

2020-04-13 Thread Frank Schilder
Dear all, I would like to enable the balancer on a mimic 13.2.8 cluster in upmap mode. Unfortunately, I have a lot of ceph fs kernel clients that report their version as jewel, but might already support upmap. The ceph client kernel module has already received a lot of back-ports and supports featu
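For context, enabling upmap balancing normally involves steps along these lines, with the caveat raised in this thread that the jewel-reporting kernel clients must actually support upmap first (a sketch, not specific advice for this cluster):

    ceph features                                    # inspect what connected clients report
    ceph osd set-require-min-compat-client luminous  # prerequisite for upmap; may need --yes-i-really-mean-it
                                                     # if clients only mis-report themselves as jewel
    ceph balancer mode upmap
    ceph balancer on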