[ceph-users] Bluestore: tens of milliseconds latency in prepare stage

2022-08-09 Thread Xinying Song
Hi, everyone: I'm currently encountering a performance issue with BlueStore on Nautilus. Using the perf dump stats exposed by the mgr prometheus module, I found that BlueStore can spend tens of milliseconds in the prepare stage. The latency is calculated from the l_bluestore_state_prepare_lat logger. The max latency I have s…
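For readers chasing the same symptom: a minimal sketch of reading that counter straight from an OSD admin socket, assuming a locally running osd.0 and jq installed (names are illustrative):

    # Dump only the BlueStore perf counters and pick out the prepare-stage latency;
    # avgcount/sum give the totals, avgtime the running average in seconds.
    ceph daemon osd.0 perf dump bluestore | jq '.bluestore.state_prepare_lat'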

[ceph-users] CephFS performance degradation in root directory

2022-08-09 Thread Robert Sander
Hi, we have a cluster with 7 nodes, each with 10 SSD OSDs, providing CephFS to a CloudStack system as primary storage. When copying a large file into the root directory of the CephFS, the bandwidth drops from 500 MB/s to 50 MB/s after around 30 seconds. We see some MDS activity in the output of "…
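A hedged sketch of how one might watch the MDS while reproducing such a drop; the daemon name is illustrative and the commands only observe, they change nothing:

    # Overall filesystem and MDS state
    ceph fs status
    # Requests the MDS is working on while the copy slows down
    ceph daemon mds.storage-01 dump_ops_in_flight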

[ceph-users] Re: CephFS performance degradation in root directory

2022-08-09 Thread Robert Sander
On 09.08.22 10:49, Anthony D'Atri wrote: What is your pool layout? Is it possible that your root directory is on, say, HDDs but the subdirectory is on SSDs? No, this is an SSD-only cluster. The subdirectory was freshly created and lies within the only data pool of this CephFS. Regards -- Ro…
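For anyone wanting to verify this themselves, a small sketch; the mount point and directory names are illustrative:

    # Data pools attached to each filesystem
    ceph fs ls
    # Layout in effect on a directory; pool= names the target data pool.
    # Directories without an explicit layout report "No such attribute"
    # and inherit from the nearest ancestor that has one.
    getfattr -n ceph.dir.layout /mnt/cephfs/subdir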

[ceph-users] Re: cephfs: num_stray growing without bounds (octopus)

2022-08-09 Thread Venky Shankar
Hi Frank, On Sun, Aug 7, 2022 at 6:46 PM Frank Schilder wrote: > > Hi Dhairya, > > I have some new results (below) and also some wishes as an operator that > might even help with the decision you mentioned in your e-mails: > > - Please implement both ways, a possibility to trigger an evaluation…
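For operators tracking the same problem, a minimal sketch of watching the stray counter on an active MDS; the daemon name is illustrative:

    # num_strays should shrink as reintegration/purging makes progress
    ceph daemon mds.ceph-01 perf dump mds_cache | jq '.mds_cache.num_strays'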

[ceph-users] Re: Multi-active MDS cache pressure

2022-08-09 Thread Eugen Block
Hi, did you have some success with modifying the mentioned values? Yes, the SUSE team helped identify the issue; I can share the explanation: ---snip--- Every second (the mds_cache_trim_interval config param) the MDS runs the "cache trim" procedure. One of the steps of this procedure is "r…
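A sketch of inspecting and adjusting this kind of knob through the central config store; the value below is purely illustrative, not a recommendation:

    # Current trim interval (defaults to 1 second)
    ceph config get mds mds_cache_trim_interval
    # One of the recall-related options that typically comes up in cache-pressure tuning
    ceph config set mds mds_recall_max_caps 30000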

[ceph-users] Re: Request for Info: bluestore_compression_mode?

2022-08-09 Thread Mark Nelson
Hi Frank, Thank you very much for the reply! If you don't mind me asking, what's the use case? We're trying to determine if we might be able to do compression at a higher level than blob, with the eventual goal of simplifying the underlying data structures. I actually had no idea that you…
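For context on where bluestore_compression_mode actually gets set in practice, a hedged sketch of the two usual levels; the pool name is illustrative:

    # Cluster-wide OSD default, used when a pool does not override it
    ceph config set osd bluestore_compression_mode aggressive
    # Per-pool override, which is what most replies in this thread describe
    ceph osd pool set mypool compression_mode aggressive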

[ceph-users] Re: cephfs: num_stray growing without bounds (octopus)

2022-08-09 Thread Frank Schilder
Hi Venky. > FWIW, reintegration can be triggered with filesystem scrub on pacific Unfortunately, I will have to stay on octopus for a while. We are currently running mimic and the next step is octopus. This is already a big enough thing due to the OSD conversion and the verification process I'm in…

[ceph-users] Re: Ceph needs your help with defining availability!

2022-08-09 Thread John Bent
Hello Kamoltat, This sounds very interesting. Will you be sharing the results of the survey back with the community? Thanks, John On Sat, Aug 6, 2022 at 4:49 AM Kamoltat Sirivadhna wrote: > Hi everyone, > > One of the features we are looking into implementing for our upcoming Ceph > release (…

[ceph-users] Re: Request for Info: bluestore_compression_mode?

2022-08-09 Thread Paul Mezzanini
We are using pool-level compression (aggressive) for our large EC tier. Since we already had data in the pool when the feature was enabled, I was unable to do in-depth testing and tuning to get the best results. "Low hanging fruit" put 654T under compression with 327T used. Not bad, but I know…
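Numbers like those quoted above can be read straight off the cluster; a minimal sketch, with an illustrative pool name:

    # Per-pool USED COMPR / UNDER COMPR columns
    ceph df detail
    # Confirm what the pool is actually configured to do
    ceph osd pool get mypool compression_mode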

[ceph-users] Re: Ceph needs your help with defining availability!

2022-08-09 Thread Kamoltat Sirivadhna
Hi John, Yes, I'm planning to summarize the results after this week. I will definitely share it with the community. Best, On Tue, Aug 9, 2022 at 1:19 PM John Bent wrote: > Hello Kamoltat, > > This sounds very interesting. Will you be sharing the results of the > survey back with the community?…

[ceph-users] Re: Multi-active MDS cache pressure

2022-08-09 Thread Malte Stroem
Hello Eugen, thank you very much for the full explanation. This fixed our cluster, and I am sure this helps a lot of people around the world, since this is a problem occurring everywhere. I think this should be added to the documentation: https://docs.ceph.com/en/latest/cephfs/cache-configurati…

[ceph-users] Re: CephFS: permissions of the .snap directory do not inherit ACLs

2022-08-09 Thread Patrick Donnelly
Hello Robert, On Wed, Aug 3, 2022 at 9:32 AM Robert Sander wrote: > > Hi, > > when using CephFS with POSIX ACLs I noticed that the .snap directory > does not inherit the ACLs from its parent but only the standard UNIX > permissions. > > This results in a permission denied error when users want to…
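A small sketch that reproduces the reported behaviour, assuming a kernel or FUSE mount with ACL support; paths and user names are illustrative:

    # Grant an ACL on a directory, then compare it with its .snap counterpart
    setfacl -m u:alice:rx /mnt/cephfs/project
    getfacl /mnt/cephfs/project        # shows the user:alice:r-x entry
    getfacl /mnt/cephfs/project/.snap  # reportedly shows only the plain UNIX bits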

[ceph-users] v15.2.17 Octopus released

2022-08-09 Thread David Galloway
We're happy to announce the 17th and final backport release in the Octopus series. For detailed release notes with links & changelog, please refer to the official blog entry at https://ceph.io/en/news/blog/2022/v15-2-17-RELEASE-released Notable Changes --- * Octopus modified the…

[ceph-users] Re: Ceph debian/ubuntu packages build

2022-08-09 Thread David Galloway
On average, building Ubuntu packages takes about 1 to 1.5 hours on very powerful hardware. https://wiki.sepia.ceph.com/doku.php?id=hardware:braggi https://wiki.sepia.ceph.com/doku.php?id=hardware:adami It's a massive project and has always been that way. On 8/9/22 20:31, Zhongzhou Cai wrote: Hi,…
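For anyone wanting to attempt a local build, a rough sketch from a Ceph source checkout (the tree ships its own debian/ directory); the official packages come out of the Sepia lab builders linked above, so treat this as an approximation:

    # Install build dependencies, then build unsigned .debs using all cores
    ./install-deps.sh
    dpkg-buildpackage -us -uc -j"$(nproc)"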

[ceph-users] Re: v15.2.17 Octopus released

2022-08-09 Thread Satoru Takeuchi
Hi, On Wed, Aug 10, 2022 at 7:00 David Galloway wrote: > > We're happy to announce the 17th and final backport release in the > Octopus series. For detailed release notes with links & changelog > please refer to the official blog entry at > https://ceph.io/en/news/blog/2022/v15-2-17-RELEASE-released The link…

[ceph-users] Re: v15.2.17 Octopus released

2022-08-09 Thread Laura Flores
Hey Satoru and others, Try this link: https://ceph.io/en/news/blog/2022/v15-2-17-octopus-released/ - Laura On Tue, Aug 9, 2022 at 7:44 PM Satoru Takeuchi wrote: > Hi, > > On Wed, Aug 10, 2022 at 7:00 David Galloway wrote: > > > > We're happy to announce the 17th and final backport release in the > > Octopus…