Hi everyone,
I'm currently encountering a performance issue in Nautilus BlueStore.
From the perf dump stats exposed by the mgr prometheus module, I found
that BlueStore can spend tens of milliseconds in the prepare stage. The
latency is calculated with the l_bluestore_state_prepare_lat logger.
The max latency I have s
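A minimal sketch of how that counter could be read straight from an OSD's admin socket to cross-check the mgr prometheus numbers; "osd.0" is a placeholder, and the bluestore/state_prepare_lat layout with avgcount and sum fields is an assumption based on the l_bluestore_state_prepare_lat logger named above.

import json
import subprocess

# Sketch only: read one OSD's perf counters over the admin socket.
# "osd.0" is a placeholder; the counter layout (avgcount, sum in seconds)
# is assumed from the l_bluestore_state_prepare_lat logger mentioned above.
def prepare_lat_avg_ms(osd="osd.0"):
    out = subprocess.run(
        ["ceph", "daemon", osd, "perf", "dump"],
        capture_output=True, text=True, check=True,
    ).stdout
    lat = json.loads(out)["bluestore"]["state_prepare_lat"]
    if lat["avgcount"] == 0:
        return 0.0
    return lat["sum"] / lat["avgcount"] * 1000.0  # average in milliseconds

if __name__ == "__main__":
    print(f"avg prepare latency: {prepare_lat_avg_ms():.2f} ms")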
Hi,
we have a cluster with 7 nodes each with 10 SSD OSDs providing CephFS to
a CloudStack system as primary storage.
When copying a large file into the root directory of the CephFS, the
bandwidth drops from 500 MB/s to 50 MB/s after around 30 seconds. We see
some MDS activity in the output of "
On 09.08.22 10:49, Anthony D'Atri wrote:
What is your pool layout?
Is it possible that your root directory is on, say, HDDs but the subdirectory is
on SSDs?
No, this is an SSD-only cluster. The subdirectory was freshly created
and lies within the only data pool of this CephFS.
Regards
--
Ro
Hi Frank,
On Sun, Aug 7, 2022 at 6:46 PM Frank Schilder wrote:
>
> Hi Dhairya,
>
> I have some new results (below) and also some wishes as an operator that
> might even help with the decision you mentioned in your e-mails:
>
> - Please implement both ways, a possibility to trigger an evaluation
Hi,
did you have some success with modifying the mentioned values?
Yes, the SUSE team helped identify the issue; I can share the explanation:
---snip---
Every second (the mds_cache_trim_interval config param) the MDS runs the
"cache trim" procedure. One of the steps of this procedure is "r
Hi Frank,
Thank you very much for the reply! If you don't mind me asking, what's
the use case? We're trying to determine if we might be able to do
compression at a higher level than blob with the eventual goal of
simplifying the underlying data structures. I actually had no idea that
you
Hi Venky.
> FWIW, reintegration can be triggered with filesystem scrub on pacific
Unfortunately, I will have to stay on octopus for a while. We are currently
running mimic and the next step is octopus. This is already a big enough thing due
to the OSD conversion and the verification process I'm in
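For readers already on Pacific, a hedged sketch of how the scrub Venky mentions might be started; the filesystem name "cephfs", rank 0, and the option string are assumptions based on the documented ceph tell mds ... scrub start form.

import subprocess

# Assumption-heavy sketch: start a recursive scrub of the filesystem root.
# "cephfs" is a placeholder filesystem name; rank 0 and "recursive,force"
# follow the documented `ceph tell mds.<fs>:<rank> scrub start` interface.
def start_scrub(fs_name="cephfs", path="/", opts="recursive,force"):
    subprocess.run(
        ["ceph", "tell", f"mds.{fs_name}:0", "scrub", "start", path, opts],
        check=True,
    )

if __name__ == "__main__":
    start_scrub()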
Hello Kamoltat,
This sounds very interesting. Will you be sharing the results of the survey
back with the community?
Thanks,
John
On Sat, Aug 6, 2022 at 4:49 AM Kamoltat Sirivadhna
wrote:
> Hi everyone,
>
> One of the features we are looking into implementing for our upcoming Ceph
> release (
We are using pool-level compression (aggressive) for our large EC tier. Since
we already had data in the pool when the feature was enabled, I was unable to do
in-depth testing and tuning to get the best results. "Low-hanging fruit" put
654T under compression with 327T used. Not bad, but I know
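A sketch of how per-pool compression figures like these could be pulled from ceph df; the compress_under_bytes and compress_bytes_used field names in the JSON output are assumptions and may differ by release.

import json
import subprocess

# Sketch only: report per-pool compression stats from `ceph df detail`.
# The compress_under_bytes / compress_bytes_used field names are assumed;
# adjust them to whatever your release actually emits.
def compression_report():
    out = subprocess.run(
        ["ceph", "df", "detail", "--format", "json"],
        capture_output=True, text=True, check=True,
    ).stdout
    tib = 1024 ** 4
    for pool in json.loads(out)["pools"]:
        stats = pool["stats"]
        under = stats.get("compress_under_bytes", 0) / tib
        used = stats.get("compress_bytes_used", 0) / tib
        print(f'{pool["name"]}: {under:.1f} TiB under compression, '
              f'{used:.1f} TiB used after compression')

if __name__ == "__main__":
    compression_report()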
Hi John,
Yes, I'm planning to summarize the results after this week. I will
definitely share it with the community.
Best,
On Tue, Aug 9, 2022 at 1:19 PM John Bent wrote:
> Hello Kamoltat,
>
> This sounds very interesting. Will you be sharing the results of the
> survey back with the community?
Hello Eugen,
thank you very much for the full explanation.
This fixed our cluster and I am sure this helps a lot of people around
the world since this is a problem occurring everywhere.
I think this should be added to the documentation:
https://docs.ceph.com/en/latest/cephfs/cache-configurati
Hello Robert,
On Wed, Aug 3, 2022 at 9:32 AM Robert Sander
wrote:
>
> Hi,
>
> when using CephFS with POSIX ACLs, I noticed that the .snap directory
> does not inherit the ACLs from its parent but only the standard UNIX
> permissions.
>
> This results in a permission denied error when users want to
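A reproduction sketch for the behaviour described above; the CephFS mount point and group name are placeholders, and it assumes the getfacl/setfacl tools are installed.

import subprocess

# Reproduction sketch with placeholder path and group: set a default ACL on
# a CephFS directory, then compare it with what its .snap directory reports
# (the issue above is that .snap shows only the plain UNIX permissions).
BASE = "/mnt/cephfs/shared"  # placeholder CephFS path

def acl(path):
    return subprocess.run(
        ["getfacl", "--absolute-names", path],
        capture_output=True, text=True, check=True,
    ).stdout

if __name__ == "__main__":
    subprocess.run(["setfacl", "-m", "d:g:users:rwx", BASE], check=True)
    print(acl(BASE))
    print(acl(f"{BASE}/.snap"))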
We're happy to announce the 17th and final backport release in the
Octopus series. For detailed release notes with links & changelog,
please refer to the official blog entry at
https://ceph.io/en/news/blog/2022/v15-2-17-RELEASE-released
Notable Changes
---
* Octopus modified the
On average, building Ubuntu packages takes about 1 to 1.5 hours on very
powerful hardware.
https://wiki.sepia.ceph.com/doku.php?id=hardware:braggi
https://wiki.sepia.ceph.com/doku.php?id=hardware:adami
It's a massive project and it has always been that way.
On 8/9/22 20:31, Zhongzhou Cai wrote:
Hi,
Hi,
On Wed, Aug 10, 2022 at 7:00 David Galloway wrote:
>
> We're happy to announce the 17th and final backport release in the
> Octopus series. For detailed release notes with links & changelog,
> please refer to the official blog entry at
> https://ceph.io/en/news/blog/2022/v15-2-17-RELEASE-released
The link
Hey Satoru and others,
Try this link:
https://ceph.io/en/news/blog/2022/v15-2-17-octopus-released/
- Laura
On Tue, Aug 9, 2022 at 7:44 PM Satoru Takeuchi
wrote:
> Hi,
>
> On Wed, Aug 10, 2022 at 7:00 David Galloway wrote:
> >
> > We're happy to announce the 17th and final backport release in the
> > Octopus