On Fri, May 17, 2024 at 11:52 AM Nicola Mori wrote:
> Thank you Kotresh! My cluster is currently on Reef 18.2.2, which should
> be the current version and which is affected. Will the fix be included
> in the next Reef release?
>
Yes, it's already merged to the reef branch, and should be available in the next Reef release.
Hi,
~6K log segments to be trimmed, that's huge.
1. Are there any custom configs set on this setup?
2. Is subtree pinning enabled?
3. Are there any warnings w.r.t. RADOS slowness?
4. Please share the mds perf dump to check for latencies and other stuff.
$ ceph tell mds.<name> perf dump
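For questions 1-3, the usual commands to gather that info look roughly like this (a sketch; <name> is a placeholder for the MDS daemon name):
$ ceph config dump | grep mds         # custom MDS configs
$ ceph tell mds.<name> get subtrees   # lists subtrees; export pins show up here
$ ceph health detail                  # slow ops / RADOS warnings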
Thanks,
Thank you Kotresh! My cluster is currently on Reef 18.2.2, which should
be the current version and which is affected. Will the fix be included
in the next Reef release?
Cheers,
Nicola
Hi Nicola,
Yes, this issue is already fixed in main [1], and the quincy backport [2] is
still pending to be merged. Hopefully it will be available
in the next Quincy release.
[1] https://github.com/ceph/ceph/pull/48027
[2] https://github.com/ceph/ceph/pull/54469
Thanks and Regards,
Kotresh H R
On We
Hi,
We are using rook-ceph with operator 1.10.8 and Ceph 17.2.5.
We are using CephFS with 4 MDS daemons, i.e. 2 active & 2 standby.
Every 3-4 weeks the filesystem runs into an issue; in ceph status we can see
the warnings below:
2 MDS reports slow requests
2 MDS Behind on Trimming
mds.myfs-a(
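(A rough way to compare the trim backlog against the threshold, assuming a reasonably recent release; <name> is a placeholder for the MDS daemon name:)
$ ceph config get mds mds_log_max_segments   # trim threshold, 128 by default on recent releases
$ ceph tell mds.<name> perf dump mds_log     # the mds_log section includes the live segment count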
It's unfortunately more complicated than that. I don't think the
forward scrub tag gets persisted to the raw objects; it's just a
notation for you. And even if it were, it would only be on the first
object of every file — larger files would have many more objects that
forward scrub doesn't touch.
This
If using jumbo frames, also ensure that they're consistently enabled on all OS
instances and network devices.
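One quick end-to-end check is a don't-fragment ping sized just under the jumbo MTU (9000 is assumed here; the interface and peer names are placeholders):
$ ip link show dev <iface> | grep mtu   # confirm the configured MTU
$ ping -M do -s 8972 <peer-host>        # 8972 = 9000 - 28 bytes of IP/ICMP headers; fails if any hop drops jumbo frames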
> On May 16, 2024, at 09:30, Frank Schilder wrote:
>
> This is a long shot: if you are using octopus, you might be hit by this
> pglog-dup problem:
> https://docs.clyso.com/blog/osds-with-unlimited-ram-growth/
At least for the current up-to-date reef branch (not sure what reef version
you're on), when --image is not provided to the shell, it should try to
infer the image in this order:
1. from the CEPHADM_IMAGE env. variable
2. if you pass --name with a daemon name to the shell command, it will
try to use the image that daemon was deployed with
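In practice that looks roughly like this (registry and daemon names are placeholders):
$ CEPHADM_IMAGE=<registry>/ceph cephadm shell   # 1. picked up from the env variable
$ cephadm shell --name mon.<host>               # 2. image inferred from that daemon's deployment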
On 5/16/24 17:50, Robert Sander wrote:
cephadm osd activate HOST
would re-activate the OSDs.
Small but important typo: It's
ceph cephadm osd activate HOST
Regards
--
Robert Sander
Heinlein Consulting GmbH
Schwedter Str. 8/9b, 10119 Berlin
https://www.heinlein-support.de
Tel: 030 / 405051-
Hi,
On 5/16/24 17:44, Matthew Vernon wrote:
cephadm --image docker-registry.wikimedia.org/ceph shell
...but is there a good way to arrange for cephadm to use the
already-downloaded image without having to remember to specify --image
each time?
You could create a shell alias:
alias cephsh='cephadm --image docker-registry.wikimedia.org/ceph shell'
Hi,
I've some experience with Ceph, but haven't used cephadm much before,
and am trying to configure a pair of reef clusters with cephadm. A
couple of newbie questions, if I may:
* cephadm shell image
I'm in an isolated environment, so pulling from a local repository. I
bootstrapped OK with
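(A typical bootstrap against a local registry looks roughly like the following; the registry path, tag, and mon IP are placeholders rather than the ones used here:)
$ cephadm --image <registry>/ceph/ceph:<tag> bootstrap --mon-ip <mon-ip>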
This is a long shot: if you are using octopus, you might be hit by this
pglog-dup problem: https://docs.clyso.com/blog/osds-with-unlimited-ram-growth/.
They don't mention slow peering explicitly in the blog, but it's also a
consequence, because the up+acting OSDs need to go through the PG_log during peering.
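One quick way to gauge whether pglog dups are eating OSD memory is to look at the osd_pglog mempool (the OSD id is a placeholder; run on the OSD host):
$ ceph daemon osd.<id> dump_mempools | grep -A 3 osd_pglog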
Den tors 16 maj 2024 kl 07:47 skrev Jayanth Reddy :
>
> Hello Community,
> In addition, we have 3+ Gbps links and the average object size is 200
> kilobytes. So the utilization is about 300 Mbps to ~1.8 Gbps and not more
> than that.
> We seem to saturate the link when the secondary zone fetches big