Hi David,
Redeploying 2 out of 3 MONs a few weeks back (to have them using RocksDB, to be
ready for Quincy) prevented some clients from connecting to the cluster and
mounting CephFS volumes.
Before the redeploy, these clients were using port 6789 (v1) explicitly, as
connections wouldn't work with
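As an illustration of what such a v1-pinned mount might look like (a sketch with a placeholder monitor address, client name and secret file, not the actual client config):

  mount -t ceph 192.168.1.10:6789:/ /mnt/cephfs -o name=myclient,secretfile=/etc/ceph/myclient.secret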
Hi Frédéric,
The curiosity of Albert's cluster is that (msgr) v1 and v2 are present on
the mons, as well as on the OSDs' backend.
But v2 is absent on the public OSD and MDS network.
The specific point is that the public network has been changed.
At first, I thought it was the order of declaration
Hi,
> On 17 Jul 2024, at 10:21, Frédéric Nass wrote:
>
> Seems like msgr v2 activation did only occur after all 3 MONs were redeployed
> and used RocksDB. Not sure why this happened though.
To work with msgr v2 only, you need to set ms_mode to prefer-crc, at
least. For example of fst
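A minimal sketch of a v2-only kernel mount using ms_mode (the client name, fsid, filesystem name and monitor address are placeholders):

  mount -t ceph myclient@01234567-89ab-cdef-0123-456789abcdef.cephfs=/ /mnt/cephfs -o mon_addr=192.168.1.10:3300,ms_mode=prefer-crc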
Hi Frédéric,
Thank you for your reply!
We can't be facing spillover because we use dedicated SSD OSDs (no slow device)
which are mapped to the index pool in our RGW deployment:
ceph tell osd.60 perf dump bluefs | grep slow
"slow_total_bytes": 0,
"slow_used_bytes": 0,
"bytes_writt
Hi Josh,
Thank you for your reply!
It was helpful for me; now I understand that I can't measure RocksDB
degradation using program metrics (
In our version (16.2.13) we have this code (with the new option
rocksdb_cf_compact_on_deletion). We will try using it. As I understand it,
tombstones in the case
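If it helps, turning it on could look roughly like this (a sketch; the sliding-window/trigger option names and values are from memory and should be double-checked against `ceph config help` in 16.2.13):

  ceph config set osd rocksdb_cf_compact_on_deletion true
  # optional tuning of when a compaction is triggered (values are only examples):
  ceph config set osd rocksdb_cf_compact_on_deletion_sliding_window 32768
  ceph config set osd rocksdb_cf_compact_on_deletion_trigger 16384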
Hi,
Trying to figure out what the trigger time is for scrubbing and garbage
collection, but the config options related to these operations are not
straightforward.
I'm still looking for my daily laggy PG with slow ops around the same time, and
what I've found is that some cleaning is triggered around that
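For reference, these are the knobs worth checking first (a sketch; the daemon names are placeholders, and the rgw_gc options only matter if RGW is involved):

  # OSD scrub scheduling window and load threshold
  ceph config get osd osd_scrub_begin_hour
  ceph config get osd osd_scrub_end_hour
  ceph config get osd osd_scrub_load_threshold
  # what a given daemon is actually running with
  ceph config show osd.0 | grep -E 'scrub|snap_trim'
  ceph config show client.rgw.myrealm.host1 | grep rgw_gc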
Turns out it was all actually working fine with the addition of the
extra_container_args and the /etc/pki mount. I was running "radosgw-admin
sync status" in a cephadm shell which did not have the certificates in it, and
it seems the check was getting blocked.
Switching into a radosgw-admin
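A minimal sketch of an RGW service spec carrying such a /etc/pki bind mount (service_id, placement and the read-only flag are assumptions, not necessarily the exact spec used here):

  service_type: rgw
  service_id: mysite
  placement:
    count: 1
  extra_container_args:
    - "-v"
    - "/etc/pki:/etc/pki:ro"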
Nice, great that it works for you. And thanks for the update.
Zitat von "Alex Hussein-Kershaw (HE/HIM)" :
Turns out it was all actually working fine with the addition of the
extra_container_args and the /etc/pki mount. I was running the
"radosgw-admin sync status " in a cephadm shell which
Hi,
as far as I know these endpoints are only for multisite replication
purposes. You can set just one endpoint pointing to HAProxy with
multiple RGWs behind it.
You can create separate RGWs with the sync thread disabled, which will serve
real users. It could make them more responsive. Look up rgw_
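Assuming the option being hinted at here is rgw_run_sync_thread, disabling it on a dedicated client-facing instance could look like this (daemon/service names are placeholders):

  ceph config set client.rgw.clientfacing rgw_run_sync_thread false
  ceph orch restart rgw.clientfacing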
Hey Aleksandr,
rocksdb_delete_range_threshold has had some downsides in the past (I
don't have a reference handy) so I don't recommend changing it.
> As I understand, tombstones in the case of RGW it's only deletions of
> objects, right?
It can also happen due to bucket reshards, as this will d
Hello,
I'm testing multisite sync on Reef 18.2.2, with cephadm and Ubuntu 22.04.
Right now I'm testing a symmetrical sync policy, making a backup to a read-only
zone.
My sync policy allows replication, and I enable replication via
put-bucket-replication.
My multisite setup fails at seemingly basic o
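For context, a symmetrical group-level policy is typically built from commands along these lines (a sketch with placeholder group and zone names, not the exact setup described above):

  radosgw-admin sync group create --group-id=group1 --status=allowed
  radosgw-admin sync group flow create --group-id=group1 --flow-id=flow1 --flow-type=symmetrical --zones=zone1,zone2
  radosgw-admin sync group pipe create --group-id=group1 --pipe-id=pipe1 --source-zones='*' --source-bucket='*' --dest-zones='*' --dest-bucket='*'
  radosgw-admin period update --commit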
Hi,
It would seem that the order of declaration of the mon addresses (v2 then v1,
and not the other way around) is important.
Albert restarted all services after this modification and everything is
back to normal.
On Wed, 17 Jul 2024 at 09:40, David C. wrote:
> Hi Frédéric,
>
> The curiosity o
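For the record, a mon_host declaration with v2 listed before v1 looks roughly like this (placeholder addresses, not Albert's actual monitors):

  mon_host = [v2:192.168.1.11:3300/0,v1:192.168.1.11:6789/0] [v2:192.168.1.12:3300/0,v1:192.168.1.12:6789/0] [v2:192.168.1.13:3300/0,v1:192.168.1.13:6789/0]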
On 17/07/2024 at 09:40:59+0200, David C. wrote
Hi everyone.
>
> The curiosity of Albert's cluster is that (msgr) v1 and v2 are present on the
> mons, as well as on the osds backend.
>
> But v2 is absent on the public OSD and MDS network
>
> The specific point is that the public network has be
- On 17 Jul 24, at 15:53, Albert Shih albert.s...@obspm.fr wrote:
> On 17/07/2024 at 09:40:59+0200, David C. wrote
> Hi everyone.
>
>>
>> The curiosity of Albert's cluster is that (msgr) v1 and v2 are present on the
>> mons, as well as on the osds backend.
>>
>> But v2 is absent on th
In the official guide I found this example to mount CephFS:
mount -t ceph name@.fs_name=/ /mnt/mycephfs -o mon_addr=1.2.3.4
How do I adapt the part "name@.fs_name=/" to my setup? Which command do I
have to run in the cephadm shell to pick up the details?
Hi,
I'm about to set up a Ceph staging cluster using `cephadm` and `podman`.
As I'm not allowed to connect to the outside, our Artifactory keeps the needed
quay.io packages mirrored. What I did:
* put a config in `/etc/containers/registries.conf.d` for `prefix=quay.io` on the
first node
* created a
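A minimal sketch of such a drop-in (the file name and the Artifactory location are placeholders, not the actual mirror URL):

  # /etc/containers/registries.conf.d/quay.conf
  [[registry]]
  prefix = "quay.io"
  location = "artifactory.example.com/quay-remote"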
On Wednesday, July 17, 2024 2:03:58 PM EDT Marianne Spiller wrote:
> Hi,
>
> I'm about to set up a Ceph staging cluster using `cephadm` and `podman`.
> As I'm not allowed to connect to the outside, our Artifactory keeps the
> needed quay.io packages mirrored. What I did:
>
> * put a config `/etc/c
Hey all,
The upcoming community Ceph container images will be based on CentOS 9.
In our Clyso CI testing lab we learned that el9-based images won't run
on some (default) qemu VMs. Whereas our el8-based images run well, our
new el9-based images get:
error during bootstrap: Fatal glibc error: CP
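If the cause is the x86-64-v2 baseline that el9 requires (the truncated error above is consistent with that), one workaround is to expose a newer CPU model to the guest instead of the default qemu64, for example (a sketch, not necessarily the remediation used in that lab):

  # libvirt domain XML: pass the host CPU through
  <cpu mode='host-passthrough'/>
  # plain qemu: use the host CPU model
  qemu-system-x86_64 -cpu host ...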
Hi,
"name" is the client name as who you're trying to mount the
filesystem, "fs_name" is the name of your CephFS. You can run 'ceph fs
ls' to see which filesystems are present. And then you need the path
you want to mount, in this example it's the root directory "/".
Regards,
Eugen
Quoting
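Putting that together with the question above, the adapted command could look like this (a sketch; "myclient" and "mycephfs" are placeholders for the real client and filesystem names):

  ceph fs ls                              # find the filesystem name
  ceph auth get client.myclient           # confirm the client credentials exist
  mount -t ceph myclient@.mycephfs=/ /mnt/mycephfs -o mon_addr=1.2.3.4,secretfile=/etc/ceph/myclient.secret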
Thanks a lot for the heads up, Dan!
Quoting Dan van der Ster:
Hey all,
The upcoming community Ceph container images will be based on CentOS 9.
In our Clyso CI testing lab we learned that el9-based images won't run
on some (default) qemu VMs. Whereas our el8-based images run well, our
new el9