Hi,
I have two Ceph clusters on 16.2.13 and one CephFS in each.
I can see only one difference between the FSs: the second FS has two data pools,
and one of the root directories is pinned to the second pool. The first FS has
only the default data pool.
And when I run:
ceph fs authorize
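For reference, a minimal sketch of the kind of setup described above, with hypothetical names (an FS called cephfs2, a second data pool cephfs2_data_ssd, a directory /fast and a client called app):

# add the second data pool and pin a directory to it via its file layout
ceph fs add_data_pool cephfs2 cephfs2_data_ssd
setfattr -n ceph.dir.layout.pool -v cephfs2_data_ssd /mnt/cephfs2/fast

# create client caps for a subtree; MON/MDS/OSD caps are generated by Ceph
ceph fs authorize cephfs2 client.app /fast rw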
Oh, Josh, thanks! It's very helpful!
On 18.07.2024, 20:51, "Joshua Baergen" <jbaer...@digitalocean.com> wrote:
Hey Aleksandr,
> In Pacific we have RocksDB column families. Would it be helpful in the
> case of many tombstones to reshard our old OSDs?
> Do you think it can h
Josh, thanks!
I will read more about LSM in RocksDB, thanks!
Can I ask one last question?
We have a lot of "old" SSD OSDs in the index pool which were deployed before
Pacific.
In Pacific we have RocksDB column families. Would it be helpful in the case of
many tombstones to reshard our old OSDs?
Why can only offline compaction help in our case?
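In case it is useful for others reading the thread, a hedged sketch of what offline compaction and RocksDB resharding look like on a stopped OSD (the OSD id and path are placeholders, and the sharding spec is the Pacific default from the docs; verify it for your release):

systemctl stop ceph-osd@12

# offline compaction of the OSD's RocksDB
ceph-kvstore-tool bluestore-kv /var/lib/ceph/osd/ceph-12 compact

# reshard the DB into the column-family layout used by newly deployed Pacific OSDs
ceph-bluestore-tool --path /var/lib/ceph/osd/ceph-12 \
  --sharding="m(3) p(3,0-12) O(3,0-13)=block_cache={type=binned_lru} L P" \
  reshard

systemctl start ceph-osd@12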
On 17.07.2024, 16:14, "Joshua Baergen" <jbaer...@digitalocean.com> wrote:
Hey Aleksandr,
rocksdb_delete_range_threshold has had some downsides in the past (I
don't have a reference handy) so I don't recommend changin
> ...h can be used as a condition for manual compacting?
There is no tombstone counter that I'm aware of, which is really what
you need in order to trigger compaction when appropriate.
Josh
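For completeness, a minimal example of triggering compaction manually on a running OSD (osd.12 is a placeholder), which is what such a trigger condition would feed into:

ceph tell osd.12 compact
# or via the admin socket on the OSD host
ceph daemon osd.12 compact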
On Tue, Jul 16, 2024 at 9:12 AM Rudenko Aleksandr <arude...@croc.ru> wrote:
>
> H
[1] https://tracker.ceph.com/issues/58440
[2] https://docs.ceph.com/en/latest/releases/pacific/#v16-2-14-pacific
[3] https://github.com/ceph/ceph/pull/50932
- On 16 Jul 24, at 17:1
Hi,
We have a big Ceph cluster (RGW use case) with a lot of big buckets (10-500M
objects, 31-1024 shards) and a lot of IO generated by many clients.
The index pool is placed on enterprise SSDs. We have about 120 SSDs (replication 3)
and about 90 GB of OMAP data on each drive.
About 75 PGs on each SSD for
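As an aside, the per-OSD OMAP figures mentioned above can be read straight from the cluster; a minimal example (output columns may differ slightly between releases):

# the OMAP and META columns show per-OSD omap/RocksDB usage
ceph osd df tree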
Hi,
I have a setup with one default tenant and the following user/bucket structure:
user1
  bucket1
  bucket11
user2
  bucket2
user3
  bucket3
IAM and STS APIs are enabled, and user1 has the roles=* capability.
When user1 permits user2 to assume a role with n
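For context, a hedged sketch of the kind of commands involved in such a setup; the role name, policy documents and endpoint below are hypothetical:

# give user1 permission to manage roles
radosgw-admin caps add --uid=user1 --caps="roles=*"

# create a role that user2 is allowed to assume
radosgw-admin role create --role-name=bucket1-access \
  --assume-role-policy-doc='{"Version":"2012-10-17","Statement":[{"Effect":"Allow","Principal":{"AWS":["arn:aws:iam:::user/user2"]},"Action":["sts:AssumeRole"]}]}'

# attach a permission policy scoped to bucket1
radosgw-admin role-policy put --role-name=bucket1-access --policy-name=Policy1 \
  --policy-doc='{"Version":"2012-10-17","Statement":[{"Effect":"Allow","Action":["s3:*"],"Resource":["arn:aws:s3:::bucket1","arn:aws:s3:::bucket1/*"]}]}'

# user2 then assumes the role via STS
aws sts assume-role --role-arn "arn:aws:iam:::role/bucket1-access" \
  --role-session-name test --endpoint-url http://rgw.example.com:8000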
Hi guys,
In the perf dump of an RGW instance I have two similar sections.
The first one:
"objecter": {
    "op_active": 0,
    "op_laggy": 0,
    "op_send": 38816,
    "op_send_bytes": 199927218,
    "op_resend": 0,
    "op_reply": 38816,
    "oplen_avg": {
        "avgc
The OSD uses the sysfs device parameter "rotational" to detect the device type
(HDD/SSD).
You can see it:
ceph osd metadata {osd_id}
On 05.04.2022, 11:49, "Richard Bade" wrote:
Hi Frank, yes I changed the device class to HDD but there seems to be some
smarts in the background that apply the
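A hedged illustration of the checks and the override being discussed here (osd.12 and sdb are placeholders):

# what the kernel reports (1 = rotational/HDD, 0 = SSD)
cat /sys/block/sdb/queue/rotational

# what the OSD recorded at startup, and its current CRUSH device class
ceph osd metadata 12 | grep rotational
ceph osd crush get-device-class osd.12

# overriding the automatically assigned device class
ceph osd crush rm-device-class osd.12
ceph osd crush set-device-class hdd osd.12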
Hi everyone.
I am trying to understand the concept of RBD group snapshots, but I can't.
Regular snaps (not group ones) can be used like regular RBD images: we can
export them, use them directly with qemu-img, or create a new image based on a
snap (a clone).
But if we talk about group snaps, we can't do any
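For readers unfamiliar with the feature, a rough sketch of the basic group-snapshot workflow (pool, image and group names are made up):

rbd group create rbd/mygroup
rbd group image add rbd/mygroup rbd/vm-disk1
rbd group image add rbd/mygroup rbd/vm-disk2

# take a consistent snapshot across all images in the group
rbd group snap create rbd/mygroup@snap1
rbd group snap ls rbd/mygroup

# roll the whole group back to the snapshot
rbd group snap rollback rbd/mygroup@snap1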