[ceph-users] Re: Unbalanced OSDs when pg_autoscale enabled

2023-03-30
... wr, 1.79k op/s rd, 2.34k op/s wr; recovery: 11 MiB/s, 7 objects/s; progress: Global Recovery Event (0s) [....] 郑亮 wrote on Thu, Mar 16, 2023 at 17:04: > Hi all, > > I have a 9-node cluster running *Pacific 16.2.10*. OSDs live on 9 of the > nodes, with each one having ...
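A minimal way to check whether the balancer is doing its job while recovery runs (standard Pacific commands; output will differ per cluster):

    # overall health, including the recovery/progress section quoted above
    ceph -s

    # confirm the balancer is enabled and which mode it uses (upmap is typical)
    ceph balancer status

    # per-OSD utilization; a large spread in %USE indicates imbalance
    ceph osd df tree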

[ceph-users] Unbalanced OSDs when pg_autoscale enabled

2023-03-16
Hi all, I have a 9-node cluster running *Pacific 16.2.10*. OSDs live on 9 of the nodes, with each one having 4 x 1.8T SSD and 8 x 10.9T HDD for a total of 108 OSDs. We created three CRUSH roots as below: 1. The HDDs (8x9 = 72) of all nodes form a large CRUSH root, which is used as a data pool, and ...
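A sketch of how per-media CRUSH roots and rules like those described can be set up (bucket, host, and rule names here are illustrative, not the poster's):

    # create a dedicated root and move a host bucket under it
    ceph osd crush add-bucket hdd-root root
    ceph osd crush move node1 root=hdd-root

    # replicated rule selecting only hdd-class devices under that root
    ceph osd crush rule create-replicated hdd-rule hdd-root host hdd

    # see what pg_autoscale proposes for each pool
    ceph osd pool autoscale-status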

[ceph-users] Does cephfs subvolume have commands similar to `rbd perf` to query the iops, bandwidth, and latency of an rbd image?

2023-02-13
Hi all, do CephFS subvolumes have commands similar to `rbd perf` for querying the IOPS, bandwidth, and latency of an rbd image? `ceph fs perf stats` shows client-side metrics, not metrics for the CephFS subvolume. What I want to get is metrics at the subvolume level, like below. [root@smd-expor ...
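For rbd images themselves, per-image counters can be pulled from the OSDs like this (pool name is an example):

    # top-like per-image iops/throughput/latency view for a pool
    rbd perf image iotop -p rbd

    # one-shot iostat-style listing of the same counters
    rbd perf image iostat -p rbd

    # client-side CephFS metrics (per client/MDS, not per subvolume)
    ceph fs perf stats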

[ceph-users] Re: Removing OSD very slow (objects misplaced)

2022-12-25
Hi Erich, you can refer to the following link: https://www.suse.com/support/kb/doc/?id=19693 Thanks, Liang Zheng. E Taka <0eta...@gmail.com> wrote on Fri, Dec 16, 2022 at 01:52: > Hi, > > when removing some OSDs with the command `ceph orch osd rm X`, the > rebalancing starts very fast, but after a while it ...
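With cephadm the drain can be followed directly, and a common pre-step, assuming you want data moved off before removal, is to zero the CRUSH weight first (the OSD id is an example):

    # show the removal queue and per-OSD drain progress
    ceph orch osd rm status

    # optional: drain the OSD first so `osd rm` has little left to move
    ceph osd crush reweight osd.12 0

    # watch the misplaced-object percentage fall
    ceph -s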

[ceph-users] Re: Why is the disk usage much larger than the available space displayed by the `df` command after disabling ext4 journal?

2022-10-12
... wrote on Wed, Oct 12, 2022 at 19:15: > On Wed, Oct 12, 2022 at 9:37 AM 郑亮 wrote: > > > > Hi all, > > I have created a pod using an rbd image as backend storage, then mapped the rbd > image > > to a local block device, and mounted it with an ext4 filesystem. The `df` > > displays the disk usage ...
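To verify whether the journal is actually gone on the mapped device, the superblock feature list can be checked (device name is an example):

    # 'has_journal' should no longer appear once the journal is disabled
    dumpe2fs -h /dev/rbd0 | grep -iE 'features|journal'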

[ceph-users] Why is the disk usage much larger than the available space displayed by the `df` command after disabling the ext4 journal?

2022-10-12
Hi all, I have created a pod using an rbd image as backend storage, then mapped the rbd image to a local block device and mounted it with an ext4 filesystem. After disabling the ext4 journal, `df` displays disk usage much larger than the available space. The following are the steps to reproduce; thanks ...
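The reproduction steps are cut off above; a minimal local equivalent of what the post describes might look like this (image name, size, and mount point are assumptions, not the poster's exact steps):

    # create and map a test image
    rbd create rbd/df-test --size 10G
    rbd map rbd/df-test              # maps to e.g. /dev/rbd0

    # format without a journal, mount, and inspect df output
    mkfs.ext4 -O ^has_journal /dev/rbd0
    mount /dev/rbd0 /mnt
    df -h /mnt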

[ceph-users] Why does rgw generate large quantities of orphan objects?

2022-10-11
Hi all, Description of problem: [RGW] Bucket/object deletion is causing large quantities of orphan rados objects. The cluster was running a cosbench workload; we then removed partial data by deleting objects from the cosbench client, and then deleted all the buckets with the help of `s3cmd rb -- ...
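For tracking down leftover objects, Ceph ships a dedicated tool, and the RGW garbage collector can be drained manually (pool name is an example; deleted objects may simply be waiting in GC):

    # list objects in the data pool that no bucket index references
    rgw-orphan-list default.rgw.buckets.data

    # inspect and force-process the RGW garbage-collector queue
    radosgw-admin gc list --include-all
    radosgw-admin gc process --include-all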