... wr, 1.79k op/s rd, 2.34k op/s wr
    recovery: 11 MiB/s, 7 objects/s
  progress:
    Global Recovery Event (0s)
      [....]
郑亮 wrote on Thu, Mar 16, 2023 at 17:04:
> Hi all,
>
> I have a 9 node cluster running *Pacific 16.2.10*. OSDs live on 9 of the
> nodes with each one havin
Hi all,
I have a 9 node cluster running *Pacific 16.2.10*. OSDs live on 9 of the
nodes with each one having 4 x 1.8T ssd and 8 x 10.9T hdd for a total of
108 OSDs. We created three crush roots as below.
1. The hdds (8x9=72) of all nodes form a large crush root, which is used as
a data pool, and o
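The rest of this message is cut off in the preview. As a hedged sketch (the bucket, rule, and pool names below are placeholders, not taken from the thread), a dedicated crush root for the hdds can be built roughly like this:

  ceph osd crush add-bucket hdd-root root                      # new root for the hdd tree
  ceph osd crush add-bucket node1-hdd host                     # one host bucket per node
  ceph osd crush move node1-hdd root=hdd-root
  ceph osd crush set osd.0 10.9 root=hdd-root host=node1-hdd   # place each hdd OSD, weight ~ size in TiB
  ceph osd crush rule create-replicated hdd-rule hdd-root host
  ceph osd pool create data-pool 1024 1024 replicated hdd-rule # data pool pinned to the hdd root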
Hi all,
Do CephFS subvolumes have commands, similar to `rbd perf` for RBD images, to
query IOPS, bandwidth, and latency? `ceph fs perf stats` shows client-side
metrics, not the metrics of the CephFS subvolume itself. What I want to get
are metrics at the subvolume level, like below.
[root@smd-expor
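The example output is cut off above. For reference, the two tools being compared are invoked roughly like this (the pool name is a placeholder, not from the message):

  ceph fs perf stats                    # client-scoped CephFS metrics
  rbd perf image iostat rbd_pool        # per-image IOPS/throughput/latency for RBD
  rbd perf image iotop rbd_pool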
Hi erich,
You can refer to the following link:
https://www.suse.com/support/kb/doc/?id=19693
Thanks,
Liang Zheng
E Taka <0eta...@gmail.com> wrote on Fri, Dec 16, 2022 at 01:52:
> Hi,
>
> when removing some OSDs with the command `ceph orch osd rm X`, the
> rebalancing starts very fast, but after a while it a
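The rest of the message is truncated. For this kind of slow-draining removal, the drain status and the usual recovery throttles can be inspected roughly as below (the values are illustrative, not a recommendation from this thread):

  ceph orch osd rm status                        # progress of the scheduled removals
  ceph -s                                        # current recovery/backfill rates
  ceph config set osd osd_max_backfills 4        # allow more concurrent backfills
  ceph config set osd osd_recovery_max_active 8  # allow more concurrent recovery ops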
... wrote on Wed at 19:15:
> On Wed, Oct 12, 2022 at 9:37 AM 郑亮 wrote:
> >
> > Hi all,
> > I have created a pod using an rbd image as backend storage, then mapped
> > the rbd image to a local block device and mounted it with an ext4
> > filesystem. The `df` displays the disk usage
Hi all,
I have created a pod using an rbd image as backend storage, then mapped the
rbd image to a local block device and mounted it with an ext4 filesystem. The
`df` output shows disk usage much larger than the available space displayed
after disabling the ext4 journal. The following are the steps to reproduce;
thanks.
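The reproduction steps themselves are cut off in this preview; a hedged reconstruction of that kind of sequence (image, pool, and mount point names are made up) is:

  rbd create rbd_pool/test-img --size 10G
  rbd map rbd_pool/test-img                      # maps to e.g. /dev/rbd0
  mkfs.ext4 /dev/rbd0
  mount /dev/rbd0 /mnt/test && df -h /mnt/test   # note Used vs Avail
  umount /mnt/test
  tune2fs -O ^has_journal /dev/rbd0              # disable the ext4 journal
  mount /dev/rbd0 /mnt/test && df -h /mnt/test   # compare the numbers again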
Hi all,
Description of problem: [RGW] Bucket/object deletion is causing large
quantities of orphan rados objects.
The cluster was running a cosbench workload; we then removed part of the data
by deleting objects from the cosbench client, and then deleted all the
buckets with the help of `s3cmd rb --
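The `s3cmd` command is cut off above. As a hedged follow-up (not taken from the original report), the tools usually used to chase leaked RGW objects after mass deletion are invoked roughly as below, with the data pool name as a placeholder:

  radosgw-admin gc list --include-all        # objects still queued for garbage collection
  radosgw-admin gc process --include-all     # force a GC pass
  rgw-orphan-list default.rgw.buckets.data   # list rados objects not referenced by any bucket index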