You could try creating Subvolumes as well:
https://docs.ceph.com/en/latest/cephfs/fs-volumes/
The usual Ceph caps and data layout semantics apply to subvolumes as well.
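A minimal sketch of the workflow, in case it helps (the volume and subvolume names below are placeholders):

    # create a subvolume inside an existing CephFS volume named "cephfs"
    ceph fs subvolume create cephfs my_subvol
    # print the path the subvolume was provisioned under
    ceph fs subvolume getpath cephfs my_subvol

Client caps can then be scoped to the returned path with ceph fs authorize.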
On Thu, Dec 22, 2022 at 8:19 PM Jonas Schwab <
jonas.sch...@physik.uni-wuerzburg.de> wrote:
> Hello everyone,
>
> I would like
@Marc
Thanks Marc, fio has been executed and the result is attached to this email. But what confuses me is: tell bench sometimes returns, for example, 2~10 IOPS and sometimes 170~200. If the disk is worn out, why does it sometimes return the higher values? Currently this OSD is weighted 0, so there is no load on it.
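For reference, the invocation I mean is along these lines (osd.12 is a placeholder; the two optional arguments are total bytes to write and block size):

    # default: writes 1 GiB in 4 MiB blocks
    ceph tell osd.12 bench
    # ~12 MB in 4 KiB blocks, closer to a small-block IOPS test
    ceph tell osd.12 bench 12288000 4096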
>
> In my cluster, there are several OSDs of type ordinary SSD with very
> slow IOPS.
I think there have been several posts here about ordinary SSDs becoming slow under specific conditions. Why do you think your 'ordinary SSDs' do not have this problem?
What does fio say about these disks?
I think th
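In case it's useful, a typical fio profile for checking sync-write behaviour on an SSD looks roughly like this (the device path is a placeholder, and writing to the raw device is destructive, so only run it against an empty disk):

    fio --name=sync-write-test --filename=/dev/sdX \
        --direct=1 --sync=1 --rw=write --bs=4k \
        --numjobs=1 --iodepth=1 --runtime=60 --time_based

Consumer SSDs without power-loss protection often collapse to double- or even single-digit IOPS under sync=1 4k writes, while enterprise SSDs typically stay in the thousands or more.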
Hi experts. In one of my Ceph clusters, some of my OSDs show dramatically slow IOPS when executing the tell bench command. All of the OSDs are SSDs, and we have 2 types of SSD disks: 1) ordinary SSDs 2) enterprise SSDs.
In my cluster, there are several OSDs of type ordinary SSD with very slow IOPS. The resu
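As a quick cluster-wide sanity check before benchmarking individual OSDs, something like this can help narrow things down (the interpretation is just a rule of thumb):

    # per-OSD commit/apply latency in milliseconds
    ceph osd perf

OSDs that show consistently high latency across repeated samples usually point at the underlying device rather than transient load.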