Hi All
We had a cluster (v13.2.4) with 32 OSDs in total. At first, one OSD
(osd.18) in the cluster was down, so we tried to remove it and add a new
one (osd.32) with a new ID. We unplugged the disk (osd.18), plugged a new
disk into the same slot, and added osd.32 to the cluster. Then, osd.32 was
b
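The removal/replacement steps we ran were roughly the following (a sketch
only; /dev/sdX stands in for the actual new device, and the new OSD simply
received the next free ID, osd.32 in our case):

    # on the OSD host: stop the dead daemon
    systemctl stop ceph-osd@18

    # on a monitor node: take osd.18 out and remove it entirely
    ceph osd out osd.18
    ceph osd purge 18 --yes-i-really-mean-it

    # on the OSD host: deploy the replacement disk as a new OSD
    ceph-volume lvm create --data /dev/sdX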
Understood. Thank you!
Best
Jerry
Igor Fedotov wrote on Thu, Jul 9, 2020 at 18:56:
> Hi Jerry,
>
> we haven't heard about frequent occurrences of this issue, and the backport
> didn't look trivial, hence we decided to omit it for Mimic and Luminous.
>
>
> Thanks,
>
> Igor
Hi Igor
We are curious: why was blob garbage collection not backported to Mimic or
Luminous?
https://github.com/ceph/ceph/pull/28229
Thanks
Jerry
Jerry Pu wrote on Wed, Jul 8, 2020 at 6:04 PM:
> OK. Thanks for your reminder. We will think about how to make the
> adjustment to our cluster.
OK. Thanks for your reminder. We will think about how to make the
adjustment to our cluster.
Best
Jerry Pu
Igor Fedotov wrote on Wed, Jul 8, 2020 at 5:40 PM:
> Please note that simple min_alloc_size downsizing might negatively impact
> OSD performance. That's why this modification has been pos
Thanks for your reply. It's helpful! We may consider adjusting
min_alloc_size to a lower value, or taking other actions based on
your analysis of the space overhead with EC pools. Thanks.
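If we go that route, the change would look something like the snippet below
(a sketch only; the 16 KB value is a hypothetical choice, and since
min_alloc_size is persisted when the OSD is created, it only takes effect
for newly deployed OSDs):

    # ceph.conf on the OSD hosts, set before (re)deploying the OSDs
    [osd]
    bluestore_min_alloc_size_hdd = 16384   # v13.2.x default is 65536 (64 KB)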
Best
Jerry Pu
Igor Fedotov wrote on Tue, Jul 7, 2020 at 4:10 PM:
> I think you're facing the issue covered by
Hi:
We have a cluster (v13.2.4), and we are running some tests on an EC k=2,
m=1 pool, "VMPool0". We deployed some VMs (Windows, CentOS 7) on the pool
and then used IOMeter to write data to these VMs. After a period of time,
we observed a strange thing: the pool's actual usage is much larger than
its stored data
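To illustrate the scale of amplification we suspect (a hypothetical worked
example, assuming the Mimic default bluestore_min_alloc_size_hdd of 64 KB):
a small write to the k=2, m=1 pool is split into two data chunks plus one
parity chunk, and each chunk is rounded up to the allocation unit on its
OSD:

    logical write          : 16 KB
    EC k=2, m=1 chunks     : 2 x 8 KB data + 1 x 8 KB parity
    allocated on disk      : 3 x 64 KB = 192 KB
    space amplification    : 192 KB / 16 KB = 12x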