Hi all,
We have a Ceph (version 12.2.4) cluster that uses EC pools; it consists
of 10 OSD hosts.
The corresponding commands to create the EC pool are listed as follows:
ceph osd erasure-code-profile set profile_jerasure_4_3_reed_sol_van \
plugin=jerasure \
k=4 \
m=3 \
technique=reed_sol_van
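For context, the space arithmetic behind a k=4, m=3 jerasure profile like the one above can be sketched as follows (a rough illustration only, not taken from this thread):

```python
# Back-of-the-envelope numbers for a jerasure k=4, m=3 EC profile.
k, m = 4, 3

# Raw-space overhead: every object is split into k data chunks plus
# m coding chunks, so raw usage is (k + m) / k times the payload.
overhead = (k + m) / k
print(f"raw overhead: {overhead:.2f}x")   # 1.75x

# Fault tolerance: any m chunks can be lost and the data remains readable.
print(f"data survives the loss of up to {m} OSDs")
```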
That seems like it. Thanks a lot Serkan!
On Tue, 26 Nov 2019 at 20:08, Serkan Çoban wrote:
> Maybe following link helps...
> https://www.spinics.net/lists/dev-ceph/msg00795.html
>
> On Tue, Nov 26, 2019 at 6:17 PM Erdem Agaoglu
> wrote:
> >
> > I thought of that but it doesn't make much sense. AF
Maybe following link helps...
https://www.spinics.net/lists/dev-ceph/msg00795.html
On Tue, Nov 26, 2019 at 6:17 PM Erdem Agaoglu wrote:
>
> > I thought of that but it doesn't make much sense. AFAICT min_size should
> > block IO when I lose 3 OSDs, but it shouldn't affect the amount of the stored
>
Hi all,
I seem to be running into an issue when attempting to unlink a bucket from
a user; this is my output:
user@server ~ $ radosgw-admin bucket unlink --bucket=user_5493/LF-Store --uid=user_5493
failure: 2019-11-26 15:19:48.689 7fda1c2009c0 0 bucket entry point user
mismatch, can't unlink buc
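That error usually means the owner recorded in the bucket's metadata entry point differs from the --uid you passed. A hedged diagnostic sketch (these commands need a live cluster; the relink step and the <actual_owner> placeholder are assumptions, not confirmed by this thread):

```shell
# Inspect the owner recorded in the bucket metadata entry point:
radosgw-admin metadata get bucket:user_5493/LF-Store

# Compare against the owner shown by the bucket stats:
radosgw-admin bucket stats --bucket=user_5493/LF-Store

# If the recorded owner disagrees with the uid you are unlinking from,
# relinking the bucket to its actual owner first is one common approach:
# radosgw-admin bucket link --bucket=user_5493/LF-Store --uid=<actual_owner>
```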
I thought of that but it doesn't make much sense. AFAICT min_size should
block IO when I lose 3 OSDs, but it shouldn't affect the amount of the
stored data. Am I missing something?
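To restate the point being made here as a sketch: losing OSDs changes availability, not the amount of raw data stored. This assumes k=4, m=3 and the default EC min_size of k+1 (assumptions for illustration, not confirmed in the thread):

```python
# Availability vs. stored data for an EC pool, assuming k=4, m=3
# and the default EC min_size of k + 1.
k, m = 4, 3
min_size = k + 1

def io_allowed(surviving_shards: int) -> bool:
    # IO is blocked once fewer than min_size shards of a PG remain,
    # even though the data is still recoverable down to k shards.
    return surviving_shards >= min_size

def raw_used(payload_bytes: int) -> int:
    # Raw usage is fixed by the profile, not by how many OSDs are up.
    return payload_bytes * (k + m) // k

print(io_allowed(7))        # all 7 shards up
print(io_allowed(7 - 3))    # 3 OSDs lost: 4 shards is below min_size
print(raw_used(4 * 1024))   # 7168 bytes, regardless of OSDs up or down
```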
On Tue, Nov 26, 2019 at 6:04 AM Konstantin Shalygin wrote:
> On 11/25/19 6:05 PM, Erdem Agaoglu wrote:
>
>
> What I
Never mind, the problem was a missing discard='unmap' in the libvirt disk definition.
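For anyone hitting the same thing, the relevant libvirt setting looks roughly like this (a sketch; the image name, monitor host, and target device are placeholders, and discard passthrough also needs a bus that supports it, e.g. virtio-scsi):

```xml
<disk type='network' device='disk'>
  <!-- discard='unmap' lets fstrim in the guest reach the rbd image;
       without it, trims are silently dropped. -->
  <driver name='qemu' type='raw' discard='unmap'/>
  <source protocol='rbd' name='rbd/myimage'>
    <host name='mon1' port='6789'/>
  </source>
  <!-- bus='scsi' via a virtio-scsi controller supports UNMAP passthrough -->
  <target dev='sda' bus='scsi'/>
</disk>
```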
-----Original Message-----
To: ceph-users
Subject: [ceph-users] rbd lvm xfs fstrim vs rbd xfs fstrim
If I do an fstrim /mount/fs and this is an xfs filesystem directly on an rbd device,
I can see space being freed instantly with e.g. rbd du. However