Hi Marc,
On Fri, 10 Nov 2017, Marc Roos wrote:
>
> osd's are crashing when putting a (8GB) file in an erasure coded pool,
I take it you adjusted the osd_max_object_size option in your ceph.conf?
We can "fix" this by enforcing a hard limit on that option, but that
will just mean you get an error instead of a crash.
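For illustration only (the value below is made up, I don't know what was
actually set): an override like this in ceph.conf is what allows a client
to attempt an 8GB put in the first place, even though BlueStore still
cannot store an object that large:

  [osd]
  # default is much smaller; raising it only moves the failure into the OSD
  osd_max_object_size = 10737418240   # 10 GiB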
> 1. I don't think an osd should 'crash' in such situation.
> 2. How else should I 'rados put' an 8GB file?
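One workaround (untested sketch, pool and file names are made up) is to stay
under the 4GB per-object limit by splitting the file and putting each piece
as its own object:

  # split into 2GB chunks and upload each chunk as a separate object
  split -b 2G backup.img backup.img.part.
  for part in backup.img.part.*; do
      rados -p ecpool put "$part" "$part"
  done

The rados CLI also has a --striper option (libradosstriper) that stripes one
logical object over many smaller RADOS objects, which may be worth trying here.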
>
> -----Original Message-----
> From: Christian Wuerdig [mailto:christian.wuer...@gmail.com]
> Sent: Monday, November 13, 2017 0:12
> To: Marc Roos
python-cephfs-12.2.1-0.el7.x86_64
-----Original Message-----
From: Caspar Smit [mailto:caspars...@supernas.eu]
Sent: Monday, November 13, 2017 9:58
To: ceph-users
Subject: Re: [ceph-users] Getting errors on erasure pool writes k=2, m=1
Hi,
Why would Ceph 12.2.1 give you this message:
2017-11-10 20:39:31.296101 7f840ad45e40 -1 WARNING: the following
dangerous and experimental features are enabled: bluestore
Or is that a leftover warning message from an old client?
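If I had to guess (I haven't seen the actual config), that warning usually
comes from a leftover line like this in the client's ceph.conf, which was
needed before Luminous when bluestore still had to be whitelisted:

  [global]
  # pre-Luminous leftover; bluestore is stable in 12.2.x, but this still triggers the warning
  enable_experimental_unrecoverable_data_corrupting_features = bluestore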
Kind regards,
Caspar
2017-11-10 21:27 GMT+01:00 Marc Roos :
>
Subject: Re: [ceph-users] Getting errors on erasure pool writes k=2, m=1
As per: https://www.spinics.net/lists/ceph-devel/msg38686.html
Bluestore has a hard 4GB object size limit.
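(Presumably 2^32 bytes = 4 GiB; either way, the 8GB object in question is
roughly twice what a single BlueStore object can hold.)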
On Sat, Nov 11, 2017 at 9:27 AM, Marc Roos wrote:
>
> osd's are crashing when putting a (8GB) file in an erasure coded pool,
> just before finishing. The same osd's are used for replicated pools
osd's are crashing when putting a (8GB) file in an erasure coded pool,
just before finishing. The same osd's are used for replicated pools
rbd/cephfs, and seem to do fine. Did I make some error or is this a bug?
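For reference, the operation is essentially this (illustrative file and pool
names, not the real ones):

  # create an 8GB test file and put it into the erasure coded pool
  dd if=/dev/zero of=big8g.bin bs=1M count=8192
  rados -p ecpool put big8g.bin big8g.bin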
Looks similar to
https://www.spinics.net/lists/ceph-devel/msg38685.html