Re: [ceph-users] Getting errors on erasure pool writes k=2, m=1

2017-11-20 Thread Sage Weil
Hi Marc, On Fri, 10 Nov 2017, Marc Roos wrote: > > osd's are crashing when putting an (8GB) file in an erasure coded pool, I take it you adjusted the osd_max_object_size option in your ceph.conf? We can "fix" this by enforcing a hard limit on that option, but that will just mean you get an error
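
A note on checking that option: the sketch below (assuming python-rados is installed and /etc/ceph/ceph.conf is readable) prints what osd_max_object_size resolves to for a local client. It only shows the client-side view; the value the OSDs themselves run with is what actually matters here.

    import rados

    # Read the local ceph.conf and report what osd_max_object_size resolves
    # to for this client. The OSDs may be running with a different value.
    cluster = rados.Rados(conffile='/etc/ceph/ceph.conf')
    try:
        print('osd_max_object_size = %s' % cluster.conf_get('osd_max_object_size'))
    finally:
        cluster.shutdown()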

Re: [ceph-users] Getting errors on erasure pool writes k=2, m=1

2017-11-13 Thread Christian Wuerdig
think an osd should 'crash' in such a situation. > 2. How else should I 'rados put' an 8GB file? > -Original Message- > From: Christian Wuerdig [mailto:christian.wuer...@gmail.com] > Sent: Monday, 13 November 2017 0:12 > To: Marc
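
On the "how else should I 'rados put' an 8GB file" question, one common workaround is to split the file into many smaller objects on the client side rather than storing a single 8GB object. A minimal sketch with python-rados; the 64 MiB chunk size, the pool name and the name.partNNNN key scheme are illustrative assumptions, not anything prescribed in this thread:

    import rados

    CHUNK = 64 * 1024 * 1024  # 64 MiB per object (assumption, tune as needed)

    def put_chunked(path, pool, name, conffile='/etc/ceph/ceph.conf'):
        """Store a local file as a series of fixed-size RADOS objects."""
        cluster = rados.Rados(conffile=conffile)
        cluster.connect()
        try:
            ioctx = cluster.open_ioctx(pool)
            try:
                with open(path, 'rb') as f:
                    part = 0
                    while True:
                        data = f.read(CHUNK)
                        if not data:
                            break
                        # each chunk becomes its own full-object write
                        ioctx.write_full('%s.part%04d' % (name, part), data)
                        part += 1
            finally:
                ioctx.close()
        finally:
            cluster.shutdown()

    if __name__ == '__main__':
        put_chunked('/tmp/backup.img', 'ecpool', 'backup.img')

The higher-level interfaces (RBD, CephFS, RGW) already stripe large data over many smaller RADOS objects, which is why the replicated rbd/cephfs pools mentioned elsewhere in the thread do not run into this.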

Re: [ceph-users] Getting errors on erasure pool writes k=2, m=1

2017-11-13 Thread Marc Roos
.x86_64 python-cephfs-12.2.1-0.el7.x86_64 -Original Message- From: Caspar Smit [mailto:caspars...@supernas.eu] Sent: Monday, 13 November 2017 9:58 To: ceph-users Subject: Re: [ceph-users] Getting errors on erasure pool writes k=2, m=1 Hi, Why would Ceph 12.2.1 give you this message

Re: [ceph-users] Getting errors on erasure pool writes k=2, m=1

2017-11-13 Thread Caspar Smit
Hi, Why would Ceph 12.2.1 give you this message: 2017-11-10 20:39:31.296101 7f840ad45e40 -1 WARNING: the following dangerous and experimental features are enabled: bluestore. Or is that a leftover warning message from an old client? Kind regards, Caspar 2017-11-10 21:27 GMT+01:00 Marc Roos : >

Re: [ceph-users] Getting errors on erasure pool writes k=2, m=1

2017-11-13 Thread Marc Roos
Subject: Re: [ceph-users] Getting errors on erasure pool writes k=2, m=1 As per: https://www.spinics.net/lists/ceph-devel/msg38686.html Bluestore has a hard 4GB object size limit On Sat, Nov 11, 2017 at 9:27 AM, Marc Roos wrote: > > osd's are crashing when putting an (8GB) file in an erasure coded

Re: [ceph-users] Getting errors on erasure pool writes k=2, m=1

2017-11-12 Thread Christian Wuerdig
As per: https://www.spinics.net/lists/ceph-devel/msg38686.html Bluestore has a hard 4GB object size limit On Sat, Nov 11, 2017 at 9:27 AM, Marc Roos wrote: > > osd's are crashing when putting an (8GB) file in an erasure coded pool, > just before finishing. The same osd's are used for replicated pools
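
For reference against the 4GB figure above, a small sketch (again assuming python-rados; the pool and object names are placeholders) that stats an existing object and compares its size to 4 GiB:

    import rados

    LIMIT = 4 * 1024 * 1024 * 1024  # 4 GiB, per the message above

    cluster = rados.Rados(conffile='/etc/ceph/ceph.conf')
    cluster.connect()
    try:
        ioctx = cluster.open_ioctx('ecpool')
        try:
            size, mtime = ioctx.stat('backup.img')
            print('size=%d bytes, over 4 GiB: %s' % (size, size > LIMIT))
        finally:
            ioctx.close()
    finally:
        cluster.shutdown()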

[ceph-users] Getting errors on erasure pool writes k=2, m=1

2017-11-10 Thread Marc Roos
osd's are crashing when putting an (8GB) file in an erasure coded pool, just before finishing. The same osd's are used for replicated pools rbd/cephfs, and seem to do fine. Did I make some error or is this a bug? Looks similar to https://www.spinics.net/lists/ceph-devel/msg38685.html http://lists.c