Michael, Loic,

I'm having similar issues with certain (k,m) combinations when storing
objects larger than 4MB. Since this thread is relatively old, I was
wondering if anyone has fixed it, or at least identified the bug.
Any news?
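For reference, this is roughly how I reproduce it. The profile/pool names
follow the ones quoted below; the test file is just zeros, sized past the
4MB mark where the failure shows up (the cluster-side commands of course
assume a running cluster with an EC pool already created):

```shell
# Generate a test object well beyond the 4MB boundary
# (15MB, matching the sampledata size from Michael's report).
dd if=/dev/zero of=sampledata bs=1M count=15 2>/dev/null
stat -c %s sampledata    # 15728640

# Then, against a live cluster (names taken from the commands quoted below):
#   ceph osd erasure-code-profile set profile33 ruleset-failure-domain=osd k=3 m=3
#   ceph osd crush rule create-erasure ecruleset33 profile33
#   ceph osd pool create testec-33 20 20 erasure profile33 ecruleset33
#   rados -p testec-33 put xyz sampledata
```

With objects under 4MB the put succeeds for me; the error only appears
on larger objects.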

Lluis

On Mon, Mar 31, 2014 at 5:08 PM, Michael Nelson <mn+ceph-us...@tnld.net> wrote:
>
>
> On Mon, 31 Mar 2014, Michael Nelson wrote:
>
>> Hi Loic,
>>
>> On Sun, 30 Mar 2014, Loic Dachary wrote:
>>
>>> Hi Michael,
>>>
>>> I'm trying to reproduce the problem from sources (today's instead of
>>> yesterday's but there is no difference that could explain the behaviour you
>>> have):
>>>
>>> cd src
>>> rm -fr /tmp/dev /tmp/out ; mkdir -p /tmp/dev
>>> CEPH_DIR=/tmp LC_ALL=C MON=1 OSD=6 bash -x ./vstart.sh -d -n -X -l mon osd
>>> ceph osd erasure-code-profile set profile33 ruleset-failure-domain=osd k=3 m=3
>>> ceph osd crush rule create-erasure ecruleset33 profile33
>>> ceph osd pool create testec-33 20 20 erasure profile33 ecruleset33
>>> ./rados --pool testec-33 put SOMETHING  /etc/group
>>>
>>> but it succeeds. Could you please script a minimal set of commands I
>>> could run to repeat the problem you're seeing ?
>>
>>
>> The file that I put into the pool was 15MB in size and the error occurred
>> after the first 4MB chunk. That might be the difference.
>>
>> I will try to come up with a list of commands in case that doesn't trigger
>> it for you.
>
>
> Here is a concise version of what I am deploying with (limited to 6 OSDs in
> this example).
>
> ceph-deploy new c1 c2 c3
> pdsh -wc[1-3] yum -y install ceph
> ceph-deploy mon create c1 c2 c3
> sleep 30
>
> cd /etc/ceph
> ceph-deploy gatherkeys c1 c2 c3
>
> machines=(c1 c2 c3)
>
> for machine in "${machines[@]}"; do
>         for n in {1..2}; do
>                 ssh root@$machine "rm -rf /data$n/osd$n && mkdir -p /data$n/osd$n"
>                 ssh root@$machine "rm -rf /journals/journal-osd$n"
>                 ceph-deploy osd prepare $machine:/data$n/osd$n:/journals/journal-osd$n
>                 ceph-deploy osd activate $machine:/data$n/osd$n:/journals/journal-osd$n
>         done
> done
>
> # Wait for OSDs to settle
> sleep 60
>
>
> ceph osd erasure-code-profile set profile33 ruleset-failure-domain=osd k=3 m=3
> ceph osd crush rule create-erasure ecruleset33 profile33
> ceph osd pool create ectest-33 20 20 erasure profile33 ecruleset33
>
> root@c1:~/ceph# rados -p ectest-33 put xyz sampledata
> error putting ectest-33/xyz: Operation not supported
>
> root@c1:~/ceph# du -h sampledata
> 15M     sampledata
>
>
> _______________________________________________
> ceph-users mailing list
> ceph-users@lists.ceph.com
> http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com



-- 
Lluís Pàmies i Juárez
http://lluis.pamies.cat