Sorry about that.

It's all set now. I thought those were the replica counts, since they were also 4 and 5 :)

I can see the changes now:

[root@controller-node ~]# ceph osd dump | grep 'replicated size'
pool 4 'images' replicated size 2 min_size 2 crush_ruleset 0 object_hash
rjenkins pg_num 1024 pgp_num 1024 last_change 19641 flags hashpspool
stripe_width 0
pool 5 'volumes' replicated size 3 min_size 2 crush_ruleset 0 object_hash
rjenkins pg_num 512 pgp_num 512 last_change 19640 flags hashpspool
stripe_width 0
[root@controller-node ~]#
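
A quicker per-pool check (just a sketch, using the same pool names as above) is ceph osd pool get, which should report each value on its own line:

 ceph osd pool get images size        # expected: size: 2
 ceph osd pool get volumes size       # expected: size: 3
 ceph osd pool get volumes min_size   # expected: min_size: 2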


As for my other question: will it remove the excess replicas?

/vlad

On Wed, Sep 7, 2016 at 8:51 AM, Jeff Bailey <bai...@cs.kent.edu> wrote:

>
>
> On 9/6/2016 8:41 PM, Vlad Blando wrote:
>
>> Hi,
>>
>> My replication count now is this
>>
>> [root@controller-node ~]# ceph osd lspools
>> 4 images,5 volumes,
>>
>
> Those aren't replica counts; they're pool IDs.
>
>> [root@controller-node ~]#
>>
>> and I made the adjustment, setting it to 2 for images and 3 for volumes.
>> It's been 30 minutes now and the values did not change. How do I know if it
>> was really changed?
>>
>> these are the commands I executed:
>>
>>  ceph osd pool set images size 2
>>  ceph osd pool set volumes size 3
>>
>> ceph osd pool set images min_size 2
>> ceph osd pool set volumes min_size 2
>>
>>
>> Another question: since the previous replication count for images was 4
>> and for volumes 5, it will delete the excess replicas, right?
>>
>> Thanks for the help
>>
>>
>> /vlad
>>
>>
>

_______________________________________________
ceph-users mailing list
ceph-users@lists.ceph.com
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com
