Do you know whether it uses 4 MB or 4096 bytes if this value is not set?

Thanks,
Robert LeBlanc

On Thu, Dec 18, 2014 at 6:51 PM, Tyler Wilson <k...@linuxdigital.net> wrote:
>
> Okay, this is rather unrelated to Ceph, but I might as well mention how
> this was fixed. With the Juno-release OpenStack packages,
> 'rbd_store_chunk_size = 8' now sets the chunk size to 8192 bytes rather
> than 8192 kB (8 MB), causing quite a few more objects to be stored and
> deleted. Setting it to 8192 got me the expected object size of 8 MB.
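>
> For anyone who wants to double-check the resulting object size, one way
> (just a sketch; the image name is the example from further down in the
> thread) is to look at the "order" line in rbd info:
>
>   $ rbd info compute/test | grep order
>           order 23 (8192 kB objects)
>
> Order 23 corresponds to 8 MB objects; order 22 is the usual 4 MB default.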
>
>
> On Thu, Dec 18, 2014 at 6:22 PM, Tyler Wilson <k...@linuxdigital.net>
> wrote:
>>
>> Hey All,
>>
>> On a new CentOS 7 deployment with Firefly I'm noticing strange behavior
>> when deleting RBD child disks. Upon deletion, CPU usage on each OSD
>> process rises to about 75% for 30+ seconds. On my previous deployments
>> with CentOS 6.x and Ubuntu 12/14 this was never a problem.
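>>
>> For anyone trying to reproduce this, one simple way to watch per-OSD CPU
>> during the delete (just a sketch, assuming sysstat/pidstat is available;
>> plain top works too) is to run, on each OSD node, alongside the 'rbd rm':
>>
>>   pidstat -p $(pgrep -d, ceph-osd) 1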
>>
>> Each RBD disk is 4 GB, created with 'rbd clone
>> images/136dd921-f6a2-432f-b4d6-e9902f71baa6@snap compute/test'.
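>>
>> For reference, that clone presumes the usual parent-image workflow,
>> roughly (the snapshot has to be protected before it can be cloned):
>>
>>   rbd snap create images/136dd921-f6a2-432f-b4d6-e9902f71baa6@snap
>>   rbd snap protect images/136dd921-f6a2-432f-b4d6-e9902f71baa6@snap
>>   rbd clone images/136dd921-f6a2-432f-b4d6-e9902f71baa6@snap compute/test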
>>
>> ## Ubuntu12 3.11.0-18-generic with Ceph 0.80.7
>> root@node-1:~# date; rbd rm compute/test123; date
>> Fri Dec 19 01:09:31 UTC 2014
>> Removing image: 100% complete...done.
>> Fri Dec 19 01:09:31 UTC 2014
>>
>> ## Cent7 3.18.1-1.el7.elrepo.x86_64 with Ceph 0.80.7
>> [root@hvm003 ~]# date; rbd rm compute/test; date
>> Fri Dec 19 01:08:32 UTC 2014
>> Removing image: 100% complete...done.
>> Fri Dec 19 01:09:00 UTC 2014
>>
>> [root@cpl001 ~]# ceph -s
>>     cluster d033718a-2cb9-409e-b968-34370bd67bd0
>>      health HEALTH_OK
>>      monmap e1: 3 mons at {cpl001=10.0.0.1:6789/0,mng001=10.0.0.3:6789/0,net001=10.0.0.2:6789/0}, election epoch 10, quorum 0,1,2 cpl001,net001,mng001
>>      osdmap e84: 9 osds: 9 up, 9 in
>>       pgmap v618: 1792 pgs, 12 pools, 4148 MB data, 518 kobjects
>>             15106 MB used, 4257 GB / 4272 GB avail
>>                 1792 active+clean
>>
>>
>> Any assistance would be appreciated.
>>
>
