Hi Burkhard,

I found my problem, and it makes me feel like I need to slap myself awake. Let me 
show you my mistake.

What I had:
client.libvirt
  caps: [osd] allow class-read object_prefix rbd_children, allow rwx pool=rbd, 
allow rwx pool=ssd

What I have now:
client.libvirt
  caps: [mon] allow r
  caps: [osd] allow class-read object_prefix rbd_children, allow rwx pool=rbd, 
allow rwx pool=ssd

Silly me forgot to grant read access to the MON. I had it there but accidentally 
erased it.
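
For the archives: restoring the cap is a one-liner along these lines. Note that 
ceph auth caps replaces the client's whole cap set, so the osd caps have to be 
repeated as well:

ceph auth caps client.libvirt mon 'allow r' osd 'allow class-read object_prefix rbd_children, allow rwx pool=rbd, allow rwx pool=ssd'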

Thanks for the help anyway

Pieter

On Aug 05, 2015, at 05:08 PM, Burkhard Linke 
<burkhard.li...@computational.bio.uni-giessen.de> wrote:

Hi,

On 08/05/2015 05:54 PM, Pieter Koorts wrote:
Hi

I suspect something more sinister may be going on. I have set the values (though 
smaller) on my cluster, but the same issue happens. I also find that while the VM 
is trying to start there appears to be an IRQ flood, as processes like ksoftirqd 
use more CPU than they should.

####################
pool 1 'ssd' replicated size 3 min_size 2 crush_ruleset 1 object_hash rjenkins 
pg_num 128 pgp_num 128 last_change 60 flags hashpspool,incomplete_clones 
tier_of 0 cache_mode writeback target_bytes 120000000000 target_objects 1000000 
hit_set bloom{false_positive_probability: 0.05, target_size: 0, seed: 0} 1800s 
x1 stripe_width 0
####################
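
A writeback tier with these settings would typically be created with commands 
along these lines (that pool 0 here is the rbd pool is an assumption):

ceph osd tier add rbd ssd
ceph osd tier cache-mode ssd writeback
ceph osd tier set-overlay rbd ssd
ceph osd pool set ssd target_max_bytes 120000000000
ceph osd pool set ssd target_max_objects 1000000
ceph osd pool set ssd hit_set_type bloom
ceph osd pool set ssd hit_set_period 1800
ceph osd pool set ssd hit_set_count 1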

You can check whether the cache pool operates correctly by using the ceph admin 
user and the rbd command line tool or qemu-img to create some objects in the 
pools, e.g.

qemu-img create -f raw rbd:<hdd pool>/test 1G

rbd -p <hdd pool> import <some file>

(not sure about the correct syntax...)

If this is working correctly the pool setup is fine.
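
As a rough cross-check, rados df lists per-pool object counts; with a writeback 
tier in place the new objects should show up in the ssd pool first:

rados df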

Regards,
Burkhard
