Hi Burkhard,
I found my problem, and it makes me feel like I need to slap myself awake now. Let me show you my mistake.
What I had:

client.libvirt
        caps: [osd] allow class-read object_prefix rbd_children, allow rwx pool=rbd, allow rwx pool=ssd

What I have now:

client.libvirt
        caps: [mon] al
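
(The exact [mon] line is cut off above. Assuming the usual read-only monitor cap, the fix could be applied with something along these lines; 'allow r' is an assumption on my part, while the osd caps are copied from the listing above.)

ceph auth caps client.libvirt \
    mon 'allow r' \
    osd 'allow class-read object_prefix rbd_children, allow rwx pool=rbd, allow rwx pool=ssd'
# 'allow r' for mon is assumed; the actual mon cap in the post is truncated.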
Hi,
On 08/05/2015 05:54 PM, Pieter Koorts wrote:
Hi

I suspect something more sinister may be going on. I have set the values (though smaller) on my cluster, but the same issue happens. I also find that when the VM is trying to start there may be an IRQ flood, as processes like ksoftirqd seem to use more CPU than they should.
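
(A quick, non-authoritative way to check for a softirq flood while the VM is starting is to watch the per-CPU softirq counters and the CPU use of the ksoftirqd threads and the qemu process; pidstat comes from the sysstat package.)

watch -n 1 cat /proc/softirqs           # rapidly climbing counters point at a softirq/IRQ flood
pidstat -u -p $(pgrep -d, ksoftirqd) 1  # CPU use of the softirq kernel threads
pidstat -u -C qemu 1                    # CPU use of the qemu-system-x86 process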
Hi,
On 08/05/2015 03:09 PM, Pieter Koorts wrote:
Hi,

This is my OSD dump below

###
osc-mgmt-1:~$ sudo ceph osd dump | grep pool
pool 0 'rbd' replicated size 3 min_size 2 crush_ruleset 0 object_hash rjenkins pg_num 128 pgp_num 128 last_change 43 lfor 43 flags hashpspool tiers 1 read_tier 1 write_tier 1 stripe_width 0
pool 1 'ssd' re
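
(For context: "tiers 1 read_tier 1 write_tier 1" on pool 0 means pool 1 'ssd' is acting as a cache tier in front of 'rbd'. A layout like this is typically created with commands along these lines; the writeback mode is an assumption, since the dump is cut off before any cache_mode field.)

ceph osd tier add rbd ssd               # attach 'ssd' as a cache tier of 'rbd'
ceph osd tier cache-mode ssd writeback  # writeback is assumed, not shown in the dump
ceph osd tier set-overlay rbd ssd       # route client I/O for 'rbd' through the 'ssd' tier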
Hi,
On 08/05/2015 02:54 PM, Pieter Koorts wrote:
Hi Burkhard,

I seem to have missed that part, but even after allowing access (rwx) to the cache pool it still has a similar (though not the same) problem. The VM process starts, but it looks more like a dead or stuck process that tries forever to start, and it has high CPU usage (for the qemu-system-x86 process). Whe
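
(One way to rule the caps in or out is to exercise them directly with the same identity libvirt uses, outside of qemu; the keyring path below is an assumption, adjust it to wherever the client.libvirt key actually lives.)

rbd -n client.libvirt --keyring /etc/ceph/ceph.client.libvirt.keyring ls rbd
rbd -n client.libvirt --keyring /etc/ceph/ceph.client.libvirt.keyring ls ssd
# keyring path is assumed; if these hang or return a permission error,
# the problem is in the cephx caps rather than in libvirt/qemu
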
Hi,
On 08/05/2015 02:13 PM, Pieter Koorts wrote:
Hi All,

This seems to be a weird issue. Firstly, all deployment is done with "ceph-deploy", with 3 host machines acting as MON and OSD, using the Hammer release on Ubuntu 14.04.3 and running KVM (libvirt).

When using vanilla Ceph (single rbd pool, no log device or cache tiering), the virtual machin
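
(The libvirt side is not shown in the thread, but the usual wiring for a client.libvirt cephx user is a libvirt secret holding that key, roughly like the sketch below; secret.xml and the UUID are placeholders, not taken from the post.)

ceph auth get-key client.libvirt        # the cephx key libvirt needs
virsh secret-define --file secret.xml   # secret.xml declares a 'ceph' usage-type secret (placeholder file)
virsh secret-set-value --secret <uuid> --base64 "$(ceph auth get-key client.libvirt)"
# <uuid> is the UUID from secret.xml; the VM's disk definition then
# references the rbd image and this secret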