[ceph-users] BUG 14154 on erasure coded PG

2016-09-09 Thread Gerd Jakobovitsch
Dear all, I am using an erasure coded pool, and I have gotten into a situation where I'm not able to recover a PG. The OSDs that contain this PG keep crashing, with the same behavior registered at http://tracker.ceph.com/issues/14154. I'm using ceph 0.94.9 (it first appeared on 0.94.7, an upgrade didn't

Re: [ceph-users] Ubuntu latest ceph-deploy fails to install hammer

2016-09-09 Thread Alex Gorbachev
Confirmed - older version of ceph-deploy is working fine. Odd as there is a large number of Hammer users out there. Thank you for the explanation and fix. -- Alex Gorbachev Storcium On Fri, Sep 9, 2016 at 12:15 PM, Vasu Kulkarni wrote: > There is a known issue with latest ceph-deploy with *ham

Re: [ceph-users] rgw meta pool

2016-09-09 Thread Casey Bodley
Hi, My (limited) understanding of this metadata heap pool is that it's an archive of metadata entries and their versions. According to Yehuda, this was intended to support recovery operations by reverting specific metadata objects to a previous version. But nothing has been implemented so far

Re: [ceph-users] help on keystone v3 ceph.conf in Jewel

2016-09-09 Thread LOPEZ Jean-Charles
Hi, from the log file it looks like librbd.so doesn’t contain a specific entry point that needs to be called. See my comment inline. Have you upgraded the ceph client packages on the cinder node and on the nova compute node? Or did you just do the upgrade on the ceph nodes? JC > On Sep 9, 2016,

[ceph-users] help on keystone v3 ceph.conf in Jewel

2016-09-09 Thread Robert Duncan
Hi, I have deployed the Mirantis distribution of OpenStack Mitaka which comes with Ceph Hammer. Since I want to use keystone v3 with radosgw, I added the Ubuntu cloud archive for Mitaka on Trusty and then followed the upgrade instructions (I had to remove the mos sources from sources.list). Anywa
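
For reference, a minimal sketch of the Jewel-era keystone v3 settings this thread is asking about; the rgw section name, endpoint, user, password and project below are placeholders, not values from the thread:

    [client.rgw.gateway]
    # keystone v3 endpoint and API version (placeholder host)
    rgw keystone url = http://keystone-host:5000
    rgw keystone api version = 3
    # service credentials used by radosgw to validate tokens (placeholders)
    rgw keystone admin user = rgw
    rgw keystone admin password = secret
    rgw keystone admin domain = Default
    rgw keystone admin project = service
    rgw keystone accepted roles = admin,_member_,Member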

Re: [ceph-users] Ubuntu latest ceph-deploy fails to install hammer

2016-09-09 Thread Vasu Kulkarni
There is a known issue with the latest ceph-deploy and *hammer*; the package split in releases after *hammer* is the root cause. If you use ceph-deploy 1.5.25 (an older version) it will work. You can get 1.5.25 from pypi. http://tracker.ceph.com/issues/17128 On Fri, Sep 9, 2016 at 8:28 AM, Shain M
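
A minimal sketch of pinning ceph-deploy to 1.5.25 via pip, as suggested above (assuming pip is available on the admin node):

    # remove the newer ceph-deploy and install the last release
    # that predates the post-hammer package split
    pip uninstall -y ceph-deploy
    pip install ceph-deploy==1.5.25
    ceph-deploy --version    # should report 1.5.25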

Re: [ceph-users] virtio-blk multi-queue support and RBD devices?

2016-09-09 Thread Jason Dillaman
On Fri, Sep 9, 2016 at 10:33 AM, Alexandre DERUMIER wrote: > The main bottleneck with rbd currently, is cpu usage (limited to 1 iothread > by disk) Yes, definitely a bottleneck -- but you can bypass the librbd IO dispatch thread by setting "rbd_non_blocking_aio = false" in your Ceph client confi
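
For illustration, the setting quoted above would go in the client section of ceph.conf on the hypervisor; a sketch only, with the trade-offs being exactly what the thread discusses:

    [client]
    # dispatch AIO completions in the caller's context instead of the
    # single librbd IO dispatch thread
    rbd non blocking aio = false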

Re: [ceph-users] Ubuntu latest ceph-deploy fails to install hammer

2016-09-09 Thread Shain Miley
Alex, I ran into this issue yesterday as well. I ended up just installing ceph via apt-get locally on the new server. I have not been able to get an actual osd added to the cluster at this point though (see my emails over the last 2 days or so). Please let me know if you end up able to add an o

Re: [ceph-users] virtio-blk multi-queue support and RBD devices?

2016-09-09 Thread Dzianis Kahanovich
Does "rbd op threads = N" solve bottleneck? IMHO it is possible to make this value automated by QEMU from num-queues. If now not. Alexandre DERUMIER пишет: > Hi, > > I'll test it next week to integrate it in proxmox. > > But I'm not sure I'll improve too much performance , > > until qemu will

[ceph-users] Ubuntu latest ceph-deploy fails to install hammer

2016-09-09 Thread Alex Gorbachev
This problem seems to occur with the latest ceph-deploy version 1.5.35 [lab2-mon3][DEBUG ] Fetched 5,382 kB in 4s (1,093 kB/s) [lab2-mon3][DEBUG ] Reading package lists... [lab2-mon3][INFO ] Running command: env DEBIAN_FRONTEND=noninteractive DEBIAN_PRIORITY=critical apt-get --assume-yes -q --no-

Re: [ceph-users] rgw meta pool

2016-09-09 Thread Warren Wang - ISD
A little extra context here. Currently the metadata pool looks like it is on track to exceed the number of objects in the data pool, over time. In a brand new cluster, we're already up to almost 2 million in each pool. NAME ID USED %USED MAX AVAIL OBJECTS

Re: [ceph-users] rgw meta pool

2016-09-09 Thread Pavan Rallabhandi
Any help on this is much appreciated; I am considering fixing this, given it is confirmed to be an issue, unless I am missing something obvious. Thanks, -Pavan. On 9/8/16, 5:04 PM, "ceph-users on behalf of Pavan Rallabhandi" wrote: Trying it one more time on the users list. In our clusters
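
For context, a hedged sketch of how one might inspect (and, if desired, blank) the metadata heap pool in a Jewel-era zone config; the zone name is a placeholder, and blanking the pool is an assumption about a possible fix, not something confirmed in this thread:

    # dump the zone config and look at the "metadata_heap" pool
    radosgw-admin zone get --rgw-zone=default > zone.json
    grep metadata_heap zone.json
    # editing "metadata_heap" to "" and feeding the file back would stop
    # new heap writes (assumption, not advice from the thread)
    radosgw-admin zone set --rgw-zone=default < zone.json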

Re: [ceph-users] virtio-blk multi-queue support and RBD devices?

2016-09-09 Thread Alexandre DERUMIER
Hi, I'll test it next week to integrate it in proxmox. But I'm not sure I'll improve too much performance , until qemu will be able to use multiple iothread with multiple queue. (I think that Paolo Bonzini still working on this currently). The main bottleneck with rbd currently, is cpu usage (

Re: [ceph-users] Ceph-deploy not creating osd's

2016-09-09 Thread Shain Miley
Can someone please suggest a course of action moving forward? I don't feel comfortable making changes to the crush map without a better understanding of what exactly is going on here. The new osd appears in the 'osd tree' but not in the current crush map. The server that hosts the osd is not pres
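
For reference only, since the poster deliberately stops short of changing the crush map: adding a missing host and OSD by hand usually looks like the sketch below, with the host name, osd id and weight as placeholders:

    # create a bucket for the new host and place it under the default root
    ceph osd crush add-bucket newhost host
    ceph osd crush move newhost root=default
    # add the osd to that host with an initial weight (placeholder values)
    ceph osd crush add osd.36 1.0 host=newhost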

[ceph-users] osd reweight vs osd crush reweight

2016-09-09 Thread Simone Spinelli
Hi all, we are running a 144-osd ceph cluster and a couple of osds are >80% full. This is the general situation: osdmap e29344: 144 osds: 144 up, 144 in pgmap v48302229: 42064 pgs, 18 pools, 60132 GB data, 15483 kobjects 173 TB used, 90238 GB / 261 TB avail We are currently m
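
For reference, the two commands compared in the subject line; the osd id, weights and threshold are placeholder values:

    # temporary override in [0..1], applied on top of the crush weight
    # (reset to 1.0 if the osd is marked out and comes back in)
    ceph osd reweight 27 0.85
    # permanent crush weight, normally sized to the disk capacity in TiB
    ceph osd crush reweight osd.27 1.64
    # hammer also offers an automated variant based on utilization
    ceph osd reweight-by-utilization 120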

[ceph-users] unauthorized to list radosgw swift container objects

2016-09-09 Thread B, Naga Venkata
Hi all, After upgrading from firefly (0.80.7) to hammer (0.94.7), I am unable to list objects in containers for a radosgw swift user, although I am able to list containers for the same user. I have created the user using radosgw-admin user create --subuser=s3User:swiftUser --display-name="First User" --ke
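
A sketch of the commands typically used to inspect and widen the subuser's permissions; treating --access=full as the missing piece is an assumption about the cause, not a confirmed fix from this thread:

    # check what access the swift subuser currently has
    radosgw-admin user info --uid=s3User
    # grant the subuser full access and (re)generate its swift secret key
    radosgw-admin subuser modify --uid=s3User --subuser=s3User:swiftUser --access=full
    radosgw-admin key create --subuser=s3User:swiftUser --key-type=swift --gen-secret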

Re: [ceph-users] Jewel 10.2.2 - Error when flushing journal

2016-09-09 Thread Mehmet
Hello Alexey, thank you for your mail - my answers inline :) On 2016-09-08 16:24, Alexey Sheplyakov wrote: Hi, root@:~# ceph-osd -i 12 --flush-journal > SG_IO: questionable sense data, results may be incorrect > SG_IO: questionable sense data, results may be incorrect As far as I unders

Re: [ceph-users] non-effective new deep scrub interval

2016-09-09 Thread David DELON
Hi, this is good for me: ceph tell osd.* injectargs --osd_scrub_end_hour 7 ceph tell osd.* injectargs --osd_scrub_load_threshold 0.1 About the "(unchangeable)" warning, it seems to be a bug according: http://tracker.ceph.com/issues/16054 Have a nice day. D. - Le 9 Sep 16, à 3:42, Christia