Dear all,
I am using an erasure coded pool, and I have gotten into a situation where I'm
not able to recover a PG. The OSDs that contain this PG keep crashing, with
the same behavior reported at http://tracker.ceph.com/issues/14154.
I'm using ceph 0.94.9 (it first appeared on 0.94.7, an upgrade didn't
Confirmed - the older version of ceph-deploy is working fine. Odd, as
there are a large number of Hammer users out there. Thank you for the
explanation and fix.
--
Alex Gorbachev
Storcium
On Fri, Sep 9, 2016 at 12:15 PM, Vasu Kulkarni wrote:
> There is a known issue with the latest ceph-deploy on *hammer*; the
> package split in releases after *hammer* is the root cause.
Hi,
My (limited) understanding of this metadata heap pool is that it's an
archive of metadata entries and their versions. According to Yehuda,
this was intended to support recovery operations by reverting specific
metadata objects to a previous version. But nothing has been implemented
so far.
Hi,
From the log file it looks like librbd.so doesn’t contain a specific entry
point that needs to be called. See my comment inline.
Have you upgraded the ceph client packages on the cinder node and on the nova
compute node, or did you just do the upgrade on the ceph nodes?
JC
> On Sep 9, 2016,
Hi,
I have deployed the Mirantis distribution of OpenStack Mitaka, which comes with
Ceph Hammer. Since I want to use keystone v3 with radosgw, I added the Ubuntu
cloud archive for Mitaka on Trusty,
and then followed the upgrade instructions (I had to remove the mos sources from
sources.list).
Anywa
There is a known issue with the latest ceph-deploy on *hammer*; the
package split in releases after *hammer* is the root cause.
If you use ceph-deploy 1.5.25 (an older version) it will work. You can
get 1.5.25 from PyPI.
http://tracker.ceph.com/issues/17128
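For anyone hitting this, pinning the older release is a one-liner (assuming
pip is available on the admin node):
    pip install ceph-deploy==1.5.25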
On Fri, Sep 9, 2016 at 8:28 AM, Shain M
On Fri, Sep 9, 2016 at 10:33 AM, Alexandre DERUMIER wrote:
> The main bottleneck with rbd currently is cpu usage (limited to 1 iothread
> per disk)
Yes, definitely a bottleneck -- but you can bypass the librbd IO
dispatch thread by setting "rbd_non_blocking_aio = false" in your Ceph
client config.
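For example, a minimal sketch of the setting in ceph.conf on the client host
(section placement assumed; verify against your own config layout):
    [client]
    rbd non blocking aio = false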
Alex,
I ran into this issue yesterday as well.
I ended up just installing ceph via apt-get locally on the new server.
I have not been able to get an actual osd added to the cluster at this point
though (see my emails over the last 2 days or so).
Please let me know if you end up being able to add an osd.
Does "rbd op threads = N" solve bottleneck? IMHO it is possible to make this
value automated by QEMU from num-queues. If now not.
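(If someone wants to experiment, the option would go in the client section of
ceph.conf; the value below is an arbitrary example, not a recommendation:)
    [client]
    rbd op threads = 4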
Alexandre DERUMIER writes:
> Hi,
>
> I'll test it next week to integrate it in proxmox.
>
> But I'm not sure it will improve performance much
>
> until qemu is able to use multiple iothreads with multiple queues.
This problem seems to occur with the latest ceph-deploy version 1.5.35
[lab2-mon3][DEBUG ] Fetched 5,382 kB in 4s (1,093 kB/s)
[lab2-mon3][DEBUG ] Reading package lists...
[lab2-mon3][INFO ] Running command: env
DEBIAN_FRONTEND=noninteractive DEBIAN_PRIORITY=critical apt-get
--assume-yes -q --no-
A little extra context here. Currently the metadata pool looks like it is
on track to exceed the number of objects in the data pool, over time. In a
brand new cluster, we're already up to almost 2 million in each pool.
NAME ID USED %USED MAX AVAIL OBJECTS
Any help on this is much appreciated. I am considering fixing this, given it’s
a confirmed issue, unless I am missing something obvious.
Thanks,
-Pavan.
On 9/8/16, 5:04 PM, "ceph-users on behalf of Pavan Rallabhandi"
wrote:
Trying it one more time on the users list.
In our clusters
Hi,
I'll test it next week to integrate it in proxmox.
But I'm not sure it will improve performance much
until qemu is able to use multiple iothreads with multiple queues.
(I think Paolo Bonzini is still working on this.)
The main bottleneck with rbd currently is cpu usage (limited to 1 iothread per disk).
Can someone please suggest a course of action moving forward?
I don't feel comfortable making changes to the crush map without a better
understanding of what exactly is going on here.
The new osd appears in the 'osd tree' but not in the current crush map. The
server that hosts the osd is not present in the crush map.
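For what it's worth, a read-only way to inspect the crush map before changing
anything is to dump and decompile it (standard tooling; paths are examples):
    ceph osd getcrushmap -o /tmp/crushmap
    crushtool -d /tmp/crushmap -o /tmp/crushmap.txt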
Hi all,
we are running a 144 osds ceph cluster and a couple of osd are >80% full.
This is the general situation:
osdmap e29344: 144 osds: 144 up, 144 in
pgmap v48302229: 42064 pgs, 18 pools, 60132 GB data, 15483 kobjects
173 TB used, 90238 GB / 261 TB avail
We are currently m
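(The message is cut off above; for context, the usual Hammer-era remedy for
nearly full OSDs is a gradual reweight, e.g. - the threshold, osd id and
weight below are made-up examples:)
    ceph osd reweight-by-utilization 110
    ceph osd reweight 12 0.9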
Hi all,
After upgrading from firefly (0.80.7) to hammer (0.94.7), I am unable to list
objects in containers for a radosgw swift user, though I am able to list
containers for the same user.
I have created the user using
radosgw-admin user create --subuser=s3User:swiftUser --display-name="First
User" --ke
Hello Alexey,
thank you for your mail - my answers inline :)
Am 2016-09-08 16:24, schrieb Alexey Sheplyakov:
Hi,
root@:~# ceph-osd -i 12 --flush-journal
> SG_IO: questionable sense data, results may be incorrect
> SG_IO: questionable sense data, results may be incorrect
As far as I unders
Hi,
this works for me:
ceph tell osd.* injectargs --osd_scrub_end_hour 7
ceph tell osd.* injectargs --osd_scrub_load_threshold 0.1
About the "(unchangeable)" warning, it seems to be a bug according:
http://tracker.ceph.com/issues/16054
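(Note that injectargs only changes the running daemons; to make the same
settings persist across restarts they would go in ceph.conf, e.g.:)
    [osd]
    osd scrub end hour = 7
    osd scrub load threshold = 0.1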
Have a nice day.
D.
- On 9 Sep 16, at 3:42, Christia