I built my rpms from source after cherry-picking the
commit listed.
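For anyone wanting to do the same, the rough shape is something like this (a sketch only; the tag and commit are placeholders, and the rpm build step depends on your distro):

git clone https://github.com/ceph/ceph.git && cd ceph
git checkout v10.2.7              # the release you are actually running
git cherry-pick <commit-sha>      # the fix referenced in this thread
# then build rpms the usual way for your distro,
# e.g. generate the tarball/spec and run rpmbuild against it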
On 10/05/17 13:12, Jurian Broertjes wrote:
I'm having issues with this as well. Since no new dev build is
available yet, I tried the gitbuilder route, but that seems to be
outdated.
eg: http://gitbuilder.ceph.c
If you are on the current release of Ceph Hammer 0.94.10 or Jewel 10.2.7,
you have it already. I don't remember which release it came out in, but
it's definitely in the current releases.
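If you want to confirm that your build has it, grepping the tool's usage output should be enough (this just checks that the op name is known to your build):

ceph --version                                    # should report 0.94.10 / 10.2.7 or later
ceph-objectstore-tool --help 2>&1 | grep apply-layout-settings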
On Thu, May 11, 2017, 12:24 AM Anton Dmitriev wrote:
> "recent enough version of the ceph-objectstore-tool"
I'm on Jewel 10.2.7
Do you mean this:
ceph-objectstore-tool --data-path /var/lib/ceph/osd/ceph-${osd_num} \
  --journal-path /var/lib/ceph/osd/ceph-${osd_num}/journal \
  --log-file=/var/log/ceph/objectstore_tool.${osd_num}.log \
  --op apply-layout-settings --pool default.rgw.buckets.data --debug
?
And
I honestly haven't investigated the command-line structure it would
need, but that looks like about what I'd expect.
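One thing worth spelling out: ceph-objectstore-tool needs exclusive access to the OSD's store, so the OSD has to be stopped while you run it. Per OSD, the sequence would look roughly like this (reusing the flags from the command above; the unit names assume a systemd host):

ceph osd set noout                          # avoid rebalancing while the OSD is down
systemctl stop ceph-osd@${osd_num}
ceph-objectstore-tool --data-path /var/lib/ceph/osd/ceph-${osd_num} \
  --journal-path /var/lib/ceph/osd/ceph-${osd_num}/journal \
  --op apply-layout-settings --pool default.rgw.buckets.data
systemctl start ceph-osd@${osd_num}
ceph osd unset noout                        # once all OSDs are done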
On Thu, May 11, 2017, 7:58 AM Anton Dmitriev wrote:
> I'm on Jewel 10.2.7
> Do you mean this:
> ceph-objectstore-tool --data-path /var/lib/ceph/osd/ceph-${osd_num}
> --journal-pat
Hi Jason,
it seems I can at least work around the crashes. Since I restarted ALL
OSDs after enabling exclusive-lock and rebuilding the object maps, there have been
no new crashes.
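For anyone following along, the per-image commands for that are roughly the following (pool/image names are placeholders):

rbd feature enable mypool/myimage exclusive-lock    # placeholder pool/image name
rbd feature enable mypool/myimage object-map        # object-map requires exclusive-lock
rbd object-map rebuild mypool/myimage               # rebuild the map once the feature is on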
What still makes me wonder are those
librbd::object_map::InvalidateRequest: 0x7f7860004410 should_complete: r=0
messages.
Gree
Seeing some odd behaviour while testing with rados bench. This is on
a pre-split pool, on a two-node cluster with 12 OSDs total.
ceph osd pool create newerpoolofhopes 2048 2048 replicated "" replicated_ruleset 5
rados -p newerpoolofhopes bench -t 32 -b 2 3000 write --no-cleanup
Using
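As an aside, one quick way to see whether the dips line up with particular OSDs is to watch per-OSD latency while the bench runs (a generic suggestion, not part of the original test):

watch -n 1 ceph osd perf          # per-OSD commit/apply latency
ceph -w                           # and/or follow the cluster log for slow request warnings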
Assuming the only log messages you are seeing are the following:
2017-05-06 03:20:50.830626 7f7876a64700 -1
librbd::object_map::InvalidateRequest: 0x7f7860004410 invalidating
object map in-memory
2017-05-06 03:20:50.830634 7f7876a64700 -1
librbd::object_map::InvalidateRequest: 0x7f7860004410 inval
It seems that some bottleneck is blocking the I/O: when the bottleneck
is reached, I/O is blocked and the curve goes down; when it is released, I/O
resumes and the curve goes up.
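One way to see what the I/O is actually stuck on during those dips is to look at the blocked and slowest recent ops on an OSD while a stall is happening (osd id is a placeholder; run on the host where that OSD lives):

ceph daemon osd.0 dump_ops_in_flight     # ops currently blocked on this OSD
ceph daemon osd.0 dump_historic_ops      # slowest recent ops and where they spent their time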
-----Original Message-----
From: ceph-users [mailto:ceph-users-boun...@lists.ceph.com] On Behalf Of Patrick Dinnen
Sent: May 12, 2017 3:47
It actually seems like these values aren't being honored: I see
many more objects being processed by gc (as well as Kraken object
lifecycle) than expected, even though my values are at the default of 32 objs.
19:52:44 root@<> /var/run/ceph $ ceph --admin-daemon
/var/run/ceph/ceph-client.<>.asok config sho
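For reference, one way to pull just the relevant throttling values out of the full config dump would be something like this (socket path is a placeholder; rgw_gc_max_objs / rgw_lc_max_objs are, as far as I know, the options that default to 32):

ceph --admin-daemon /var/run/ceph/ceph-client.<>.asok config show \
  | grep -E 'rgw_(gc|lc)_'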