[ceph-users] Ceph per-user stats?

2016-12-23 Thread Henrik Korkuc
Hello, I wondered if Ceph can emit per-user IO and bandwidth stats (via perf counters, statsd, or some other way)? I was unable to find such stats. I know that we can get at least some of them from RGW, but I'd like to have something like that for RBD and CephFS. Example u
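For the RGW side mentioned above, a hedged sketch of what per-user stats look like there, assuming the usage log is enabled (rgw enable usage log = true) and a hypothetical user "johndoe":

```
# Hypothetical example: per-user ops and byte counts from the RGW usage log.
radosgw-admin usage show --uid=johndoe --start-date=2016-12-01
```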

[ceph-users] ceph keystone integration

2016-12-23 Thread Tadas
Hello, I'm currently trying to integrate Ceph with OpenStack object storage. Everything works fine: I can use object storage from the OpenStack side, upload and download files, etc. The problem is that Ceph fails to query Keystone for revoked tokens, with this error: 2016-12-23 12:01:30.972648 7f5aaf7d670
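For context, a minimal sketch of the ceph.conf keys involved in the Keystone integration of that era; all values are placeholders. Note that the revoked-token query relies on PKI-signed tokens and a populated NSS database, so with UUID or Fernet tokens it is expected to fail:

```
[client.rgw]
# Placeholders only; point these at the real Keystone endpoint and roles.
rgw keystone url = http://keystone.example.com:35357
rgw keystone admin token = ADMIN_TOKEN
rgw keystone accepted roles = admin, _member_
rgw keystone revocation interval = 900
nss db path = /var/ceph/nss
```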

Re: [ceph-users] What is pauserd and pausewr status?

2016-12-23 Thread Stéphane Klein
2016-12-22 18:09 GMT+01:00 Wido den Hollander : > > > On 22 December 2016 at 17:55, Stéphane Klein < > cont...@stephane-klein.info> wrote: > > > > > > Hi, > > > > I have this status: > > > > bash-4.2# ceph status > > cluster 7ecb6ebd-2e7a-44c3-bf0d-ff8d193e03ac > > health HEALTH_WARN > >

Re: [ceph-users] If I shut down 2 of 3 OSDs, why does Ceph say 2 OSDs are UP?

2016-12-23 Thread Stéphane Klein
There is very interesting documentation on this subject here: http://docs.ceph.com/docs/hammer/rados/configuration/mon-osd-interaction/ 2016-12-22 12:26 GMT+01:00 Stéphane Klein : > Hi, > > I have: > > * 3 mon > * 3 osd > > When I shut down one OSD, it works great: > > cluster 7ecb6ebd-2e7a-44c3-bf

Re: [ceph-users] Ceph pg active+clean+inconsistent

2016-12-23 Thread Brad Hubbard
Could you also try this? $ attr -l ./DIR_1/DIR_F/DIR_3/DIR_9/DIR_8/1000187bb70.0009__head_EED893F1__6 Take note of any of ceph._, ceph._@1, ceph._@2, etc. For me on my test cluster it looks like this. $ attr -l dev/osd1/current/0.3_head/benchmark\\udata\\urskikr.localdomain\\u16952\\uobjec
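If any of those attributes are present, a hypothetical follow-up is to dump one raw value for inspection:

```
# -q suppresses the attribute-name banner so only the raw value is printed.
attr -q -g ceph._ ./DIR_1/DIR_F/DIR_3/DIR_9/DIR_8/1000187bb70.0009__head_EED893F1__6 | hexdump -C | head
```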

Re: [ceph-users] If I shut down 2 of 3 OSDs, why does Ceph say 2 OSDs are UP?

2016-12-23 Thread Stéphane Klein
2016-12-23 2:17 GMT+01:00 Jie Wang : > OPTION(mon_osd_min_down_reporters, OPT_INT, 2) // number of OSDs from > different subtrees who need to report a down OSD for it to count > > Yes, that's it: # ceph --admin-daemon /var/run/ceph/ceph-mon.ceph-mon-1.asok config show | grep "repor" "mon_o

[ceph-users] Why isn't mon_osd_min_down_reporters set to 1, the default value in the documentation? Is it a bug?

2016-12-23 Thread Stéphane Klein
Hi, in the documentation at http://docs.ceph.com/docs/hammer/rados/configuration/mon-osd-interaction/ I see: * mon osd min down reporters. Description: The minimum number of Ceph OSD Daemons required to report a down Ceph OSD Daemon. Type: 32-bit Integer. Default: 1. I have used https://github.c
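For reference, a sketch of overriding the value at runtime; this is not persistent across restarts, so persist it under [mon] in ceph.conf if it behaves as desired:

```
# Assumes admin access to the monitors; 1 mirrors the documented default.
ceph tell mon.* injectargs '--mon-osd-min-down-reporters 1'
```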

Re: [ceph-users] What is pauserd and pausewr status?

2016-12-23 Thread Wido den Hollander
> On 23 December 2016 at 10:31, Stéphane Klein > wrote: > > > 2016-12-22 18:09 GMT+01:00 Wido den Hollander : > > > > > > On 22 December 2016 at 17:55, Stéphane Klein < > > cont...@stephane-klein.info> wrote: > > > > > > > > > Hi, > > > > > > I have this status: > > > > > > bash-4.2# ceph s

Re: [ceph-users] What is pauserd and pausewr status?

2016-12-23 Thread Stéphane Klein
2016-12-23 11:35 GMT+01:00 Wido den Hollander : > > > On 23 December 2016 at 10:31, Stéphane Klein < > cont...@stephane-klein.info> wrote: > > > > > > 2016-12-22 18:09 GMT+01:00 Wido den Hollander : > > > > > > > > > On 22 December 2016 at 17:55, Stéphane Klein < > > > cont...@stephane-kle

[ceph-users] Why don't I see "mon osd min down reports" in the "config show" output?

2016-12-23 Thread Stéphane Klein
Hi, when I execute:
```
root@ceph-mon-1:/home/vagrant# ceph --admin-daemon /var/run/ceph/ceph-mon.ceph-mon-1.asok config show | grep "down"
    "mon_osd_adjust_down_out_interval": "true",
    "mon_osd_down_out_interval": "300",
    "mon_osd_down_out_subtree_limit": "rack",
    "mon_pg_check_down_
```
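The preview is cut off, but note that the option's real name ends in "reporters" (see the OPTION() line quoted earlier in this digest), so a narrower grep should surface it:

```
ceph --admin-daemon /var/run/ceph/ceph-mon.ceph-mon-1.asok config show | grep reporter
```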

Re: [ceph-users] What is pauserd and pausewr status?

2016-12-23 Thread Henrik Korkuc
On 16-12-23 12:43, Stéphane Klein wrote: 2016-12-23 11:35 GMT+01:00 Wido den Hollander : > On 23 December 2016 at 10:31, Stéphane Klein <cont...@stephane-klein.info> wrote: > > > 2016-12-22 18:09 GMT+01:00 Wido den Hollander <w...@42on.

Re: [ceph-users] Why don't I see "mon osd min down reports" in the "config show" output?

2016-12-23 Thread Craig Chi
Hi Stéphane Klein, your two mail threads describe similar situations. 1. Just a reminder: if you are using Jewel, you should look for the jewel page in the URL. For example, you should see http://docs.ceph.com/docs/jewel/rados/configuration/mon-osd-interaction/ instead of hammer. 2. The

Re: [ceph-users] What is pauserd and pausewr status?

2016-12-23 Thread Stéphane Klein
2016-12-23 13:03 GMT+01:00 Henrik Korkuc : > On 16-12-23 12:43, Stéphane Klein wrote: > > > 2016-12-23 11:35 GMT+01:00 Wido den Hollander : > >> >> > On 23 December 2016 at 10:31, Stéphane Klein < >> cont...@stephane-klein.info> wrote: >> > >> > >> > 2016-12-22 18:09 GMT+01:00 Wido den Hollander

Re: [ceph-users] BlueStore with v11.1.0 Kraken

2016-12-23 Thread Wido den Hollander
> On 22 December 2016 at 14:36, Eugen Leitl wrote: > > > Hi guys, > > I'm building a first test cluster for my homelab, and would like to start > using BlueStore since data loss is not critical. However, there is > obviously no official documentation on basic best usage online yet. > True, s
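For anyone following along, a sketch of creating a BlueStore OSD with the provisioning tool of that era; BlueStore was still experimental in Kraken, so this assumes a disposable test cluster, and /dev/sdb is a placeholder:

```
# Prepares a whole-disk BlueStore OSD (experimental at the time).
ceph-disk prepare --bluestore /dev/sdb
```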

Re: [ceph-users] Clone data inconsistency in hammer

2016-12-23 Thread Bartłomiej Święcki
Hi, I used Kraken 11.1.1 from the official deb repo, which has the mentioned patch merged in; it worked without problems. For reference, here are the steps I took to fix the cluster: 1) Set up a Ceph client with the newest Kraken version and ensure it can connect to the cluster 2) Get the broken image id:
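The preview cuts off at step 2; as a hypothetical illustration (pool and image names are placeholders), an image id can be read from the block_name_prefix that rbd info reports:

```
rbd info rbd/broken-image | grep block_name_prefix
# e.g. "block_name_prefix: rbd_data.101274b0dc51" -> image id 101274b0dc51
```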

Re: [ceph-users] BlueStore with v11.1.0 Kraken

2016-12-23 Thread Eugen Leitl
Hi Wido, thanks for your comments. On Fri, Dec 23, 2016 at 02:00:44PM +0100, Wido den Hollander wrote: > > My original layout was using 2x single Xeon nodes with 24 GB RAM each > > under Proxmox VE for the test application and two metadata servers, > > each as a VM guest. Each VM would be about

Re: [ceph-users] rgw leaking data, orphan search loop

2016-12-23 Thread Wido den Hollander
> On 22 December 2016 at 19:00, Orit Wasserman wrote: > > > Hi Marius, > > On Thu, Dec 22, 2016 at 12:00 PM, Marius Vaitiekunas > wrote: > > On Thu, Dec 22, 2016 at 11:58 AM, Marius Vaitiekunas > > wrote: > >> > >> Hi, > >> > >> 1) I've written to the mailing list before, but one more time. W

[ceph-users] Atomic Operations?

2016-12-23 Thread Kent Borg
Hello, a newbie here! Doing some playing with Python and librados, and it is mostly easy to use, but I am confused about atomic operations. The documentation isn't clear to me, and Google isn't giving me obvious answers either... I would like to do some locking. The data structures I am playi
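RADOS does offer advisory locks; the Python binding exposes them as Ioctx.lock_exclusive()/unlock(), and the rados CLI exposes the same machinery. A sketch with placeholder pool and object names:

```
# Take, inspect, and list an exclusive advisory lock on a single object.
rados -p testpool lock get myobject mylock --lock-type exclusive
rados -p testpool lock info myobject mylock
rados -p testpool lock list myobject
```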

Re: [ceph-users] Ceph pg active+clean+inconsistent

2016-12-23 Thread Shinobu Kinjo
Also run this: # rados list-inconsistent-obj ${PG ID} On Fri, Dec 23, 2016 at 7:08 PM, Brad Hubbard wrote: > Could you also try this? > > $ attr -l > ./DIR_1/DIR_F/DIR_3/DIR_9/DIR_8/1000187bb70.0009__head_EED893F1__6 > > Take note of any of ceph._, ceph._@1, ceph._@2, etc. > > For m
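A sketch of the same query with readable output; the PG id here is a placeholder (the object's __6 suffix above suggests pool 6):

```
rados list-inconsistent-obj 6.31 --format=json-pretty
```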