Re: [ceph-users] PG stuck peering after host reboot

2017-02-21 Thread Wido den Hollander
> On 20 February 2017 at 17:52, george.vasilaka...@stfc.ac.uk wrote: > > > Hi Wido, > > Just to make sure I have everything straight, > > > If the PG still doesn't recover do the same on osd.307 as I think that > > 'ceph pg X query' still hangs? > > > The info from ceph-objectstore-tool mig
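A minimal sketch of the kind of checks being discussed (not taken from the thread itself), assuming the stuck PG is 1.323 as in the follow-up messages:
    # Show which OSDs the PG maps to and whether it is flagged in health output
    ceph pg map 1.323
    ceph health detail | grep 1.323
    # 'ceph pg X query' can hang on a stuck PG; bound it with a timeout
    timeout 60 ceph pg 1.323 query > /tmp/pg-1.323-query.json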

Re: [ceph-users] How safe is ceph pg repair these days?

2017-02-21 Thread Nick Fisk
> -Original Message- > From: ceph-users [mailto:ceph-users-boun...@lists.ceph.com] On Behalf Of > Gregory Farnum > Sent: 20 February 2017 22:13 > To: Nick Fisk ; David Zafman > Cc: ceph-users > Subject: Re: [ceph-users] How safe is ceph pg repair these days? > > On Sat, Feb 18, 2017 at 1
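For context, a hedged sketch of the usual inspect-before-repair workflow on Jewel-era releases; the PG id is hypothetical:
    PGID=2.5f   # hypothetical inconsistent PG
    # List which objects and shards scrub flagged, and why (read error, digest mismatch, ...)
    rados list-inconsistent-obj "$PGID" --format=json-pretty
    # Re-verify, then repair once you are satisfied the right copy will be used as the source
    ceph pg deep-scrub "$PGID"
    ceph pg repair "$PGID"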

Re: [ceph-users] PG stuck peering after host reboot

2017-02-21 Thread george.vasilakakos
> Can you, for the sake of redundancy, post the sequence of commands you > executed and their output? [root@ceph-sn852 ~]# systemctl stop ceph-osd@307 [root@ceph-sn852 ~]# ceph-objectstore-tool --data-path /var/lib/ceph/osd/ceph-307 --op info --pgid 1.323 PG '1.323' not found [root@ceph-sn852 ~]#

[ceph-users] Radosgw's swift api returns 403, and user can't be removed.

2017-02-21 Thread choury
Hi all, I created a user to test the swift api like this: { "user_id": "test", "display_name": "test", "email": "", "suspended": 0, "max_buckets": 1000, "auid": 0, "subusers": [ { "id": "test:swift", "permissions": "full-control" }
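As a reference point, a sketch of the usual swift subuser setup and cleanup this test exercises; the radosgw host and secret below are placeholders, not values from the message:
    # Create the swift subuser and a swift secret key for it
    radosgw-admin subuser create --uid=test --subuser=test:swift --access=full
    radosgw-admin key create --subuser=test:swift --key-type=swift --gen-secret
    # Authenticate against the radosgw swift endpoint (placeholder host/secret)
    swift -A http://rgw.example.com:7480/auth/1.0 -U test:swift -K 'SWIFT_SECRET' stat
    # Remove the user and its data once testing is done
    radosgw-admin user rm --uid=test --purge-data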

Re: [ceph-users] radosgw-admin bucket link: empty bucket instance id

2017-02-21 Thread Valery Tschopp
Hi, I have the same problem with 'radosgw-admin bucket link --bucket XXX --uid YYY', but with a Jewel radosgw. The admin REST API [1] does not work either :( Any idea? [1]: http://docs.ceph.com/docs/master/radosgw/adminops/#link-bucket On 28/01/16 17:03, Wido den Hollander wrote: Hi, I'm tr

[ceph-users] Cephfs with large numbers of files per directory

2017-02-21 Thread Rhian Resnick
Good morning, We are currently investigating using Ceph for a KVM farm, block storage and possibly file systems (cephfs with ceph-fuse, and ceph hadoop). Our cluster will be composed of 4 nodes, ~240 OSDs, and 4 monitors providing mon and mds as required. What experience has the community h

Re: [ceph-users] Cephfs with large numbers of files per directory

2017-02-21 Thread Logan Kuhn
We had a very similar configuration at one point. I was fairly new when we started to move away from it, but what happened to us is that any time a directory needed a stat, backup, ls, rsync, etc., it would take minutes to return, and while it was waiting CPU load would spike due to iowait. The
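One mitigation often suggested for this (a hedged aside, not from Logan's mail): CephFS exposes recursive statistics as virtual xattrs, so directory entry counts and sizes can be read without walking the tree. The mount point below is a placeholder:
    # Entry count of the directory itself, and recursive entry/byte counts below it
    getfattr -n ceph.dir.entries  /mnt/cephfs/bigdir
    getfattr -n ceph.dir.rentries /mnt/cephfs/bigdir
    getfattr -n ceph.dir.rbytes   /mnt/cephfs/bigdir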

Re: [ceph-users] PG stuck peering after host reboot

2017-02-21 Thread george.vasilakakos
I have noticed something odd with the ceph-objectstore-tool command: It always reports PG X not found even on healthy OSDs/PGs. The 'list' op works on both healthy and unhealthy PGs. From: ceph-users [ceph-users-boun...@lists.ceph.com] on behalf of george.vasil
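For comparison, the two ops side by side on the same OSD (paths and pgid taken from earlier in the thread). As an aside not confirmed in the thread: on erasure-coded pools the on-disk pgid carries a shard suffix, which can make a plain pgid lookup fail:
    systemctl stop ceph-osd@307
    # List objects held by this OSD for the PG
    ceph-objectstore-tool --data-path /var/lib/ceph/osd/ceph-307 --op list --pgid 1.323
    # PG metadata; on an EC pool try the sharded form, e.g. --pgid 1.323s0
    ceph-objectstore-tool --data-path /var/lib/ceph/osd/ceph-307 --op info --pgid 1.323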

Re: [ceph-users] CephFS : double objects in 2 pools

2017-02-21 Thread John Spray
On Tue, Feb 21, 2017 at 5:20 PM, Florent B wrote: > Hi everyone, > > I use a Ceph Jewel cluster. > > I have a CephFS with some directories at root, on which I defined some > layouts: > > # getfattr -n ceph.dir.layout maildata1/ > # file: maildata1/ > ceph.dir.layout="stripe_unit=1048576 stripe_co
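For reference, a hedged sketch of how such per-directory layouts are usually set and inspected; the mount point and pool name here are placeholders, not the ones from Florent's cluster:
    # Pin new files under this directory to a specific data pool
    setfattr -n ceph.dir.layout.pool -v cephfs_maildata /mnt/cephfs/maildata1
    # Inspect the directory layout and the layout a file actually received at creation time
    getfattr -n ceph.dir.layout  /mnt/cephfs/maildata1
    getfattr -n ceph.file.layout /mnt/cephfs/maildata1/some_file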

Re: [ceph-users] osd_snap_trim_sleep keeps locks PG during sleep?

2017-02-21 Thread Samuel Just
Ok, I've added explicit support for osd_snap_trim_sleep (same param, new non-blocking implementation) to that branch. Care to take it for a whirl? -Sam On Thu, Feb 9, 2017 at 11:36 AM, Nick Fisk wrote: > Building now > > > > *From:* ceph-users [mailto:ceph-users-boun...@lists.ceph.com] *On Beha
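For anyone wanting to test something similar, a hedged sketch of how the parameter is normally set and checked at runtime (the value is illustrative):
    # Inject a 100 ms sleep between snap trim operations on all OSDs
    ceph tell osd.* injectargs '--osd_snap_trim_sleep 0.1'
    # Confirm the running value via the admin socket (run on the host carrying osd.0)
    ceph daemon osd.0 config get osd_snap_trim_sleep
    # To persist it, set in ceph.conf under [osd]:  osd snap trim sleep = 0.1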

Re: [ceph-users] Rbd export-diff bug? rbd export-diff generates different incremental files

2017-02-21 Thread Jason Dillaman
On Mon, Feb 20, 2017 at 10:13 PM, Zhongyan Gu wrote: > You mentioned the fix is scheduled to be included in Hammer 0.94.10, Is > there any fix already there?? The fix for that specific diff issue is included in the hammer branch [1][2] -- but 0.94.10 hasn't been released yet. [1] http://tracker.
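For readers following along, a hedged sketch of the export-diff/import-diff round trip being debugged; pool, image, and snapshot names are placeholders:
    # Full diff up to snap1, then an incremental diff from snap1 to snap2
    rbd export-diff rbd/img@snap1 img-snap1.diff
    rbd export-diff --from-snap snap1 rbd/img@snap2 img-snap1-to-snap2.diff
    # Re-running the same incremental export and comparing checksums exposes the reported mismatch
    md5sum img-snap1-to-snap2.diff
    # Apply the incremental to a copy of the image that already has snap1
    rbd import-diff img-snap1-to-snap2.diff backup/img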

Re: [ceph-users] Cephfs with large numbers of files per directory

2017-02-21 Thread Rhian Resnick
Logan, Thank you for the feedback. Rhian Resnick Assistant Director Middleware and HPC Office of Information Technology Florida Atlantic University 777 Glades Road, CM22, Rm 173B Boca Raton, FL 33431 Phone 561.297.2647 Fax 561.297.0222

Re: [ceph-users] radosgw-admin bucket link: empty bucket instance id

2017-02-21 Thread Casey Bodley
When it complains about a missing bucket instance id, that's what it's expecting to get from the --bucket-id argument. That's the "id" field shown in bucket stats. Try this? $ radosgw-admin bucket link --bucket=XXX --bucket-id=YYY --uid=ZZZ Casey On 02/21/2017 08:30 AM, Valery Tschopp wrote:
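Putting Casey's suggestion together with where the id comes from, a hedged sketch (bucket, id, and uid placeholders follow the XXX/YYY/ZZZ used in the messages):
    # The "id" field in bucket stats is the bucket instance id the link command expects
    radosgw-admin bucket stats --bucket=XXX
    # Relink the bucket to the target user using that id
    radosgw-admin bucket link --bucket=XXX --bucket-id=YYY --uid=ZZZ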

Re: [ceph-users] RADOSGW S3 api ACLs

2017-02-21 Thread Andrew Bibby
Josef, A co-maintainer of the radula project forwarded this message to me. Our little project started specifically to address the handling of ACLs of uploaded objects through the S3 api, but has since grown to include other nice-to-haves. We noted that it was possible to upload objects to a buck

Re: [ceph-users] osd_snap_trim_sleep keeps locks PG during sleep?

2017-02-21 Thread Nick Fisk
Yep sure, will try and present some figures at tomorrow’s meeting again. From: Samuel Just [mailto:sj...@redhat.com] Sent: 21 February 2017 18:14 To: Nick Fisk Cc: ceph-users@lists.ceph.com Subject: Re: [ceph-users] osd_snap_trim_sleep keeps locks PG during sleep? Ok, I've added explicit

Re: [ceph-users] Passing LUA script via python rados execute

2017-02-21 Thread Nick Fisk
> > On 02/19/2017 12:15 PM, Patrick Donnelly wrote: > > On Sat, Feb 18, 2017 at 2:55 PM, Noah Watkins > wrote: > >> The least intrusive solution is to simply change the sandbox to allow > >> the standard file system module loading function as expected. Then > >> any user would need to make sure t

Re: [ceph-users] Passing LUA script via python rados execute

2017-02-21 Thread Patrick Donnelly
On Tue, Feb 21, 2017 at 4:45 PM, Nick Fisk wrote: > I'm trying to put some examples together for a book and so wanted to try and > come up with a more out of the box experience someone could follow. I'm > guessing some basic examples in LUA and then some custom rados classes in C++ > might be t

Re: [ceph-users] help with crush rule

2017-02-21 Thread Brian Andrus
I don't think a CRUSH rule exception is currently possible, but it makes sense to me for a feature request. On Sat, Feb 18, 2017 at 6:16 AM, Maged Mokhtar wrote: > > Hi, > > I have a need to support a small cluster with 3 hosts and 3 replicas given > that in normal operation each replica will be
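For the non-exception part of the request, a hedged sketch of the standard 3-hosts/3-replicas setup (rule and pool names are placeholders; the "fall back to fewer hosts" exception itself is what Brian says is not expressible today):
    # One replica per host, chosen from the default root
    ceph osd crush rule create-simple three-hosts default host firstn
    ceph osd crush rule dump three-hosts          # note the ruleset id it was given
    # Point a 3-replica pool at the new rule (crush_ruleset on Jewel, crush_rule on later releases)
    ceph osd pool set rbd size 3
    ceph osd pool set rbd crush_ruleset 1         # 1 = the id reported above, not a fixed value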

Re: [ceph-users] How safe is ceph pg repair these days?

2017-02-21 Thread David Zafman
Nick, Yes, as you would expect a read error would not be used as a source for repair no matter which OSD(s) are getting read errors. David On 2/21/17 12:38 AM, Nick Fisk wrote: -Original Message- From: ceph-users [mailto:ceph-users-boun...@lists.ceph.com] On Behalf Of Gregory F

Re: [ceph-users] Rbd export-diff bug? rbd export-diff generates different incremental files

2017-02-21 Thread Zhongyan Gu
Well, we have already included this fix in our test setup. I think this time we encountered another potential bug in the export process. We are diving into the code and trying to put together an easy reproduce case. On Wed, Feb 22, 2017 at 2:28 AM, Jason Dillaman wrote: > On Mon, Feb 20, 2017 at 10:13 PM, Zhon

Re: [ceph-users] Rbd export-diff bug? rbd export-diff generates different incremental files

2017-02-21 Thread Jason Dillaman
On Tue, Feb 21, 2017 at 8:28 PM, Zhongyan Gu wrote: > Well, we have already included this fix in our test setup. I think this time > we encountered another potential bug in the export process. We are diving > into the code and trying to put together an easy reproduce case. Even if you know you can eventually repro

[ceph-users] Having many Pools

2017-02-21 Thread Mustafa AKIN
Hi, I’m fairly new to Ceph. We are building a shared system on top of Ceph. I know that OpenStack uses a few pools and handles the ownership itself. But would it be undesirable to create a pool for a user in Ceph? It would lead to having too many Placement Groups; is there any bad effect from that?
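A back-of-envelope illustration of why per-user pools get expensive (the numbers are purely hypothetical, not from Mustafa's cluster):
    # 100 tenant pools x 128 PGs x 3 replicas on 24 OSDs
    #   100 * 128 * 3 / 24 = 1600 PG copies per OSD -- far above the ~100 usually recommended
    ceph osd pool create tenant-foo 128 128       # each such pool adds 128 PGs permanently
    # Current PG total, worth watching while adding pools
    ceph -s | grep -i pgmap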

[ceph-users] NVRAM cache for ceph journal

2017-02-21 Thread Horace
Dear all, Has anybody got any experience with this product? It is a BBU-backed NVRAM cache; I think it is a very good fit for Ceph. https://www.microsemi.com/products/storage/flashtec-nvram-drives/nv1616 Regards, Horace Ng

Re: [ceph-users] Having many Pools

2017-02-21 Thread Christian Balzer
On Wed, 22 Feb 2017 06:21:41 + Mustafa AKIN wrote: > Hi, I’m fairly new to Ceph. We are building a shared system on top of Ceph. I > know that OpenStack uses a few pools and handles the ownership itself. But > would it be undesirable to create a pool for a user in Ceph? It would lead to > h

Re: [ceph-users] NVRAM cache for ceph journal

2017-02-21 Thread Christian Balzer
Hello, On Wed, 22 Feb 2017 15:07:48 +0800 (HKT) Horace wrote: > Dear all, > > Has anybody got any experience with this product? It is a BBU-backed NVRAM > cache; I think it is a very good fit for Ceph. > > https://www.microsemi.com/products/storage/flashtec-nvram-drives/nv1616 > Not this product, but s