Re: [ceph-users] Migrating from one Ceph cluster to another

2016-06-10 Thread Wido den Hollander
> On 10 June 2016 at 00:04, Brian Kroth wrote: > > > I'd considered a similar migration path in the past (slowly rotate > updated osds into the pool and old ones out), but then after watching > some of the bugs and discussions regarding ceph cache tiering and the > like between giant and ham
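For readers unfamiliar with the "rotate OSDs in and out" idea being discussed, a minimal sketch of how one OSD is usually drained and retired; the host and OSD id (osd.12) are purely illustrative, and this is not presented as the poster's exact procedure:

    # drain the old OSD gradually by lowering its CRUSH weight
    ceph osd crush reweight osd.12 0.0
    # wait for rebalancing to finish before touching the next OSD
    ceph -s
    # once the cluster is healthy again, take the OSD out and remove it
    ceph osd out 12
    systemctl stop ceph-osd@12
    ceph osd crush remove osd.12
    ceph auth del osd.12
    ceph osd rm 12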

Re: [ceph-users] RGW integration with keystone

2016-06-10 Thread fridifree
I have isolated the problem. There was an error at line 73 in ceph-crypto.cc, and I understood that this is a problem with the nss db path. So I removed the nss db path line and voila, the integration is working and radosgw can start. I don't know if it is a bug or something I did wrong with the nss. T
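For context, the settings in question normally sit in the radosgw section of ceph.conf. A minimal sketch, assuming the common client.radosgw.gateway section name and placeholder URLs/paths (not the poster's actual config):

    [client.radosgw.gateway]
        rgw keystone url = http://keystone.example.com:35357
        rgw keystone admin token = ADMIN_TOKEN_PLACEHOLDER
        rgw keystone accepted roles = admin, Member
        # the line removed in the report above; only needed when keystone
        # tokens are PKI-signed and must be verified locally via NSS
        nss db path = /var/ceph/nss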

Re: [ceph-users] un-even data filled on OSDs

2016-06-10 Thread M Ranga Swami Reddy
Thanks Blair. Yes, will plan to upgrade my cluster. Thanks Swami On Fri, Jun 10, 2016 at 7:40 AM, Blair Bethwaite wrote: > Hi Swami, > > That's a known issue, which I believe is much improved in Jewel thanks > to a priority queue added somewhere in the OSD op path (I think). If I > were you I'd
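Independent of the upgrade, the imbalance itself can be inspected and corrected with the usual reweight tools; a generic sketch, with the OSD id and threshold values chosen only for illustration:

    # show per-OSD utilisation and variance
    ceph osd df tree
    # gently reweight only OSDs above 120% of the average utilisation
    ceph osd reweight-by-utilization 120
    # or adjust a single OSD by hand (override weight between 0.0 and 1.0)
    ceph osd reweight 7 0.85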

Re: [ceph-users] RDMA/Infiniband status

2016-06-10 Thread Daniel Swarbrick
On 10/06/16 02:33, Christian Balzer wrote: > > > This thread brings back memories of this one: > http://lists.ceph.com/pipermail/ceph-users-ceph.com/2016-April/008792.html > > According to Robert IPoIB still uses IB multicast under the hood even when > from an IP perspective traffic would be uni
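For anyone who wants to see what their IPoIB interface is actually doing, a couple of read-only checks (assuming the interface is named ib0):

    # connected vs. datagram mode (affects MTU and throughput)
    cat /sys/class/net/ib0/mode
    cat /sys/class/net/ib0/mtu
    # multicast groups the interface has joined
    ip maddr show dev ib0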

Re: [ceph-users] un-even data filled on OSDs

2016-06-10 Thread Max A. Krasilnikov
Hello! On Fri, Jun 10, 2016 at 07:38:10AM +0530, swamireddy wrote: > Blair - Thanks for the details. I used to set a low priority for > recovery during the rebalance/recovery activity. > Even though I set recovery_priority to 5 (instead of 1) and > client-op_priority to 63, some of my
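For reference, the OSD-side knobs being discussed can be checked and changed at runtime; a sketch with illustrative values only (run the admin-socket query on the host where the OSD lives):

    # current values on one OSD
    ceph daemon osd.0 config get osd_recovery_op_priority
    ceph daemon osd.0 config get osd_client_op_priority
    # reduce the recovery impact cluster-wide at runtime
    ceph tell osd.* injectargs '--osd_recovery_op_priority 1 --osd_max_backfills 1 --osd_recovery_max_active 1'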

[ceph-users] [Infernalis] radosgw x-storage-URL missing account-name

2016-06-10 Thread Ioannis Androulidakis
Hey, I've been looking into the way radosgw handles authentication in the Swift API. I have created an S3 user ('testuser') and 2 Swift subusers ('testuser:swiftuser1', 'testuser:swiftuser2'). Using curl I make an authentication request to radosgw for each swift subuser. In the response I get
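For anyone reproducing this, a sketch of the v1.0 auth call against radosgw; the host, port and secret are placeholders, and the default civetweb port and auth entry point are assumed:

    # obtain a swift secret for the subuser (prints the generated key)
    radosgw-admin key create --subuser=testuser:swiftuser1 --key-type=swift --gen-secret
    # authenticate; the headers of interest come back as X-Storage-Url and X-Auth-Token
    curl -i http://rgw.example.com:7480/auth/v1.0 \
        -H "X-Auth-User: testuser:swiftuser1" \
        -H "X-Auth-Key: SWIFT_SECRET_FROM_ABOVE"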

Re: [ceph-users] RDMA/Infiniband status

2016-06-10 Thread Corey Kovacs
Infiniband uses multicast internally. It's not something you have a choice with. You won't see it on the local interface any more than you'd see individual drives of a raid 5. I believe it's one of the reasons the connection setup speeds are kept under the requisite 1.2usec limits etc. On Jun 10

Re: [ceph-users] RDMA/Infiniband status

2016-06-10 Thread Christian Balzer
Hello, What I took from the longish thread on the OFED ML was that certain things (and more than you'd think) with IPoIB happen over multicast, but not ALL of them. For the record, my bog-standard QDR IPoIB clusters can do anywhere from 14 to 21Gb/s with iperf3 and about 20-30% less with NPtcp (netp
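For comparison, the kind of measurement referred to above can be reproduced with iperf3 over the IPoIB interfaces; host name and stream count are illustrative:

    # on the receiving node
    iperf3 -s
    # on the sending node: 4 parallel streams for 30 seconds
    iperf3 -c node-b.ib -P 4 -t 30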

[ceph-users] How to debug hung on dead OSD?

2016-06-10 Thread George Shuklin
Hello. I'm doing a small experimental setup. I have two hosts with a few OSDs; one OSD has been put down intentionally, but even though the second (alive) OSD on the other host is still up, I see that all IO (rbd, and even rados get) hangs for a long time (more than 30 minutes already). My configuration: -9 2
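The usual first round of information to collect for a hang like this (generic commands, nothing specific to this cluster):

    ceph -s                      # overall health, blocked requests
    ceph health detail           # which PGs are degraded/undersized/stuck
    ceph osd tree                # which OSDs are down/out and where they live
    ceph pg dump_stuck inactive  # PGs that cannot serve IO at all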

Re: [ceph-users] How to debug hung on dead OSD?

2016-06-10 Thread Christian Balzer
On Fri, 10 Jun 2016 16:51:07 +0300 George Shuklin wrote: > Hello. > > I'm doing small experimental setup. That's likely your problem. Aside from my response below, really small clusters can wind up in spots where CRUSH (or at least certain versions of it) can't place things correctly. > I have
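One concrete thing worth checking in a two-host test setup is the pool's size/min_size; a sketch, assuming the stock rbd pool name:

    ceph osd pool get rbd size
    ceph osd pool get rbd min_size
    # with size=2 and min_size=2, losing one OSD blocks IO until it returns;
    # for a throw-away test cluster min_size can be dropped to 1
    ceph osd pool set rbd min_size 1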

[ceph-users] Changing the fsid of a ceph cluster

2016-06-10 Thread Vincenzo Pii
I have changed the fsid of a ceph cluster by redeploying it with ceph-ansible (the change was intentional, the cluster is new and empty). After the change, I had to restart all the OSDs (with start ceph-osd id=x on each node). Now the cluster seems to work, but I have two issues: 1. The ID repor
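The fsid is recorded in more than one place, so after a change like this it is worth confirming that they all agree; a sketch using the stock file locations:

    ceph fsid                                # what the monitors report
    grep fsid /etc/ceph/ceph.conf            # what clients and daemons are told
    cat /var/lib/ceph/osd/ceph-*/ceph_fsid   # what each OSD data dir was created with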

[ceph-users] librados and multithreading

2016-06-10 Thread Юрий Соколов
Good day, all. I found this issue: https://github.com/ceph/ceph/pull/5991 Did this issue affect librados? Was it safe to use a single rados_ioctx_t from multiple threads before this fix? -- With regards, Sokolov Yura aka funny_falcon ___ ceph-users

[ceph-users] rgw pool names

2016-06-10 Thread Deneau, Tom
When I start radosgw, I create the pool .rgw.buckets manually to control whether it is replicated or erasure coded and I let the other pools be created automatically. However, I have noticed that sometimes the pools get created with the "default" prefix, thus rados lspools shows: .rgw.root default.rg
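With the Jewel naming, the bucket data pool of the default zone is default.rgw.buckets.data; a sketch of pre-creating it as erasure coded and checking which pool names the zone will actually use (PG counts and zone name are the defaults, adjust as needed):

    # create the bucket data pool as EC before radosgw auto-creates it replicated
    ceph osd pool create default.rgw.buckets.data 64 64 erasure
    # confirm the pool names the zone configuration points at
    radosgw-admin zone get --rgw-zone=default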

[ceph-users] which CentOS 7 kernel is compatible with jewel?

2016-06-10 Thread Michael Kuriger
Hi Everyone, I’ve been running jewel for a while now, with tunables set to hammer. However, I want to test the new features but cannot find a fully compatible Kernel for CentOS 7. I’ve tried a few of the elrepo kernels - elrepo-kernel 4.6 works perfectly in CentOS 6, but not CentOS 7. I’ve tr
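Two things that commonly bite distro kernels talking to a Jewel cluster are the CRUSH tunables profile and the default RBD image features; both can be inspected, and relaxed if one decides to stay on the stock kernel. A generic sketch (image name is illustrative):

    # see which tunables the cluster currently requires
    ceph osd crush show-tunables
    # keep the cluster at a profile that older kernel clients understand
    ceph osd crush tunables hammer
    # Jewel's default image features are too new for many distro kernels;
    # disable the extras on an existing image
    rbd feature disable rbd/testimg exclusive-lock object-map fast-diff deep-flatten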

Re: [ceph-users] rgw pool names

2016-06-10 Thread Yehuda Sadeh-Weinraub
On Fri, Jun 10, 2016 at 11:44 AM, Deneau, Tom wrote: > When I start radosgw, I create the pool .rgw.buckets manually to control > whether it is replicated or erasure coded and I let the other pools be > created automatically. > > However, I have noticed that sometimes the pools get created with th

Re: [ceph-users] rgw pool names

2016-06-10 Thread Deneau, Tom
Ah that makes sense. The places where it was not adding the "default" prefix were all pre-jewel. -- Tom > -Original Message- > From: Yehuda Sadeh-Weinraub [mailto:yeh...@redhat.com] > Sent: Friday, June 10, 2016 2:36 PM > To: Deneau, Tom > Cc: ceph-users > Subject: Re: [ceph-users] rg

[ceph-users] Help recovering failed cluster

2016-06-10 Thread John Blackwood
We're looking for some assistance recovering data from a failed ceph cluster; or some help determining if it is even possible to recover any data. Background: - We were using Ceph with Proxmox following the instructions Proxmox provides (https://pve.proxmox.com/wiki/Ceph_Server

Re: [ceph-users] Help recovering failed cluster

2016-06-10 Thread John Blackwood
Had a little bit of help in IRC; I was asked to attach the OSD tree, health detail and crush map. The PG dump is included at the link below - too big to attach directly. https://drive.google.com/open?id=0B3Dsc6YwKik_T0NPZm1oYmdLT0k -- JOHN BLACKWOOD P: 905 444 9166 F: 905 668 8778 Chief Technic
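For anyone following along, the requested artifacts can be gathered like this (output file names are arbitrary):

    ceph osd tree      > osd_tree.txt
    ceph health detail > health_detail.txt
    ceph pg dump       > pg_dump.txt
    # decompile the CRUSH map into readable form
    ceph osd getcrushmap -o crushmap.bin
    crushtool -d crushmap.bin -o crushmap.txt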