Re: [ceph-users] Openstack keystone with Radosgw

2014-10-10 Thread Mark Kirkwood
Right, well I suggest changing it back, adding debug rgw = 20 in the [client.radosgw...] section of ceph.conf, and capturing the resulting log when you try 'swift stat'. It might reveal the next thing to check. Regards Mark On 11/10/14 16:02, lakshmi k s wrote: Hello Mark - I tried that
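For reference, a minimal sketch of the logging change being suggested (the instance name client.radosgw.gateway and the log path are assumptions; use whatever your gateway section is actually called):
    [client.radosgw.gateway]
    # assumed instance name - match your existing [client.radosgw.*] section
    debug rgw = 20
    log file = /var/log/ceph/radosgw.log
Restart the radosgw process, run 'swift stat' again, and look at the resulting log.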

Re: [ceph-users] Openstack keystone with Radosgw

2014-10-10 Thread lakshmi k s
Hello Mark - I tried that as well, but in vain. In fact, that is how I created the endpoint to begin with. Since that didn't work, I followed the OpenStack standard, which was to include %tenant-id. -Lakshmi. On Friday, October 10, 2014 6:49 PM, Mark Kirkwood wrote: Hi, I think your swift

Re: [ceph-users] Openstack keystone with Radosgw

2014-10-10 Thread Mark Kirkwood
Hi, I think your swift endpoint: | 2ccd8523954c4491b08b648cfd42ae6c | regionOne | http://gateway.ex.com/swift/v1/AUTH_%(tenant_id)s | http://gateway.ex.com/swift/v1/AUTH_%(tenant_id)s | http://gateway.ex.com/swift/v1 | 77434bc194a3495793b5b4c943248e16 | is the issue. It should be: | 2c
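A sketch of how that endpoint could be recreated without the AUTH_%(tenant_id)s suffix, using the keystone CLI of that era (the service id is a placeholder; the region and URLs are taken from the listing above):
    keystone endpoint-delete 2ccd8523954c4491b08b648cfd42ae6c
    keystone endpoint-create --region regionOne \
        --service-id <swift-service-id> \
        --publicurl http://gateway.ex.com/swift/v1 \
        --internalurl http://gateway.ex.com/swift/v1 \
        --adminurl http://gateway.ex.com/swift/v1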

Re: [ceph-users] Openstack keystone with Radosgw

2014-10-10 Thread lakshmi k s
With the latest HA build, I found keystone_modwsgi.conf in /etc/apache2/sites-available and added the chunking directive as below. We have many controller nodes, but a single virtual IP - 192.0.2.21 - for which keystone is configured. I have verified the keystone setup by executing other services like nova list, c
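The chunking change referred to is presumably along these lines inside the keystone VirtualHosts (a sketch only; the rest of the vhost config stays as shipped, and the listen ports are the usual keystone defaults):
    # /etc/apache2/sites-available/keystone_modwsgi.conf (assumed layout)
    <VirtualHost *:5000>
        WSGIChunkedRequest On
        # ... existing keystone public WSGI configuration unchanged ...
    </VirtualHost>
    <VirtualHost *:35357>
        WSGIChunkedRequest On
        # ... existing keystone admin WSGI configuration unchanged ...
    </VirtualHost>
followed by an apache reload (e.g. service apache2 reload).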

[ceph-users] Re: Re: scrub error with keyvalue backend

2014-10-10 Thread 廖建锋
I like the keyvalue backend very much because it gives good performance. My request is simple: keep it running. Now I have another bug, which was fixed in 0.85: 2014-10-11 08:42:01.165836 7f8e3abb2700 1 heartbeat_map is_healthy 'KeyValueStore::op_tp thread 0x7f8e644a1700' had timed out after 60 2014-10-

Re: [ceph-users] rbd map vsmpool_hp1/rbd9 --id admin -->rbd: add failed: (5) Input/output error

2014-10-10 Thread Aquino, Ben O
Thank You Ilya! I have built a CentOS 6.5 kernel 3.16.3 using source linux-3.16.3.tar.gz from www.kernel.org [root@root ~]# uname -a Linux root 3.16.3 #1 SMP Fri Oct 10 06:48:44 PDT 2014 x86_64 x86_64 x86_64 GNU/Linux The kernel modules (including rbd.ko, hopefully…) are still compiling…. Since

[ceph-users] CephFS priorities (survey!)

2014-10-10 Thread Sage Weil
Hi everyone, In order to help us prioritize our efforts around CephFS, we'd very much appreciate it if anybody interested completed the survey below: https://www.surveymonkey.com/s/VWYVSZ8 It's a single rank-order list of things we could be working on. Any input you provide will be mos

[ceph-users] Micro Ceph summit during the OpenStack summit

2014-10-10 Thread Loic Dachary
Hi Ceph, TL;DR: please register at http://pad.ceph.com/p/kilo if you're attending the OpenStack summit. November 3 - 7 in Paris is the OpenStack summit https://www.openstack.org/summit/openstack-paris-summit-2014/, an opportunity to meet with Ceph developers and users. We will hav

[ceph-users] Firefly v0.80.6 issues 9696 and 9732

2014-10-10 Thread Samuel Just
We've gotten some reports of a couple of issues on v0.80.6: 1) #9696: mixed clusters (or upgrading clusters) with v0.80.6 and pre-firefly osds/mons can hit an assert in PG::choose_acting during backfill. The fix appears to be to remove the assert (wip-9696[-firefly]). 2) #9731: there is a bug in

Re: [ceph-users] mds isn't working anymore after osd's running full

2014-10-10 Thread Gregory Farnum
Ugh, "debug journaler", not "debug journaled." That said, the filer output tells me that you're missing an object out of the MDS log. (200.08f5) I think this issue should be resolved if you "dump" the journal to a file, "reset" it, and then "undump" it. (These are commands you can invoke from
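The exact commands are cut off above; on Giant and later releases the equivalent dump/reset/undump workflow with cephfs-journal-tool would look roughly like this (a sketch only, not taken from the thread - on Firefly the same operations are exposed differently):
    # dump (export) the MDS journal to a file
    cephfs-journal-tool journal export /root/mds-journal.bin
    # reset the journal
    cephfs-journal-tool journal reset
    # undump (import) it again if needed
    cephfs-journal-tool journal import /root/mds-journal.bin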

Re: [ceph-users] ceph at "Universite de Lorraine"

2014-10-10 Thread Serge van Ginderachter
On 10 October 2014 16:58, Stéphane DUGRAVOT < stephane.dugra...@univ-lorraine.fr> wrote: > We wonder the availability of professional support in our project > approach. We were happy to work with Wido Den Hollander https://www.42on.com/ ___ ceph-

Re: [ceph-users] Basic Ceph questions

2014-10-10 Thread John Spray
On Fri, Oct 10, 2014 at 1:19 AM, Marcus White wrote: > FUSE is probably for Ceph file system.. For avoidance of doubt: there are *two* fuse modules in ceph: * RBD: http://ceph.com/docs/master/man/8/rbd-fuse/ * CephFS: http://ceph.com/docs/master/man/8/ceph-fuse/ Cheers, John __
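Roughly, the two are mounted like this (mount points, pool name and monitor address are placeholders):
    # expose RBD images in pool "rbd" as files
    rbd-fuse -p rbd /mnt/rbd
    # mount the POSIX filesystem (CephFS)
    ceph-fuse -m mon1.example.com:6789 /mnt/cephfs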

Re: [ceph-users] rbd map vsmpool_hp1/rbd9 --id admin -->rbd: add failed: (5) Input/output error

2014-10-10 Thread Aquino, Ben O
Thanks Ilya! Will try building client hosts on Fedora 20 running kernel-3.16.3-200.fc20.x86_64.rpm. The client hosts will then use ceph 0.80.1 from http://ceph.com/rpm-firefly/fc20/x86_64/. Regards; _ben -Original Message- From: Ilya Dryomov [mailto:ilya.dryo...@inktank.com] Sent:

Re: [ceph-users] rbd map vsmpool_hp1/rbd9 --id admin -->rbd: add failed: (5) Input/output error

2014-10-10 Thread Ilya Dryomov
On Fri, Oct 10, 2014 at 9:22 PM, Aquino, Ben O wrote: > Thanks Ilya, > Will try to build client hosts with Centos7 running kernel 3.16. Make sure it's 3.16.3 or later. Thanks, Ilya ___ ceph-users mailing list ceph-users@lists.ceph.com

Re: [ceph-users] rbd map vsmpool_hp1/rbd9 --id admin -->rbd: add failed: (5) Input/output error

2014-10-10 Thread Aquino, Ben O
Thanks Ilya, Will try to build client hosts with Centos7 running kernel 3.16. _ben -Original Message- From: Ilya Dryomov [mailto:ilya.dryo...@inktank.com] Sent: Friday, October 10, 2014 9:34 AM To: Aquino, Ben O Cc: ceph-users@lists.ceph.com; Ferber, Dan; Barnes, Thomas J Subject: Re: [c

Re: [ceph-users] Regarding Primary affinity configuration

2014-10-10 Thread Johnu George (johnugeo)
Thanks for the detailed post, Greg. I was trying to configure primary affinity in my cluster but I didn't see the expected results. As you said, I was just looking at a single pg and got it wrong. I also had primary affinity values configured for multiple osds in a pg, which makes the calculation more comp
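For reference, a minimal sketch of configuring primary affinity on a Firefly-era cluster (osd id, weight and pg id are examples only):
    # allow the feature on the monitors first
    ceph tell mon.* injectargs '--mon_osd_allow_primary_affinity=true'
    # make osd.3 half as likely to be selected as primary
    ceph osd primary-affinity osd.3 0.5
    # check which osd ends up as acting primary for a given pg
    ceph pg map 2.1f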

Re: [ceph-users] Basic Ceph questions

2014-10-10 Thread Craig Lewis
> > > Just curious, what kind of applications use RBD? It can't be > applications which need high speed SAN storage performance > characteristics? > Most people seem to be using it as storage for OpenStack. I've heard about people using RBD + Heartbeat to make an HA NFS, while they wait for CephF
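The HA NFS idea boils down to mapping an RBD image on the active node and exporting it over NFS; very roughly (image name, size, filesystem and export path are all placeholders, and the Heartbeat/Pacemaker resource wiring is omitted):
    rbd create nfsvol --size 102400              # 100 GB image, name is an example
    rbd map rbd/nfsvol
    mkfs.xfs /dev/rbd/rbd/nfsvol
    mkdir -p /export/nfsvol
    mount /dev/rbd/rbd/nfsvol /export/nfsvol
    echo "/export/nfsvol *(rw,sync,no_root_squash)" >> /etc/exports
    exportfs -ra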

Re: [ceph-users] ceph at "Universite de Lorraine"

2014-10-10 Thread Lionel Bouton
On 10/10/2014 16:58, Stéphane DUGRAVOT wrote: > Hi all, > > We (French University) plan to implement a storage platform > (distributed of course) for a volume of 750 TB. We are interested in > CEPH ... > > We wonder the availability of professional support in our project > approach. Do you know a

Re: [ceph-users] rbd map vsmpool_hp1/rbd9 --id admin -->rbd: add failed: (5) Input/output error

2014-10-10 Thread Ilya Dryomov
On Fri, Oct 10, 2014 at 8:11 PM, Aquino, Ben O wrote: > Thank You Ilya! > > Here's the output of dmesg during command execution: > > rbd: loaded rbd (rados block device) > libceph: mon1 192.168.101.43:6789 feature set mismatch, my 4a042a42 < > server's 2404a042a42, missing 240 > libceph:
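The "feature set mismatch" means the cluster is using features (typically newer CRUSH tunables, cache tiering or erasure-coded pools) that the client kernel doesn't understand. Besides upgrading the client kernel, one possible reaction - a sketch only, and note that changing tunables triggers data movement - is:
    # see which tunables/features the cluster currently requires
    ceph osd crush show-tunables
    # if acceptable for the cluster, fall back to tunables older kernels understand
    ceph osd crush tunables legacy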

Re: [ceph-users] Openstack keystone with Radosgw

2014-10-10 Thread lakshmi k s
Mark, I am getting nowhere with this. I am going to try with the latest OpenStack build (a build internal to my company) that has HA support. I will keep you posted. On Thursday, October 9, 2014 10:46 PM, Mark Kirkwood wrote: Oh, I see. That complicates it a wee bit (looks back at your messages)

Re: [ceph-users] rbd map vsmpool_hp1/rbd9 --id admin -->rbd: add failed: (5) Input/output error

2014-10-10 Thread Ilya Dryomov
On Fri, Oct 10, 2014 at 8:33 PM, Ilya Dryomov wrote: > On Fri, Oct 10, 2014 at 8:11 PM, Aquino, Ben O wrote: >> Thank You Ilya! >> >> Here's the output of dmesg during command execution: >> >> rbd: loaded rbd (rados block device) >> libceph: mon1 192.168.101.43:6789 feature set mismatch, my 4a042

Re: [ceph-users] rbd map vsmpool_hp1/rbd9 --id admin -->rbd: add failed: (5) Input/output error

2014-10-10 Thread Aquino, Ben O
Thank You Ilya! Here's the output of dmesg during command execution: rbd: loaded rbd (rados block device) libceph: mon1 192.168.101.43:6789 feature set mismatch, my 4a042a42 < server's 2404a042a42, missing 240 libceph: mon1 192.168.101.43:6789 socket error on read libceph: mon2 192.168.1

Re: [ceph-users] ceph at "Universite de Lorraine"

2014-10-10 Thread Loic Dachary
Hi Stéphane, It all depends on your use case. Red Hat is able to provide the best support there is if your use case is unique and complex. If this is for test purposes and teaching distributed software it can probably be self managed by students. Could you describe what it is going to be used f

Re: [ceph-users] ceph at "Universite de Lorraine"

2014-10-10 Thread Alexandre DERUMIER
Hi Stéphane, Inktank also provides support through Ceph Enterprise, as well as early design help. http://www.inktank.com/enterprise/ >>Someone told me: "there is no need for professional support, just buy the >>equipment, install it and let it run ceph" I think it depends on whether you have time or h

[ceph-users] ceph at "Universite de Lorraine"

2014-10-10 Thread Stéphane DUGRAVOT
Hi all, We (French University) plan to implement a storage platform (distributed of course) for a volume of 750 TB. We are interested in CEPH ... We wonder the availability of professional support in our project approach. Do you know a professional integrator that could assist us for :

Re: [ceph-users] ceph-dis prepare : UUID=00000000-0000-0000-0000-000000000000

2014-10-10 Thread SCHAER Frederic
Hi Loic, Patched, and still not working (sorry)... I'm attaching the prepare output, and also a different, "real" udev debug output I captured using "udevadm monitor --environment" (udev.log file). I added a "sync" command in ceph-disk-udev (this did not change a thing), and I noticed that u

Re: [ceph-users] ceph-dis prepare : UUID=00000000-0000-0000-0000-000000000000

2014-10-10 Thread Loic Dachary
Hi Frederic, To be 100% sure, it would be great if you could manually patch your local ceph-disk script and change 'partprobe', into 'partx', '-a', in https://github.com/ceph/ceph/blob/v0.80.6/src/ceph-disk#L1284 then run ceph-disk zap and ceph-disk prepare, and hopefully it will show up as it should. It wor
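As a rough manual equivalent while testing (device name is an example; whether this fully works around the issue is exactly what the thread is trying to determine):
    ceph-disk zap /dev/sdc
    ceph-disk prepare /dev/sdc
    # ask the kernel to pick up the new partitions, as partx -a would do from the patched script
    partx -a /dev/sdc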

Re: [ceph-users] ceph-dis prepare : UUID=00000000-0000-0000-0000-000000000000

2014-10-10 Thread Loic Dachary
Hi Frederic, It looks like this is just because https://github.com/ceph/ceph/blob/v0.80.6/src/ceph-disk#L1284 should call partx instead of partprobe. The udev debug output makes this quite clear http://tracker.ceph.com/issues/9721 I think https://github.com/dachary/ceph/commit/8d914001420e5bf

Re: [ceph-users] ceph-dis prepare : UUID=00000000-0000-0000-0000-000000000000

2014-10-10 Thread Loic Dachary
Hi Frederic, Thanks for the additional information. On 10/10/2014 10:58, SCHAER Frederic wrote: > [root@ceph1 ~]# parted -s /dev/sdc mklabel gpt What happens if you don't do that ? ceph-disk should be able to handle a disk without this step. Other than this the preparation looks fine. > It's

Re: [ceph-users] Rados Gateway and Swift create containers/buckets that cannot be opened

2014-10-10 Thread M Ranga Swami Reddy
Yehuda - With this fix applied, I removed the "WSGIChunkedRequest On" from the configuration. It works fine for me without error. Thanks Swami On Thu, Oct 9, 2014 at 11:49 PM, Yehuda Sadeh wrote: > Here's the fix, let me know if you need any help with that. > > Thanks, > Yehuda > > diff --git

Re: [ceph-users] v0.86 released (Giant release candidate)

2014-10-10 Thread Wido den Hollander
On 10/10/2014 11:26 AM, Florian Haas wrote: > Hi Sage, > > On Tue, Oct 7, 2014 at 9:20 PM, Sage Weil wrote: >> This is a release candidate for Giant, which will hopefully be out in >> another week or two (s v0.86). We did a feature freeze about a month ago >> and since then have been doing only

Re: [ceph-users] v0.86 released (Giant release candidate)

2014-10-10 Thread Florian Haas
Hi Sage, On Tue, Oct 7, 2014 at 9:20 PM, Sage Weil wrote: > This is a release candidate for Giant, which will hopefully be out in > another week or two (s v0.86). We did a feature freeze about a month ago > and since then have been doing only stabilization and bug fixing (and a > handful of low-

Re: [ceph-users] ceph-dis prepare : UUID=00000000-0000-0000-0000-000000000000

2014-10-10 Thread SCHAER Frederic
-Original Message- From: Loic Dachary [mailto:l...@dachary.org] The failure: journal check: ondisk fsid 00000000-0000-0000-0000-000000000000 doesn't match expected 244973de-7472-421c-bb25-4b09d3f8d441, and the udev logs: DEBUG:ceph-disk:Journal /dev/sdc2 has OSD UUID --

Re: [ceph-users] 回复: scrub error with keyvalue backend

2014-10-10 Thread Haomai Wang
Hi, keyvaluestore is an experimental backend and isn't suitable for non-developers to use. If you want to try keyvaluestore, you need to compile the newest code or get the newest release packages. On Fri, Oct 10, 2014 at 4:09 PM, 廖建锋 wrote: > is there anybody who can help? > > > From: ceph-users
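For anyone who still wants to experiment with it, on Firefly-era packages the backend was selected in ceph.conf roughly like this (option name as documented for Firefly; treat it as an assumption for other versions, and never use it for data you care about):
    [osd]
    # experimental key/value backend (Firefly naming)
    osd objectstore = keyvaluestore-dev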

Re: [ceph-users] rbd map vsmpool_hp1/rbd9 --id admin -->rbd: add failed: (5) Input/output error

2014-10-10 Thread Ilya Dryomov
On Fri, Oct 10, 2014 at 12:48 AM, Aquino, Ben O wrote: > Hello Ceph Users: > > > > A bare-metal Ceph client attempting to map a device volume via the kernel RBD driver > is unable to map the volume and gets an I/O error. > > This is a Ceph client only, no MDS, OSD or MON running…see I/O error ou

[ceph-users] Re: scrub error with keyvalue backend

2014-10-10 Thread 廖建锋
Is there anybody who can help? From: ceph-users Sent: 2014-10-10 13:34 To: ceph-users Subject: [ceph-users] scrub error with keyvalue backend Dear ceph, # ceph -s cluster e1f18421-5d20-4c3e-83be-a74b77468d61 health HEALTH_ERR 4