Re: [ceph-users] Problems with OSDs (cuttlefish)

2013-06-06 Thread Alvaro Izquierdo Jimeno
Thanks for the answer! Álvaro. -Original Message- From: Sage Weil [mailto:s...@inktank.com] Sent: Thursday, 06 June 2013 17:24 To: Ilja Maslov CC: Alvaro Izquierdo Jimeno; John Wilkins; ceph-users@lists.ceph.com Subject: Re: [ceph-users] Problems with OSDs (cuttlefish) On Thu, 6 Ju

Re: [ceph-users] mount.ceph Cannot allocate memory

2013-06-06 Thread Gregory Farnum
[Please keep all replies on the list.] So you're doing all of these operations on the same server? We don't recommend using the kernel client on the same server as an OSD, but that is unlikely to be causing your issue here. Still, ENOMEM is most likely happening in your kernel, and probably indica
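A quick way to confirm that the allocation failure happened kernel-side (a sketch; the log path varies by distro):

  $ dmesg | tail -n 30                              # look for libceph / page allocation failure messages
  $ grep -i 'allocation failure' /var/log/kern.log  # log path is distro-dependent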

Re: [ceph-users] Issues with a fresh cluster and HEALTH_WARN

2013-06-06 Thread Joshua Mesilane
Well, I had a closer look at the logs and for some reason, while it listed the OSDs as being up and in to begin with, fairly shortly after I sent this email the two on one of the hosts went down. Turned out that the OSDs weren't mounted for some reason. After re-mounting and restarting the se
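A minimal pre-restart sanity check, assuming the default OSD data path (the device name is a placeholder):

  $ mount | grep /var/lib/ceph/osd              # every OSD data dir should be listed
  $ mount /dev/sdb1 /var/lib/ceph/osd/ceph-0    # remount any missing one (placeholder device)
  $ service ceph start osd.0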

Re: [ceph-users] Issues with a fresh cluster and HEALTH_WARN

2013-06-06 Thread Jeff Bailey
You need to fix your clocks (usually with ntp). According to the log message they can be off by up to 50ms, and yours seem to be about 85ms off. On 6/6/2013 8:40 PM, Joshua Mesilane wrote: > Hi, > > I'm currently evaluating ceph as a solution to some HA storage that > we're looking at. To test I have
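A sketch of checking and correcting the skew (the NTP server is a placeholder); the 50ms threshold is the mon_clock_drift_allowed default of 0.05:

  $ ceph health detail          # names the mons reporting clock skew
  $ ntpq -p                     # confirm each node is actually syncing
  $ ntpdate -u pool.ntp.org     # one-off correction (placeholder server)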

[ceph-users] Issues with a fresh cluster and HEALTH_WARN

2013-06-06 Thread Joshua Mesilane
Hi, I'm currently evaluating ceph as a solution for some HA storage that we're looking at. To test, I have 3 servers, each with two disks to be used for OSDs (journals on the same disk as the OSD). I've deployed the cluster with 3 mons (one on each server), 6 OSDs (2 on each server) and 3 MDS

Re: [ceph-users] v0.61.3 released

2013-06-06 Thread Gary Lowell
There was a mistake in the build script that caused the rpm signing to get skipped. That's been fixed and updated rpms have been pushed out. Cheers, Gary On Jun 6, 2013, at 4:18 PM, Joshua Mesilane wrote: > Hey, > > I'm getting RPM signing errors when trying to install this latest release: >
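To confirm the refreshed packages are signed before installing (package filename copied from the report below):

  $ yum clean metadata
  $ rpm -K libcephfs1-0.61.3-0.el6.x86_64.rpm   # should now report a valid gpg signature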

Re: [ceph-users] v0.61.3 released

2013-06-06 Thread Gary Lowell
Hi Josh - I'll check out the rpms and fix that if needed. Cheers, Gary On Jun 6, 2013, at 4:18 PM, Joshua Mesilane wrote: > Hey, > > I'm getting RPM signing errors when trying to install this latest release: > > Package libcephfs1-0.61.3-0.el6.x86_64.rpm is not signed > > Running CentOS 6.

Re: [ceph-users] v0.61.3 released

2013-06-06 Thread Joshua Mesilane
Hey, I'm getting RPM signing errors when trying to install this latest release: Package libcephfs1-0.61.3-0.el6.x86_64.rpm is not signed Running CentOS 6.4 Cheers, Josh On 06/07/2013 04:56 AM, Sage Weil wrote: This is a much-anticipated point release for the v0.61 Cuttlefish stable series. I

Re: [ceph-users] How many Pipe per Ceph OSD daemon will keep?

2013-06-06 Thread Gregory Farnum
On Thu, Jun 6, 2013 at 3:37 PM, Chen, Xiaoxi wrote: > But in ceph-users, Mark and some users are really discussing some supermicro > chassis that can have 24 spindles per 2U or 36/48 spindles per 4U > > even with 20 osds per node, the thread count will be more than 5000, and if we take > internal heartbeat

Re: [ceph-users] How many Pipe per Ceph OSD daemon will keep?

2013-06-06 Thread Chen, Xiaoxi
But in ceph-users, Mark and some users are really discussing some supermicro chassis that can have 24 spindles per 2U or 36/48 spindles per 4U. Even with 20 osds per node, the thread count will be more than 5000, and if we take internal heartbeat/replication pipes into account, it should be around 10K threa
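If a node really does approach 10K threads, the limits worth checking are the per-user task limit and the kernel pid space; a minimal sketch (the value below is illustrative, not a recommendation):

  $ ulimit -u                          # max tasks (threads count here) for the ceph user
  $ sysctl kernel.pid_max              # system-wide ceiling on thread/process IDs
  $ sysctl -w kernel.pid_max=131072    # illustrative bump for a dense OSD node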

Re: [ceph-users] mount.ceph Cannot allocate memory

2013-06-06 Thread Gregory Farnum
On Thu, Jun 6, 2013 at 1:19 PM, Timofey Koolin wrote: > ceph -v > ceph version 0.61.3 (92b1e398576d55df8e5888dd1a9545ed3fd99532) > > mount.ceph l6:/ /ceph -o name=admin,secret=... > mount error 12 = Cannot allocate memory > > I have a cluster with 1 mon, 2 osds, on an IPv6 network. > > rbd works fine. Ceph

[ceph-users] mount.ceph Cannot allocate memory

2013-06-06 Thread Timofey Koolin
ceph -v ceph version 0.61.3 (92b1e398576d55df8e5888dd1a9545ed3fd99532) mount.ceph l6:/ /ceph -o name=admin,secret=... mount error 12 = Cannot allocate memory I have a cluster with 1 mon, 2 osds, on an IPv6 network. rbd works fine. -- Blog: www.rekby.ru

[ceph-users] v0.61.3 released

2013-06-06 Thread Sage Weil
This is a much-anticipated point release for the v0.61 Cuttlefish stable series. It resolves a number of issues, primarily with monitor stability and leveldb trimming. All v0.61.x users are encouraged to upgrade. Upgrading from bobtail: * There is one known problem with mon upgrades from bobtai
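The usual point-release sequence on Debian/Ubuntu, sketched under the assumption that the ceph.com apt repo is already configured (restart monitors before OSDs):

  $ apt-get update && apt-get install ceph
  $ service ceph restart mon
  $ service ceph restart osd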

Re: [ceph-users] OSD recovery over probably corrupted data

2013-06-06 Thread Andrey Korolyov
Yep, it was so. The disks were mounted with nobarrier (a bad idea for XFS :) ), and in my case corruption happened despite the presence of a battery-backed cache. On Thu, Jun 6, 2013 at 10:44 PM, David Zafman wrote: > > It looks like the enclosure failure caused data corruption. Otherwise, your > OSD should
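For reference, a hedged sketch of verifying and keeping barriers enabled (device and mount point are placeholders; barriers are the XFS default):

  $ grep xfs /proc/mounts       # nobarrier in the options means barriers are off
  # /etc/fstab entry leaving barriers at the default:
  # /dev/sdb1  /var/lib/ceph/osd/ceph-0  xfs  rw,noatime  0 0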

Re: [ceph-users] OSD recovery over probably corrupted data

2013-06-06 Thread David Zafman
It looks like the enclosure failure caused data corruption. Otherwise, your OSD should have come back online as it would after a power failure. David Zafman Senior Developer http://www.inktank.com On May 26, 2013, at 9:09 AM, Andrey Korolyov wrote: > Hello, > > Today a large disk enclosur

Re: [ceph-users] two osd stack on peereng after start osd to recovery

2013-06-06 Thread Gregory Farnum
We don't have your logs (vger doesn't forward them). Can you describe the situation more completely in terms of what failures occurred and what steps you took? (Also, this should go on ceph-users. Adding that to the recipients list.) -Greg Software Engineer #42 @ http://inktank.com | http://ceph.co

Re: [ceph-users] Having trouble using CORS in radosgw

2013-06-06 Thread Yehuda Sadeh
I opened issue #5261, and pushed a fix (on top of next) to wip-5261. This will need to be cherry-picked into cuttlefish. I also created a basic test in our functional s3 tests (which will probably need to be extended more). Thanks, Yehuda On Thu, Jun 6, 2013 at 9:49 AM, Mike Bryant wrote: > Yes,

Re: [ceph-users] ceph repair details

2013-06-06 Thread David Zafman
Repair does the equivalent of a deep-scrub to find problems. This mostly involves reading object data/omap/xattrs to create checksums and compare them across all copies. When a discrepancy is identified, an arbitrary copy which did not have I/O errors is selected and used to re-write the other repli
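The corresponding commands, for reference (the pgid is a placeholder); since repair trusts the copy it picks, it is worth inspecting the inconsistency first:

  $ ceph pg deep-scrub 2.1f     # recompute and compare checksums across copies
  $ ceph pg repair 2.1f         # re-write inconsistent replicas from a chosen copy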

Re: [ceph-users] Having trouble using CORS in radosgw

2013-06-06 Thread Mike Bryant
Yes, that change lets me get and set cors policies as I would expect. Thanks, Mike On 6 June 2013 17:45, Yehuda Sadeh wrote: > Looking at it, it fails at a much more basic level than I expected. My > guess off the cuff is that the 'cors' sub-resource needs to be part of > the canonicalized header, wh

Re: [ceph-users] Having trouble using CORS in radosgw

2013-06-06 Thread Yehuda Sadeh
Looking at it, it fails at a much more basic level than I expected. My guess off the cuff is that the 'cors' sub-resource needs to be part of the canonicalized header, whereas we probably assume that it doesn't (it doesn't appear on the Amazon list of sub-resources in the S3 auth docs). Just for the sak
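A sketch of the v2 signing rule in question (bucket, date, and key are placeholders): the '?cors' sub-resource must be appended to the CanonicalizedResource, or the client and server signatures will not match:

  # StringToSign layout: VERB\nContent-MD5\nContent-Type\nDate\nCanonicalizedResource
  $ printf 'PUT\n\n\n%s\n/mybucket/?cors' "$DATE" \
      | openssl dgst -sha1 -hmac "$SECRET_KEY" -binary | base64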

Re: [ceph-users] How many Pipe per Ceph OSD daemon will keep?

2013-06-06 Thread Gregory Farnum
On Thu, Jun 6, 2013 at 12:25 AM, Chen, Xiaoxi wrote: > > Hi, > From the code, each pipe (which contains a TCP socket) will fork 2 > threads, a reader and a writer. We really do observe 100+ threads per OSD daemon > with 30 instances of rados bench as clients. > But this number seems a b

Re: [ceph-users] ceph-osd constantly crashing

2013-06-06 Thread Gregory Farnum
On Wed, Jun 5, 2013 at 11:35 PM, Artem Silenkov wrote: > Good day! > > Thank you, but it's not clear to me what the bottleneck is here: > > - hardware node - load average, disk IO > > - an underlying file system problem on the osd, or a bad disk > > - a ceph journal problem > > The ceph osd partition is a part of

Re: [ceph-users] Problems with OSDs (cuttlefish)

2013-06-06 Thread Dewan Shamsul Alam
Hi Sage, Thank you very much; this answers a lot of the questions I had. You and John have been very helpful. I will try again with my ceph setup. Best Regards, Dewan On Thu, Jun 6, 2013 at 9:23 PM, Sage Weil wrote: > On Thu, 6 Jun 2013, Ilja Maslov wrote: > > Hi, > > > > I do not think that ceph-de

Re: [ceph-users] Having trouble using CORS in radosgw

2013-06-06 Thread Mike Bryant
I did, and I do. (Well, having just tried it again under debug mode) http://pastebin.com/sRHWR6Rh On 6 June 2013 16:15, Yehuda Sadeh wrote: > I guess you run set_cors() with a config object? Do you have the rgw > logs for that operation? > > > On Thu, Jun 6, 2013 at 8:02 AM, Mike Bryant wrote: >
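For anyone reproducing this, a hedged way to capture similar logs (the client section name is an assumption; match it to the running radosgw instance):

  # in /etc/ceph/ceph.conf, under the gateway's client section:
  #   debug rgw = 20
  #   debug ms = 1
  $ service radosgw restart     # or however the gateway is managed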

Re: [ceph-users] Problems with OSDs (cuttlefish)

2013-06-06 Thread Sage Weil
On Thu, 6 Jun 2013, Ilja Maslov wrote: > Hi, > > I do not think that ceph-deploy osd prepare/deploy/create actually works > when run on a partition. It was returning successfully for me, but > wouldn't actually add any OSDs to the configuration and associate them > with a host. No errors, but

Re: [ceph-users] Having trouble using CORS in radosgw

2013-06-06 Thread Yehuda Sadeh
I guess you run set_cors() with a config object? Do you have the rgw logs for that operation? On Thu, Jun 6, 2013 at 8:02 AM, Mike Bryant wrote: > > No, I'm using the same user. > I have in fact tried it as close as possible to the actual creation, > to be sure I'm using the same credentials. >

Re: [ceph-users] Having trouble using CORS in radosgw

2013-06-06 Thread Mike Bryant
No, I'm using the same user. I have in fact tried it as close as possible to the actual creation, to be sure I'm using the same credentials, i.e. using boto: bucket = boto.create_bucket(...), followed by bucket.set_cors(). Mike On 6 June 2013 15:51, Yehuda Sadeh wrote: > Are you trying to set t

Re: [ceph-users] Having trouble using CORS in radosgw

2013-06-06 Thread Yehuda Sadeh
Are you trying to set the CORS header using a user other than the user who created the bucket? Yehuda On Wed, Jun 5, 2013 at 8:25 AM, Mike Bryant wrote: > Hi, > I'm having trouble setting a CORS policy on a bucket. > Using the boto python library, I can create a bucket and so on, but > when I tr

Re: [ceph-users] Block Storage thin provisioning with Ubuntu 12.04?

2013-06-06 Thread Damien Churchill
On 6 June 2013 15:02, Morgan KORCHIA wrote: > As far as I know, thin provisioning is not available in ubuntu 12.04 since > it does not include LVM2. Hi, Fairly sure it does. $ lvchange --version LVM version: 2.02.66(2) (2010-05-20) Library version: 1.02.48 (2010-05-20) Driver version:

[ceph-users] Block Storage thin provisioning with Ubuntu 12.04?

2013-06-06 Thread Morgan KORCHIA
Hi all, Sorry if the question has already been answered. We are thinking about using Ceph for our OpenStack implementation. As far as I know, thin provisioning is not available in Ubuntu 12.04 since it does not include LVM2. Does Ceph have any dependency on LVM, or is thin provisioning support
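For what it's worth, RBD images are thin-provisioned by Ceph itself, with no LVM involved; a minimal sketch (pool and image names are placeholders):

  $ rbd create mypool/myimage --size 102400   # 100 GB logical; blocks allocate only on write
  $ rbd info mypool/myimage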

Re: [ceph-users] Problems with OSDs (cuttlefish)

2013-06-06 Thread Alvaro Izquierdo Jimeno
Hi Same behavior with ceph version 0.61.2. But with ceph version 0.63-359-g02946e5, ceph-deploy osd prepare never finishes. Thanks, Álvaro. -Original Message- From: Ilja Maslov [mailto:ilja.mas...@openet.us] Sent: Thursday, 06 June 2013 14:43 To: Alvaro Izquierdo

Re: [ceph-users] Problems with OSDs (cuttlefish)

2013-06-06 Thread Ilja Maslov
Hi, I do not think that ceph-deploy osd prepare/deploy/create actually works when run on a partition. It was returning successfully for me, but wouldn't actually add any OSDs to the configuration and associate them with a host. No errors, but also no result; I had to revert to using mkceph
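For comparison, the ceph-deploy invocation under discussion, sketched with placeholder host and partition names; ceph osd tree shows whether the OSD actually registered under the host:

  $ ceph-deploy osd prepare host1:/dev/sdb1
  $ ceph-deploy osd activate host1:/dev/sdb1
  $ ceph osd tree               # the new osd should appear under host1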

[ceph-users] cannot use "dd" to initialize rbd

2013-06-06 Thread Shu, Xinxin
Hi all, I want to do some performance tests on kernel rbd, so I set up a ceph cluster with 4 hosts; each host has 20 OSDs, with OSD journals on a separate SSD partition. First I created 48 rbds and mapped them to six clients, 8 rbds per client, then I executed the following command
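A typical fill pattern for this kind of test, with placeholder names (oflag=direct keeps the page cache from skewing results):

  $ rbd map mypool/myimage                            # exposes the image as /dev/rbdN
  $ dd if=/dev/zero of=/dev/rbd0 bs=4M oflag=direct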

Re: [ceph-users] core dump: qemu-img info -f rbd

2013-06-06 Thread Oliver Francke
Hi, On 06/06/2013 08:12 AM, Jens Kristian Søgaard wrote: Hi, I got a core dump when executing: root@ceph-node1:~# qemu-img info -f rbd rbd:vm_disks/box1_disk1 Try leaving out "-f rbd" from the command - I have seen that make a difference before. ... or try -f raw. The same goes for the "-dri
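That is, the suggested variants (image spec copied from the report):

  $ qemu-img info rbd:vm_disks/box1_disk1           # let qemu probe the format
  $ qemu-img info -f raw rbd:vm_disks/box1_disk1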

[ceph-users] Ceph in OpenNebulaConf2013. Deadline for talk proposals

2013-06-06 Thread Jaime Melis
Hello everyone, We have just published an extended keynote line-up for the OpenNebula Conference 2013 (http://blog.opennebula.org/?p=4707) that includes experts from leading institutions using OpenNebula. The first ever OpenNebula Conference will be held on Sept. 24-26 in Berlin and is intended to

[ceph-users] How many Pipe per Ceph OSD daemon will keep?

2013-06-06 Thread Chen, Xiaoxi
Hi, From the code, each pipe (which contains a TCP socket) will fork 2 threads, a reader and a writer. We really do observe 100+ threads per OSD daemon with 30 instances of rados bench as clients. But this number seems a bit crazy: if I have a node with 40 disks, I will have 40 OSDs, we
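A way to check the actual number on a running node, plus the back-of-envelope math behind the concern (the 60-connection figure is an assumption for illustration):

  $ ps -o nlwp= -p $(pidof -s ceph-osd)   # thread count of one OSD daemon
  # 2 threads per pipe x ~60 open connections (clients, replication peers,
  # heartbeats) = ~120 messenger threads per OSD; 40 OSDs/node = ~4800 threads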