Re: [ceph-users] USB pendrive as boot disk

2013-11-06 Thread james
Why? Recovery is made from OSDs/SSDs, so why would Ceph be heavy on the OS disks? There is nothing useful to read from those disks during a recovery. See this thread: http://lists.ceph.com/pipermail/ceph-users-ceph.com/2013-October/005378.html

Re: [ceph-users] s3 user can't create bucket

2013-11-06 Thread Yehuda Sadeh
On Tue, Nov 5, 2013 at 11:28 PM, lixuehui wrote: > Hi all: > > I failed to create a bucket with the S3 API. The error is 403 'Access Denied'. In > fact, I've given the user write permission. > { "user_id": "lxh", > "display_name": "=lxh", > "email": "", > "suspended": 0, > "max_buckets": 1000, >
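A minimal sketch of how the permissions could be double-checked from the command line; the uid is taken from the quoted output above, while the s3cmd usage and bucket name are assumptions:

  # Inspect the radosgw user, its keys and any caps
  radosgw-admin user info --uid=lxh

  # Configure s3cmd with that user's access/secret key and the gateway host,
  # then try creating a bucket (bucket name is hypothetical)
  s3cmd --configure
  s3cmd mb s3://test-bucket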

Re: [ceph-users] Running on disks that lose their head

2013-11-06 Thread Sage Weil
On Wed, 6 Nov 2013, Loic Dachary wrote: > Hi Ceph, > > People from Western Digital suggested ways to better take advantage of > the disk error reporting. They gave two examples that struck my > imagination. First there are errors that look like the disk is dying ( > read / write failures ) but

Re: [ceph-users] stopped backfilling process

2013-11-06 Thread Dominik Mostowiec
I hope this will help. crush: https://www.dropbox.com/s/inrmq3t40om26vf/crush.txt ceph osd dump: https://www.dropbox.com/s/jsbt7iypyfnnbqm/ceph_osd_dump.txt -- Regards Dominik 2013/11/6 yy-nm : > On 2013/11/5 22:02, Dominik Mostowiec wrote: >> >> Hi, >> After removing an osd (ceph osd out X) from one

Re: [ceph-users] locking rbd device

2013-11-06 Thread Wolfgang Hennerbichler
On 08/26/2013 09:03 AM, Wolfgang Hennerbichler wrote: > hi list, > > I realize there's a command called "rbd lock" to lock an image. Can > libvirt use this to prevent virtual machines from being started > simultaneously on different virtualisation containers? Answer to myself, only 2 months late
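For reference, a rough sketch of the advisory lock commands being discussed; the pool, image and lock names are hypothetical, and since the locks are only advisory the management layer still has to check them before starting a guest:

  # Take an advisory lock on an image before starting the VM that uses it
  rbd lock add rbd/vm-disk-01 host-a-lock

  # Show who currently holds locks on the image
  rbd lock list rbd/vm-disk-01

  # Release the lock; the locker id comes from the "rbd lock list" output
  rbd lock remove rbd/vm-disk-01 host-a-lock client.4123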

[ceph-users] radosgw questions

2013-11-06 Thread Alessandro Brega
Good day ceph users, I'm new to Ceph but the installation went well so far. Now I have a lot of questions regarding radosgw. Hope you don't mind... 1. To build high-performance yet cheap radosgw storage, which pools should be placed on SSD-backed pools and which on HDD-backed pools? Upon installation of radosgw
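As a hedged sketch of the pool placement being asked about: assuming a CRUSH ruleset for SSD-backed OSDs already exists (the ruleset id 3 below is hypothetical, as are the pool names, so check them with ceph osd lspools), the small, hot radosgw pools such as the bucket index could be pinned to it while bulk object data stays on HDDs:

  # Pin the bucket index pool to the (hypothetical) SSD ruleset
  ceph osd pool set .rgw.buckets.index crush_ruleset 3

  # Leave the bulk data pool on the default HDD-backed ruleset
  ceph osd pool get .rgw.buckets crush_ruleset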

Re: [ceph-users] Puppet Modules for Ceph

2013-11-06 Thread Karan Singh
Dear Cephers, I have a running Ceph cluster that was deployed using ceph-deploy; our next objective is to build a Puppet setup that can be used for long-term scaling of the Ceph infrastructure. It would be a great help if anyone can 1) provide Ceph modules for CentOS, 2) give guidance on how to

[ceph-users] Disk Density Considerations

2013-11-06 Thread Darren Birkett
Hi, I understand from various reading and research that there are a number of things to consider when deciding how many disks one wants to put into a single chassis: 1. Higher density means higher failure domain (more data to re-replicate if you lose a node) 2. More disks means more CPU/memory ho

Re: [ceph-users] stopped backfilling process

2013-11-06 Thread Bohdan Sydor
On Tue, Nov 5, 2013 at 3:02 PM, Dominik Mostowiec wrote: > After removing an osd (ceph osd out X) from one server (11 osds), ceph > starts the data migration process. > It stopped at: > 32424 pgs: 30635 active+clean, 191 active+remapped, 1596 > active+degraded, 2 active+clean+scrubbing; > degraded (1.718%
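A minimal sketch of the commands typically used to dig into a stalled recovery like this; the osd id is hypothetical, and whether reweighting is appropriate depends on the CRUSH map posted above:

  # Show which PGs are stuck and why
  ceph health detail
  ceph pg dump_stuck unclean

  # An OSD marked "out" can still appear in the CRUSH map with a non-zero
  # weight; dropping it to 0 (or removing it) lets degraded PGs remap
  ceph osd crush reweight osd.11 0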

[ceph-users] deployment architecture practices / new ideas?

2013-11-06 Thread Gautam Saxena
We're looking to deploy Ceph on about 8 Dell servers to start, each of which typically contains 6 to 8 hard disks with PERC RAID controllers that support write-back cache (~512 MB usually). Most machines have between 32 and 128 GB of RAM. Our questions are as follows. Please feel free to comment on eve

Re: [ceph-users] Disk Density Considerations

2013-11-06 Thread Andrey Korolyov
On Wed, Nov 6, 2013 at 4:15 PM, Darren Birkett wrote: > Hi, > > I understand from various reading and research that there are a number of > things to consider when deciding how many disks one wants to put into a > single chassis: > > 1. Higher density means higher failure domain (more data to re-r

Re: [ceph-users] Running on disks that lose their head

2013-11-06 Thread james
On 2013-11-06 09:33, Sage Weil wrote: This makes me think we really need to build or integrate with some generic SMART reporting infrastructure so that we can identify disks that are failing or are going to fail. It could be of use especially for SSD devices used for journals. Unfortunately ther
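For context, the per-drive data such an infrastructure would collect is already exposed by smartmontools; a minimal sketch, with a hypothetical device path:

  # One-shot health verdict and the full SMART attribute dump
  smartctl -H /dev/sdb
  smartctl -a /dev/sdb

  # Kick off a short self-test and read the result afterwards
  smartctl -t short /dev/sdb
  smartctl -l selftest /dev/sdb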

Re: [ceph-users] Disk Density Considerations

2013-11-06 Thread Mark Nelson
On 11/06/2013 06:15 AM, Darren Birkett wrote: Hi, I understand from various reading and research that there are a number of things to consider when deciding how many disks one wants to put into a single chassis: 1. Higher density means higher failure domain (more data to re-replicate if you los

Re: [ceph-users] Disk Density Considerations

2013-11-06 Thread Darren Birkett
On 6 November 2013 14:08, Andrey Korolyov wrote: > > We are looking at building high density nodes for small scale 'starter' > > deployments for our customers (maybe 4 or 5 nodes). High density in this > > case could mean a 2u chassis with 2x external 45 disk JBOD containers > > attached. That'

Re: [ceph-users] Running on disks that lose their head

2013-11-06 Thread Mark Nelson
On 11/06/2013 03:33 AM, Sage Weil wrote: On Wed, 6 Nov 2013, Loic Dachary wrote: Hi Ceph, People from Western Digital suggested ways to better take advantage of the disk error reporting. They gave two examples that struck my imagination. First there are errors that look like the disk is dying (

Re: [ceph-users] Disk Density Considerations

2013-11-06 Thread Andrey Korolyov
On Wed, Nov 6, 2013 at 6:42 PM, Darren Birkett wrote: > > On 6 November 2013 14:08, Andrey Korolyov wrote: >> >> > We are looking at building high density nodes for small scale 'starter' >> > deployments for our customers (maybe 4 or 5 nodes). High density in >> > this >> > case could mean a 2u

Re: [ceph-users] USB pendrive as boot disk

2013-11-06 Thread Carl-Johan Schenström
> See this thread: > http://lists.ceph.com/pipermail/ceph-users-ceph.com/2013-October/005378.html I can't find anything about Ceph being heavy on OS disks in that thread, only that one shouldn't combine OS and journal on the same disk, since *journals* are heavy on the disks and that might slow

Re: [ceph-users] Disk Density Considerations

2013-11-06 Thread Dimitri Maziuk
On 2013-11-06 08:37, Mark Nelson wrote: ... Taking this even further, options like the hadoop fat twin nodes with 12 drives in 1U potentially could be even denser, while spreading the drives out over even more nodes. Now instead of 4-5 large dense nodes you have maybe 35-40 small dense nodes. T

Re: [ceph-users] Running on disks that lose their head

2013-11-06 Thread Loic Dachary
An anonymous kernel developer sends this link: http://en.wikipedia.org/wiki/Error_recovery_control On 06/11/2013 08:32, Loic Dachary wrote: > Hi Ceph, > > People from Western Digital suggested ways to better take advantage of the > disk error reporting. They gave two examples that struck my im

Re: [ceph-users] Puppet Modules for Ceph

2013-11-06 Thread Don Talton (dotalton)
This will work: https://github.com/dontalton/puppet-cephdeploy Just change the unless statements (there should only be two) from testing dpkg to testing rpm instead. I'll add an OS check myself, or you can fork and send me a pull request.
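The change being described is just a swap of the shell test used by the Puppet unless parameter; a hedged sketch of the two checks, with an illustrative package name:

  # Debian/Ubuntu: exits 0 if the package is installed
  dpkg -s ceph >/dev/null 2>&1

  # RHEL/CentOS equivalent
  rpm -q ceph >/dev/null 2>&1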

Re: [ceph-users] Running on disks that lose their head

2013-11-06 Thread Loic Dachary
> Putting my sysadmin hat on: > > Once I know a drive has had a head failure, do I trust that the rest of the > drive isn't going to go at an inconvenient moment vs just fixing it right now > when it's not 3AM on Christmas morning? (true story) As good as Ceph is, do > I trust that Ceph is s

Re: [ceph-users] Disk Density Considerations

2013-11-06 Thread Mark Nelson
On 11/06/2013 09:36 AM, Dimitri Maziuk wrote: On 2013-11-06 08:37, Mark Nelson wrote: ... Taking this even further, options like the hadoop fat twin nodes with 12 drives in 1U potentially could be even denser, while spreading the drives out over even more nodes. Now instead of 4-5 large dense n

[ceph-users] Ceph User Committee

2013-11-06 Thread Loic Dachary
Hi Ceph, I would like to open a discussion about organizing a Ceph User Committee. We briefly discussed the idea with Ross Turk, Patrick McGarry and Sage Weil today during the OpenStack summit. A pad was created and roughly summarizes the idea: http://pad.ceph.com/p/user-committee If there is

Re: [ceph-users] ceph cluster performance

2013-11-06 Thread Dinu Vlad
I'm using the latest 3.8.0 branch from raring. Is there a more recent/better kernel recommended? Meanwhile, I think I might have identified the culprit - my SSD drives are extremely slow on sync writes, doing 500-600 IOPS max with a 4k block size. By comparison, an Intel 530 in another server (also
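A quick sketch of how that sync-write figure might be reproduced with fio; the device path and runtime are hypothetical, and the test overwrites data on the target:

  # 4k synchronous writes at queue depth 1, roughly what an osd journal does;
  # watch the reported IOPS
  fio --name=journal-test --filename=/dev/sdX --rw=write --bs=4k \
      --direct=1 --sync=1 --iodepth=1 --runtime=30 --time_based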

[ceph-users] ceph 0.72 with zfs

2013-11-06 Thread Dinu Vlad
Hello, I'm testing the 0.72 release and thought I'd give the zfs support a spin. While I managed to set up a cluster on top of a number of zfs datasets, the ceph-osd logs show it's using the "genericfilestorebackend": 2013-11-06 09:27:59.386392 7fdfee0ab7c0 0 genericfilestorebackend(/var/l

Re: [ceph-users] ceph cluster performance

2013-11-06 Thread Mark Nelson
On 11/06/2013 11:39 AM, Dinu Vlad wrote: I'm using the latest 3.8.0 branch from raring. Is there a more recent/better kernel recommended? I've been using the 3.8 kernel in the precise repo effectively, so I suspect it should be ok. Meanwhile, I think I might have identified the culprit -

Re: [ceph-users] Ceph User Committee

2013-11-06 Thread Loic Dachary
On 07/11/2013 01:53, ja...@peacon.co.uk wrote: > It's a great idea... are there any requirements, to be considered? Being a Ceph user seems to be the only requirement to me. Do you have something else in mind? Cheers > > On 2013-11-06 17:35, Loic Dachary wrote: >> Hi Ceph, >> >> I would lik

Re: [ceph-users] Ceph User Committee

2013-11-06 Thread Lincoln Bryant
Seems interesting to me. I've added my name to the pot :) --Lincoln On Nov 6, 2013, at 11:56 AM, Loic Dachary wrote: > > > On 07/11/2013 01:53, ja...@peacon.co.uk wrote: >> It's a great idea... are there any requirements, to be considered? > > Being a Ceph user seems to be the only requiremen

Re: [ceph-users] Ceph User Committee

2013-11-06 Thread Mike Dawson
I also have time I could spend. Thanks for getting this started Loic! Thanks, Mike Dawson On 11/6/2013 12:35 PM, Loic Dachary wrote: Hi Ceph, I would like to open a discussion about organizing a Ceph User Committee. We briefly discussed the idea with Ross Turk, Patrick McGarry and Sage Weil

Re: [ceph-users] ceph cluster performance

2013-11-06 Thread Mike Dawson
We just fixed a performance issue on our cluster related to spikes of high latency on some of our SSDs used for osd journals. In our case, the slow SSDs showed spikes of 100x higher latency than expected. What SSDs were you using that were so slow? Cheers, Mike On 11/6/2013 12:39 PM, Dinu Vla

Re: [ceph-users] Puppet Modules for Ceph

2013-11-06 Thread Karan Singh
A big thanks, Don, for creating the Puppet modules. I need your guidance on: 1) Did you manage to run this on CentOS? 2) What can be installed using these modules (mon, osd, mds, or all)? 3) What do I need to change in this module? Many Thanks Karan Singh

Re: [ceph-users] ceph cluster performance

2013-11-06 Thread Dinu Vlad
ST240FN0021 connected via a SAS2x36 to an LSI 9207-8i. By "fixed" - do you mean you replaced the SSDs? Thanks, Dinu On Nov 6, 2013, at 10:25 PM, Mike Dawson wrote: > We just fixed a performance issue on our cluster related to spikes of high > latency on some of our SSDs used for osd journals. In o

Re: [ceph-users] ceph cluster performance

2013-11-06 Thread Mike Dawson
No, in our case flashing the firmware to the latest release cured the problem. If you build a new cluster with the slow SSDs, I'd be interested in the results of ioping[0] or fsync-tester[1]. I theorize that you may see spikes of high latency. [0] https://code.google.com/p/ioping/ [1] https:
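A minimal sketch of running the two tools mentioned; the mount point is hypothetical, and the fsync-tester build step is an assumption about the single-file C source linked above:

  # Per-request latency against the filesystem holding the journal
  ioping -c 20 /var/lib/ceph/osd/journal-ssd

  # Build and run fsync-tester on the same mount
  gcc -o fsync-tester fsync-tester.c
  ./fsync-tester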

[ceph-users] Manual Installation steps without ceph-deploy

2013-11-06 Thread Trivedi, Narendra
Hi All, I did a fresh install of Ceph (this might be my 10th or 11th install) on 4 new VMs (one admin, one MON and two OSDs) built from the CentOS 6.4 (x64) .iso, and did a yum update on all of them. They are all running on VMware ESXi 5.1.0. I did everything Sage et al. suggested (i.e. creation of /
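For anyone following the same path, the manual monitor bootstrap from the docs boils down to roughly the following; the hostname, IP and paths are hypothetical placeholders, and the fsid must match the one in ceph.conf:

  # Keyrings for the monitor and for client.admin
  ceph-authtool --create-keyring /tmp/ceph.mon.keyring --gen-key -n mon. --cap mon 'allow *'
  ceph-authtool --create-keyring /etc/ceph/ceph.client.admin.keyring \
      --gen-key -n client.admin --cap mon 'allow *' --cap osd 'allow *' --cap mds 'allow'
  ceph-authtool /tmp/ceph.mon.keyring --import-keyring /etc/ceph/ceph.client.admin.keyring

  # Initial monitor map and monitor data directory
  monmaptool --create --add mon1 192.168.1.10 --fsid <fsid-from-ceph.conf> /tmp/monmap
  ceph-mon --mkfs -i mon1 --monmap /tmp/monmap --keyring /tmp/ceph.mon.keyring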

Re: [ceph-users] ceph cluster performance

2013-11-06 Thread james
On 2013-11-06 20:25, Mike Dawson wrote: We just fixed a performance issue on our cluster related to spikes of high latency on some of our SSDs used for osd journals. In our case, the slow SSDs showed spikes of 100x higher latency than expected. Many SSDs show this behaviour when 100% prov

Re: [ceph-users] ceph cluster performance

2013-11-06 Thread Mark Nelson
On 11/06/2013 03:35 PM, ja...@peacon.co.uk wrote: On 2013-11-06 20:25, Mike Dawson wrote: We just fixed a performance issue on our cluster related to spikes of high latency on some of our SSDs used for osd journals. In our case, the slow SSDs showed spikes of 100x higher latency than expecte

Re: [ceph-users] USB pendrive as boot disk

2013-11-06 Thread Craig Lewis
I've done this for some NFS machines (the ones I'm currently migrating to Ceph). It works... but I'm moving back to small SSDs for the OS. I used a pair of USB thumbdrives, in a RAID1. It worked fine for about a year. Then I lost both mirrors in multiple machines, all within an hour. I th

Re: [ceph-users] Manual Installation steps without ceph-deploy

2013-11-06 Thread james
I also had some difficulty with ceph-deploy on CentOS. I eventually moved to Ubuntu 13.04 - and haven't looked back. On 2013-11-06 21:35, Trivedi, Narendra wrote: Hi All, I did a fresh install of Ceph (this might be like 10th or 11th install) on 4 new VMs (one admin, one MON and two OSDs) bui

Re: [ceph-users] Manual Installation steps without ceph-deploy

2013-11-06 Thread Trivedi, Narendra
Unfortunately, I don't have that luxury. Thanks! Narendra

Re: [ceph-users] USB pendrive as boot disk

2013-11-06 Thread Gandalf Corvotempesta
On 06/Nov/2013 23:12, "Craig Lewis" wrote: > > For my Ceph cluster, I'm going back to SSDs for the OS. Instead of using two of my precious 3.5" bays, I'm buying some PCI 2.5" drive bays: http://www.amazon.com/Syba-Mount-Mobile-2-5-Inch-SY-MRA25023/dp/B0080V73RE, and plugging them into the mot

Re: [ceph-users] Kernel Panic / RBD Instability

2013-11-06 Thread Mikaël Cluseau
Hello, if you use kernel RBD, maybe your issue is linked to this one: http://tracker.ceph.com/issues/5760 Best regards, Mikael.

Re: [ceph-users] USB pendrive as boot disk

2013-11-06 Thread Craig Lewis
On 11/6/13 15:41, Gandalf Corvotempesta wrote: With the suggested adapter, why not use a standard 2.5'' SATA disk? SATA for the OS should be enough, no need for an SSD. At the time, the smallest SSDs were about half the price of the smallest HDDs. My Ceph nodes are only using ~4GB on /, so smal

Re: [ceph-users] USB pendrive as boot disk

2013-11-06 Thread Mark Kirkwood
On 07/11/13 13:54, Craig Lewis wrote: On 11/6/13 15:41, Gandalf Corvotempesta wrote: With the suggested adapter, why not use a standard 2.5'' SATA disk? SATA for the OS should be enough, no need for an SSD. At the time, the smallest SSDs were about half the price of the smallest HDDs. My Ceph

Re: [ceph-users] Ceph User Committee

2013-11-06 Thread Alek Paunov
On 06.11.2013 19:35, Loic Dachary wrote: Hi Ceph, I would like to open a discussion about organizing a Ceph User Committee. We briefly discussed the idea with Ross Turk, Patrick McGarry and Sage Weil today during the OpenStack summit. A pad was created and roughly summarizes the idea: What do

[ceph-users] Error: Package: 1:python-flask-0.9-5.el6.noarch (epel), Requires: python-sphinx

2013-11-06 Thread Eyal Gutkind
Trying to install Ceph on my machines. Using RHEL 6.3, I get the following error while invoking ceph-deploy. I tried to install sphinx on the ceph node, and it seems to have installed successfully. Still, it seems that during the installation there is an unresolved dependency. [apollo006][INFO ] Running comm
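A hedged guess at the usual workaround on RHEL 6: python-sphinx comes from the RHEL optional channel rather than EPEL, so that channel has to be enabled before retrying ceph-deploy; the repo id below is an assumption to verify against your subscription setup:

  # Enable the optional channel (id may differ, e.g. on RHN-classic systems)
  subscription-manager repos --enable=rhel-6-server-optional-rpms
  yum clean all
  yum install python-sphinx python-flask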

[ceph-users] radosgw-agent failed to sync object

2013-11-06 Thread lixuehui
Hi all: We built a region with two zones distributed across two Ceph clusters and started the agent, and it starts working. But what we see in the radosgw-agent stdout is that it fails to sync objects all the time. Pasting the info: (env)root@ceph-rgw41:~/myproject# ./radosgw-agent -c cluster-data-sync.con

Re: [ceph-users] Puppet Modules for Ceph

2013-11-06 Thread Don Talton (dotalton)
Hi Karan, 1. Not tested on CentOS at all. But since the work is done using ceph-deploy, it *should* be the same. 2. Everything supported by ceph-deploy (mon, osd, mds). 3. Change the dpkg command to the equivalent rpm command to test whether or not a package is already installed. https://github.

Re: [ceph-users] ceph 0.72 with zfs

2013-11-06 Thread Sage Weil
Hi Dinu, You currently need to compile it yourself and pass --with-zfs to ./configure. Once it is built in, ceph-osd will detect on its own whether the underlying fs is zfs. sage On Wed, 6 Nov 2013, Dinu Vlad wrote: > Hello, > > I'm testing the 0.72 release and thought to give a spin to the
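A minimal sketch of the build Sage describes, assuming the zfs/libzfs development headers are already installed; double-check the exact flag name against ./configure --help for your release:

  git clone --recursive https://github.com/ceph/ceph.git
  cd ceph
  ./autogen.sh
  ./configure --with-zfs   # flag as mentioned above
  make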

Re: [ceph-users] Ceph User Committee

2013-11-06 Thread Sage Weil
On Thu, 7 Nov 2013, Alek Paunov wrote: > When a Ceph architect/admin has a successful, tuned cluster, if she is > willing to share (or just keep as documentation), she describes the setup under > her account (with private bits obfuscated). I think this is a great idea. One of the big questions users

Re: [ceph-users] Ceph User Committee

2013-11-06 Thread Loic Dachary
Hi Alek, On 07/11/2013 09:03, Alek Paunov wrote: > On 06.11.2013 19:35, Loic Dachary wrote: >> Hi Ceph, >> >> I would like to open a discussion about organizing a Ceph User Committee. We >> briefly discussed the idea with Ross Turk, Patrick McGarry and Sage Weil >> today during the OpenStack sum

Re: [ceph-users] Ceph User Committee

2013-11-06 Thread Loic Dachary
On 07/11/2013 03:59, Mike Dawson wrote: > I also have time I could spend. Cool :-) Would you like to spend the time you have to advance http://wiki.ceph.com/01Planning/02Blueprints/Firefly/Ceph-Brag ? > Thanks for getting this started Loic! > > Thanks, > Mike Dawson > > > On 11/6/2013 12:35

Re: [ceph-users] rbd on ubuntu 12.04 LTS

2013-11-06 Thread Gregory Farnum
How interesting; it looks like that command was added post-dumpling and not backported. It's probably suitable for backport; I've also created a ticket to create docs for this (http://tracker.ceph.com/issues/6731). Did you create this cluster on an older development release? That should be the only

Re: [ceph-users] Ceph User Committee

2013-11-06 Thread james
On 2013-11-07 01:03, Alek Paunov wrote: On the other side, I think, the Ceph community is able to help further with the wider and smoother Ceph adoption (further than current mailing list participation in the support) This was my thinking behind a forum format - most sysadmins, and especially

Re: [ceph-users] USB pendrive as boot disk

2013-11-06 Thread james
On 2013-11-07 01:02, Mark Kirkwood wrote: The SSD failures I've seen have all been firmware bugs rather than flash wearout. This has the effect that a RAID1 pair is likely to fail at the same time! Very interesting... and good reason to use two different drives perhaps. The SuperMicro 2U 12

Re: [ceph-users] USB pendrive as boot disk

2013-11-06 Thread Mark Kirkwood
On 07/11/13 20:22, ja...@peacon.co.uk wrote: On 2013-11-07 01:02, Mark Kirkwood wrote: The SSD failures I've seen have all been firmware bugs rather than flash wearout. This has the effect that a RAID1 pair is likely to fail at the same time! Very interesting... and good reason to use two dif