Re: [ceph-users] Use of RDB Kernel module

2013-07-03 Thread Howarth, Chris
Jens (Mike) - thanks for the response here. This is starting to make sense, but one point I am still missing: I have a Fedora 18 client and I have loaded the rbd kernel module, but the next commands on the Ceph website, which involve getting a list of images, use the rbd command, which is

Re: [ceph-users] Use of RDB Kernel module

2013-07-03 Thread Jens Kristian Søgaard
Hi Chris, [root@ock tmp]# rbd list bash: rbd: command not found... Do I also need to install the Ceph packages to use rbd? Yes, you will need Ceph installed to be able to use user-space commands like the "rbd" tool. Also, how does the client know how to connect to the cluster? Should /etc
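
A minimal sketch of the client-side setup Jens describes, assuming the stock Fedora 18 packages and a ceph-deploy-style admin keyring; the host name "mon-host" is a placeholder:

    # install the user-space tools on the client
    yum install ceph

    # copy the cluster config and a keyring from an existing node so the
    # client knows which monitors to contact and how to authenticate
    scp mon-host:/etc/ceph/ceph.conf /etc/ceph/ceph.conf
    scp mon-host:/etc/ceph/ceph.client.admin.keyring /etc/ceph/

    # the rbd tool can now reach the cluster
    rbd list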

Re: [ceph-users] Problem with data distribution

2013-07-03 Thread Pierre BLONDEAU
On 01/07/2013 19:17, Gregory Farnum wrote: On Mon, Jul 1, 2013 at 10:13 AM, Alex Bligh wrote: On 1 Jul 2013, at 17:37, Gregory Farnum wrote: Oh, that's out of date! PG splitting is supported in Cuttlefish: "ceph osd pool set pg_num " http://ceph.com/docs/master/rados/operations/control/#
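
For reference, a sketch of that splitting step, with the pool name and target value filled in from later in this thread:

    # current placement-group count for the pool
    ceph osd pool get data pg_num

    # raise it (PG splitting, supported from Cuttlefish onwards)
    ceph osd pool set data pg_num 1800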

Re: [ceph-users] Use of RDB Kernel module

2013-07-03 Thread Howarth, Chris
Many thanks Jens - much appreciated Chris -Original Message- From: Jens Kristian Søgaard [mailto:j...@mermaidconsulting.dk] Sent: 03 July 2013 10:04 To: Howarth, Chris [CCC-OT_IT] Cc: ceph-us...@ceph.com Subject: Re: [ceph-users] Use of RDB Kernel module Hi Chris, > [root@ock tmp]# rbd

[ceph-users] consumer nas as osd

2013-07-03 Thread James Harper
Has anyone used a consumer-grade NAS (Netgear, QNAP, D-Link, etc.) as an OSD before? The QNAP TS-421 has a Marvell 2 GHz CPU, 1 GB of memory, dual gigabit Ethernet, and 4 hot-swap disk bays. Is there anything about the Marvell CPU that would make an OSD run badly? What about a mon? Thanks James

Re: [ceph-users] Help Recovering Ceph cluster

2013-07-03 Thread Gregory Farnum
Hey Jon, Sorry nobody's been able to help you so far; I think your emails must have fallen through the cracks. :( I'm going to go through and try to address some of the things that sound like they might still be relevant... On Tue, Jul 2, 2013 at 5:05 PM, Jon wrote: > Now if I could figure out the

Re: [ceph-users] Problem with data distribution

2013-07-03 Thread Pierre BLONDEAU
On 03/07/2013 11:12, Pierre BLONDEAU wrote: On 01/07/2013 19:17, Gregory Farnum wrote: On Mon, Jul 1, 2013 at 10:13 AM, Alex Bligh wrote: On 1 Jul 2013, at 17:37, Gregory Farnum wrote: Oh, that's out of date! PG splitting is supported in Cuttlefish: "ceph osd pool set pg_num " http:/

[ceph-users] Ceph Developer Summit: Emperor

2013-07-03 Thread Ross Turk
Hi, all! It's time to start planning our Ceph Developer Summit again!  This summit is where planning for the upcoming Emperor release will happen, and attendance is (as always) open to all. It will be a virtual summit using IRC, Etherpads, and Google Hangouts. Here's our high-level summit calenda

Re: [ceph-users] Problem with data distribution

2013-07-03 Thread Michael Lowe
Did you also set the pgp_num? As I understand it, the newly created PGs aren't considered for placement until you increase pgp_num, aka the effective PG number. Sent from my iPad On Jul 3, 2013, at 11:54 AM, Pierre BLONDEAU wrote: > On 03/07/2013 11:12, Pierre BLONDEAU wrote: >> On 01/07/201
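
A sketch of the step Michael means, continuing the example above (pool name and value mirror the pg_num change earlier in the thread):

    # make the newly created PGs eligible for placement
    ceph osd pool set data pgp_num 1800

    # watch the data start rebalancing
    ceph -w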

Re: [ceph-users] Problem with data distribution

2013-07-03 Thread Gregory Farnum
On Wed, Jul 3, 2013 at 2:12 AM, Pierre BLONDEAU wrote: > Hi, > > Thank you very much for your answer. Sorry for the late reply, but reconfiguring a 67T cluster takes a while ;) > > My PG count was actually far too low: > > ceph osd pool get data pg_num > pg_num: 48 > > As I'm not sure of
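
For sizing, the usual rule of thumb from the Ceph documentation of that era is roughly (number of OSDs * 100) / replica count, rounded up to the nearest power of two; the figures below are a made-up example, not Pierre's cluster:

    # e.g. 24 OSDs with 3 replicas:
    #   (24 * 100) / 3 = 800  ->  round up to 1024 PGs for the pool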

[ceph-users] Using ceph with SLES11 SP2

2013-07-03 Thread Hariharan Thantry
Hi folks, I'm trying to get a Ceph cluster going on machines running the SLES11 SP2 Xen kernel. Ideally, I'd like it to work without a kernel upgrade (my current kernel is 3.0.13-0.27-xen), because we'd like to deploy this on some funky hardware (telco provider) that currently has this kernel version run

Re: [ceph-users] librbd read caching

2013-07-03 Thread Wido den Hollander
On 07/02/2013 10:29 PM, Sage Weil wrote: Hi Wido! On Tue, 2 Jul 2013, Wido den Hollander wrote: Something in the back of my mind keeps saying that there were plans to implement read caching in librbd, but I haven't been able to find any reference to it. In the tracker, however, I wasn't abl
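
For context, the client-side cache librbd already has is enabled per client in ceph.conf; a sketch with placeholder values (option names as documented for librbd at the time):

    [client]
        rbd cache = true
        rbd cache size = 67108864      # 64 MB per client process
        rbd cache max dirty = 0        # 0 = write-through; reads are still served from the cache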

[ceph-users] cluster name different from ceph

2013-07-03 Thread Robert Sander
Hi, The documentation states that cluster names that differ from "ceph" are possible and should be used when running multiple clusters on the same hardware. But it seems that all the tools (especially ceph-deploy and the init scripts) are quite hardcoded to the name "ceph". I am trying to set up a cl
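
On the command-line side, most user-space tools take a --cluster option and then look for /etc/ceph/<name>.conf; a sketch with the placeholder cluster name "backup" (the init scripts and ceph-deploy are a separate question, as noted):

    ceph --cluster backup -s
    rbd --cluster backup list

    # expected to exist on the node:
    #   /etc/ceph/backup.conf
    #   /etc/ceph/backup.client.admin.keyring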

Re: [ceph-users] Problem with data distribution

2013-07-03 Thread Vladislav Gorbunov
>ceph osd pool set data pg_num 1800 >And I do not understand why OSDs 16 and 19 are hardly used Actually, you need to change pgp_num for real data rebalancing: ceph osd pool set data pgp_num 1800 Check it with the command: ceph osd dump | grep 'pgp_num' 2013/7/3 Pierre BLONDEAU : > On 01/07

Re: [ceph-users] cluster name different from ceph

2013-07-03 Thread Gregory Farnum
Hmm, yeah. What documentation are you looking at exactly? I don't think we test or have built a lot of the non-"ceph" handling required throughout, though with careful setups it should be possible. -Greg On Wednesday, July 3, 2013, Robert Sander wrote: > Hi, > > The documentation states that clus