[ceph-users] Default pool ruleset problem

2014-05-06 Thread Cao, Buddy
Hi, after I set up a ceph cluster through mkcephfs, there are 3 default pools named metadata/data/rbd, and I notice that the 3 default pools are assigned to rulesets 0, 1 and 2 respectively. What if I refresh the crushmap with only 1 ruleset? It means at least two default pools would have no correspo…

Re: [ceph-users] Default pool ruleset problem

2014-05-06 Thread Jean-Charles Lopez
Hi, before removing the rules, modify all pools to use the same crush rule with the following command: ceph osd pool set pool-name crush_ruleset n, where pool-name is the name of the pool and n is the rule number you wish to keep. This will make sure your cluster remains healthy. My 2 cts, JC. Sent from…
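
For reference, a minimal sketch of that advice, assuming rule 0 is the one being kept and the three default pools from the question still exist:

    # point every pool at the surviving crush rule before editing the crushmap
    ceph osd pool set data crush_ruleset 0
    ceph osd pool set metadata crush_ruleset 0
    ceph osd pool set rbd crush_ruleset 0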

[ceph-users] Replace journals disk

2014-05-06 Thread Gandalf Corvotempesta
Hi to all, I would like to replace a disk used as a journal (one partition for each OSD). Which is the safest method to do so?

Re: [ceph-users] Replace journals disk

2014-05-06 Thread Andrija Panic
Good question - I'm also interested. Do you want to move the journal to a dedicated disk/partition, i.e. on an SSD, or just replace a (failed) disk with a new/bigger one? I was thinking (for moving the journal to a dedicated disk) about changing symbolic links or similar, on /var/lib/ceph/osd/osd-x/journal... ? Regar…

Re: [ceph-users] Replace journals disk

2014-05-06 Thread Gandalf Corvotempesta
2014-05-06 12:39 GMT+02:00 Andrija Panic : > Good question - I'm also interested. Do you want to move the journal to a dedicated > disk/partition, i.e. on an SSD, or just replace a (failed) disk with a new/bigger one? I would like to replace the disk with a bigger one (in fact, my new disk is smaller, but this…

Re: [ceph-users] Replace journals disk

2014-05-06 Thread Andrija Panic
If you have a dedicated disk for the journal that you want to replace, consider (this may not be optimal, but it crosses my mind...) stopping the OSD (if that is possible), maybe with "noout" etc., then dd the old disk to the new one, and just resize the file system and partitions if needed... I guess there is a more elega…

Re: [ceph-users] Replace journals disk

2014-05-06 Thread Dan Van Der Ster
I've followed this recipe successfully in the past: http://wiki.skytech.dk/index.php/Ceph_-_howto,_rbd,_lvm,_cluster#Add.2Fmove_journal_in_running_cluster On May 6, 2014 12:34 PM, Gandalf Corvotempesta wrote: > > Hi to all, > I would like to replace a disk used as journal (one partition for eac…
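
The recipe in that link boils down to roughly the following (a sketch only, with OSD id 2 as a placeholder; the exact service commands vary by distro):

    ceph osd set noout               # keep the cluster from rebalancing while the OSD is down
    service ceph stop osd.2
    ceph-osd -i 2 --flush-journal    # write any pending journal entries to the data disk
    # ... replace the journal disk/partition and fix the journal symlink ...
    ceph-osd -i 2 --mkjournal        # initialize the new journal
    service ceph start osd.2
    ceph osd unset noout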

Re: [ceph-users] Replace journals disk

2014-05-06 Thread Gandalf Corvotempesta
2014-05-06 13:08 GMT+02:00 Dan Van Der Ster : > I've followed this recipe successfully in the past: > > http://wiki.skytech.dk/index.php/Ceph_-_howto,_rbd,_lvm,_cluster#Add.2Fmove_journal_in_running_cluster I'll try, but my ceph.conf doesn't have any "osd journal" setting set (I'm using ceph-ansibl…

Re: [ceph-users] About ceph.conf

2014-05-06 Thread John Wilkins
Buddy, There are significant changes between mkcephfs and ceph-deploy. The mkcephfs script is fairly antiquated now and you should be using ceph-deploy or some other method of deployment in our newer releases. The mkcephfs script would read the ceph.conf file during deployment and bootstrap monit…

[ceph-users] RBD on Mac OS X

2014-05-06 Thread Pavel V. Kaygorodov
Hi! I want to use ceph for Time Machine backups on Mac OS X. Is it possible to map RBD or mount CephFS on a Mac directly, for example using osxfuse? Or is the only way to do this to make an intermediate Linux server? Pavel.

Re: [ceph-users] RBD on Mac OS X

2014-05-06 Thread Wolfgang Hennerbichler
I'd use RBD-to-iSCSI software and attach it via iSCSI on Mac OS X. On Tue, May 06, 2014 at 03:28:21PM +0400, Pavel V. Kaygorodov wrote: > Hi! > > I want to use ceph for Time Machine backups on Mac OS X. > Is it possible to map RBD or mount CephFS on a Mac directly, for example, using > osxfuse…

Re: [ceph-users] RBD on Mac OS X

2014-05-06 Thread Andrey Korolyov
You can do this for sure using the iSCSI re-export approach; AFAIK no working RBD implementation for OS X exists. On Tue, May 6, 2014 at 3:28 PM, Pavel V. Kaygorodov wrote: > Hi! > > I want to use ceph for Time Machine backups on Mac OS X. > Is it possible to map RBD or mount CephFS on a Mac directly, for…
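
One common way to do that re-export at the time was tgt, which can be built with an rbd backstore; a rough sketch (the target name and image name are made up, not from the thread):

    # expose the RBD image rbd/tm-image as LUN 1 of an iSCSI target
    tgtadm --lld iscsi --mode target --op new --tid 1 \
        --targetname iqn.2014-05.com.example:timemachine
    tgtadm --lld iscsi --mode logicalunit --op new --tid 1 --lun 1 \
        --bstype rbd --backing-store rbd/tm-image
    tgtadm --lld iscsi --mode target --op bind --tid 1 --initiator-address ALL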

Re: [ceph-users] Replace journals disk

2014-05-06 Thread Fred Yang
On May 6, 2014 7:12 AM, "Gandalf Corvotempesta" < gandalf.corvotempe...@gmail.com> wrote: > > 2014-05-06 13:08 GMT+02:00 Dan Van Der Ster : > > I've followed this recipe successfully in the past: > > > > http://wiki.skytech.dk/index.php/Ceph_-_howto,_rbd,_lvm,_cluster#Add.2Fmove_journal_in_running_

[ceph-users] Ceph installation

2014-05-06 Thread Sakhi Hadebe
Hi, I am doing a fresh ceph installation and I am hit by the error below when restarting the ceph service: root@nodeA# service ceph restart === osd.0 === === osd.0 === Stopping Ceph osd.0 on nodeA...done === osd.0 === df: `/var/lib/ceph/osd/ceph-0/.': No such file or directory df: no…

Re: [ceph-users] Ceph installation

2014-05-06 Thread Srinivasa Rao Ragolu
First of all you need to create the /var/lib/ceph/osd/ceph-0 folder, then try again. Thanks, Srinivas. On Tue, May 6, 2014 at 5:41 PM, Sakhi Hadebe wrote: > Hi, > > I am doing a fresh ceph installation and I am hit by the error below > when restarting the ceph service: > > root@nodeA# service ceph re…
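
In other words, something along these lines (the device name is only an example; whatever partition actually holds osd.0's data has to be mounted there):

    mkdir -p /var/lib/ceph/osd/ceph-0
    mount /dev/sdb1 /var/lib/ceph/osd/ceph-0   # example device for osd.0's data
    service ceph start osd.0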

Re: [ceph-users] Replace journals disk

2014-05-06 Thread Pavel V. Kaygorodov
Hi! I'm not a specialist, but I think it would be better to move the journals to another place first (stopping each OSD, moving its journal file to an HDD, and starting it again), replace the SSD, and then move the journals to the new drive, again one-by-one. The "noout" mode can help. Pavel. On May 6, 2014, at 14:34…

[ceph-users] View or set Policy

2014-05-06 Thread Shashank Puntamkar
How do I view the policy associated with a bucket? When I give the command to view the policy, it returns no results. The command for a bucket named "test" is given as "radosgw-admin policy bucket:test" (hope it is correct). Or do I need to associate a policy with the bucket first? If so, then how? Regards
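
For what it's worth, radosgw-admin takes the bucket as an option rather than a positional argument; a sketch, using the "test" bucket from the question:

    radosgw-admin policy --bucket=test
    # policy of a single object inside the bucket:
    radosgw-admin policy --bucket=test --object=some-object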

[ceph-users] Fwd: Ceph perfomance issue!

2014-05-06 Thread Sebastien Han
What does "very high load" mean for you? Sébastien Han Cloud Engineer "Always give 100%. Unless you're giving blood." Phone: +33 (0)1 49 70 99 72 Mail: sebastien@enovance.com Address: 11 bis, rue Roquépine - 75008 Paris Web: www.enovance.com - Twitter: @enovance Begin forward…

Re: [ceph-users] About ceph.conf

2014-05-06 Thread Sage Weil
Hi Wei, On Tue, 6 May 2014, Cao, Buddy wrote: > Given the change from mkcephfs to ceph-deploy, I feel ceph.conf > is not a recommended way to manage ceph configuration. Is it true? If > so, how do I get the configurations previously configured in ceph.conf? > e.g., data drive, journal driv…

Re: [ceph-users] Replace journals disk

2014-05-06 Thread Gandalf Corvotempesta
2014-05-06 14:09 GMT+02:00 Fred Yang : > The journal location is not in ceph.conf, check > /var/lib/ceph/osd/ceph-X/journal, which is a symlink to the osd's journal > device. The symlink points to a partition UUID, which prevents replacement without manual intervention: journal -> /dev/disk/by-pa…

Re: [ceph-users] Replace journals disk

2014-05-06 Thread Gandalf Corvotempesta
2014-05-06 16:33 GMT+02:00 Gandalf Corvotempesta : > The symlink points to a partition UUID, which prevents replacement > without manual intervention: > > journal -> /dev/disk/by-partuuid/b234da10-dcad-40c7-aa97-92d35099e5a4 > > Is it not possible to create a symlink pointing to a device? > My new disk…
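
A sketch of fixing it by hand after the swap (the OSD id and UUID are placeholders, not taken from the thread; the OSD should be stopped first):

    # find the new journal partition's uuid, then repoint the symlink and rebuild
    ls -l /dev/disk/by-partuuid/
    ln -sf /dev/disk/by-partuuid/<new-uuid> /var/lib/ceph/osd/ceph-2/journal
    ceph-osd -i 2 --mkjournal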

[ceph-users] advice with hardware configuration

2014-05-06 Thread Xabier Elkano
Hi, I'm designing a new ceph pool with new hardware and I would like to receive some suggestions. I want to use a replica count of 3 in the pool, and the idea is to buy 3 new servers with a 10-drive 2.5" chassis each and two 10Gbps NICs. I have in mind two configurations: 1- With journal in SSDs O…

Re: [ceph-users] advice with hardware configuration

2014-05-06 Thread Wido den Hollander
On 05/06/2014 05:07 PM, Xabier Elkano wrote: Hi, I'm designing a new ceph pool with new hardware and I would like to receive some suggestions. I want to use a replica count of 3 in the pool and the idea is to buy 3 new servers with a 10-drive 2.5" chassis each and 2 10Gbps NICs. I have in mind t…

Re: [ceph-users] Migrate system VMs from local storage to CEPH

2014-05-06 Thread Wido den Hollander
On 05/05/2014 11:40 PM, Andrija Panic wrote: Hi Wido, thanks again for the inputs. Everything is fine, except for the Software Router - it doesn't seem to get created on Ceph, no matter what I try. There is a separate offering for the VR, have you checked that? But this is more something for th…

Re: [ceph-users] advice with hardware configuration

2014-05-06 Thread Sergey Malinin
My vision of a well-built node is one where the number of journal disks is equal to the number of data disks. You definitely don't want to lose 3 journals at once in case of a single drive failure. > On May 6, 2014, at 18:07, Xabier Elkano wrote: > > > Hi, > > I'm designing a new ceph pool with new har…

Re: [ceph-users] advice with hardware configuration

2014-05-06 Thread Xabier Elkano
On 06/05/14 17:57, Sergey Malinin wrote: > My vision of a well-built node is one where the number of journal disks is equal to > the number of data disks. You definitely don't want to lose 3 journals at once in > case of a single drive failure. Thanks for your response. This is true, a single SSD failure also…

Re: [ceph-users] advice with hardware configuration

2014-05-06 Thread Christian Balzer
On Tue, 6 May 2014 18:57:04 +0300 Sergey Malinin wrote: > My vision of a well-built node is one where the number of journal disks is equal > to the number of data disks. You definitely don't want to lose 3 journals at > once in case of a single drive failure. > While that certainly is true, not everybody is havi…

Re: [ceph-users] advice with hardware configuration

2014-05-06 Thread Xabier Elkano
On 06/05/14 17:51, Wido den Hollander wrote: > On 05/06/2014 05:07 PM, Xabier Elkano wrote: >> >> Hi, >> >> I'm designing a new ceph pool with new hardware and I would like to >> receive some suggestions. >> I want to use a replica count of 3 in the pool and the idea is to buy 3 >> new servers wi…

Re: [ceph-users] advice with hardware configuration

2014-05-06 Thread Xabier Elkano
On 06/05/14 18:17, Christian Balzer wrote: > On Tue, 6 May 2014 18:57:04 +0300 Sergey Malinin wrote: > >> My vision of a well-built node is one where the number of journal disks is equal >> to the number of data disks. You definitely don't want to lose 3 journals at >> once in case of a single drive failure. >…

Re: [ceph-users] advice with hardware configuration

2014-05-06 Thread Christian Balzer
Hello, On Tue, 06 May 2014 17:07:33 +0200 Xabier Elkano wrote: > > Hi, > > I'm designing a new ceph pool with new hardware and I would like to > receive some suggestions. > I want to use a replica count of 3 in the pool and the idea is to buy 3 > new servers with a 10-drive 2.5" chassis each an…

Re: [ceph-users] RBD on Mac OS X

2014-05-06 Thread Mike Bryant
We're using netatalk on top of CephFS for serving Time Machine out to clients. It's so-so - Apple's support for Time Machine on other AFP servers isn't brilliant. On 6 May 2014 12:32, Andrey Korolyov wrote: > You can do this for sure using the iSCSI re-export approach; AFAIK no > working RBD implement…

Re: [ceph-users] advice with hardware configuration

2014-05-06 Thread Sergey Malinin
> but the journal SSDs are intel SC3700 and they should be very reliable. Cabling problems, controller failure, intermittent disconnects, ... I've seen it all, who knows what's going to happen to yours :)

Re: [ceph-users] RBD on Mac OS X

2014-05-06 Thread LaSalle, Jurvis
Apple's support for TM on its own "servers" isn't brilliant either. From: Mike Bryant <m...@mikebryant.me.uk> Date: Tuesday, May 6, 2014 at 1:00 PM To: Andrey Korolyov <and...@xdel.ru> Cc: "ceph-us...@ceph.com" <ceph-us...@ceph.com> Subject: Re: […

Re: [ceph-users] advice with hardware configuration

2014-05-06 Thread Dimitri Maziuk
On 05/06/2014 11:34 AM, Xabier Elkano wrote: > OS: 2xSSD intel SC3500 100G Raid 1 Why would you put the OS on SSDs? If you buy enough RAM so it doesn't swap, about the only I/O on the system drive will be logging. All that'd do is wear out your SSDs, not that there's much of that going on. (Our server…

Re: [ceph-users] advice with hardware configuration

2014-05-06 Thread Cedric Lemarchand
On 06/05/2014 17:07, Xabier Elkano wrote: > the goal is performance over capacity. I am sure you already considered the "full SSD" option, did you? -- Cédric

Re: [ceph-users] Replace journals disk

2014-05-06 Thread Craig Lewis
On 5/6/14 03:34, Gandalf Corvotempesta wrote: > Hi to all, > I would like to replace a disk used as a journal (one partition for each OSD). > Which is the safest method to do so?…

Re: [ceph-users] advice with hardware configuration

2014-05-06 Thread Sergey Malinin
If you plan to scale up in the future you could consider the following config to start with: pool size = 2, 3 x servers with OS+journal on 1 SSD, 3 journal SSDs, and 4 x 900 GB data disks. That gets you 5+ TB of capacity, and you will be able to increase the pool size to 3 at some point in time. > The idea…

Re: [ceph-users] advice with hardware configuration

2014-05-06 Thread Craig Lewis
On 5/6/14 08:07, Xabier Elkano wrote: > Hi, I'm designing a new ceph pool with new hardware and I would like to > 1- With journal in SSDs > 2- With journal in a partition in the spinners. I don't have enough experience to give advice, so I'll tell my story. I'm using RGW, and my only perfor…

Re: [ceph-users] some unfound object

2014-05-06 Thread Gregory Farnum
[ Re-adding list ] On Mon, May 5, 2014 at 8:33 PM, vernon1...@126.com wrote: > When I invoke "mark_unfound_lost", it gave an error: > > [root@ceph2 ~]# ceph pg 50.5 mark_unfound_lost revert > Error EINVAL: pg has 43 unfound objects but we haven't probed all sources, > not marking lost > > What can…
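
One way to see which sources still need probing is to query the PG; a sketch, using the PG from the thread:

    ceph pg 50.5 query   # the recovery_state section lists the OSDs that still need to be probed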

Re: [ceph-users] Replace journals disk

2014-05-06 Thread Gandalf Corvotempesta
2014-05-06 19:40 GMT+02:00 Craig Lewis : > I haven't tried this yet, but I imagine that the process is similar to > moving your journal from the spinning disk to an SSD. My journals are on SSD. I have to replace that SSD. ___ ceph-users mailing list ceph

Re: [ceph-users] Migrate system VMs from local storage to CEPH

2014-05-06 Thread Andrija Panic
I apologize, I posted to the wrong mailing list - too many emails these days :) @Wido, yes I did check and there is a separate offering, but you can't change it the same way you change it for the CPVM and SSVM... Will post to the CS mailing list, sorry for this.. On 6 May 2014 17:52, Wido den Hollander wrote: > On 0…

Re: [ceph-users] cannot revert lost objects

2014-05-06 Thread Craig Lewis
I ran into this problem too. I don't know what I did to fix it. I tried ceph pg scrub <pgid>, ceph pg deep-scrub <pgid>, and ceph osd scrub <osd-id>. None of them had an immediate effect. In the end, it finally cleared several days later in the middle of the night. I can't even say what or when it finally cle…

[ceph-users] Open Source Storage Hackathon Before OpenStack Summit

2014-05-06 Thread Patrick McGarry
Hey Cephers, Just wanted to let you guys know that there will be a Red Hat Open Source Storage Hackathon the Sunday before ODS (May 11). For more details you can check out the brief blog writeup: http://ceph.com/community/storage-hackathon-openstack-developer-summit/ Make sure you swing by the

[ceph-users] Ceph OpenStack Integration

2014-05-06 Thread Derek Yarnell
Hi, I have a Ceph cluster and an OpenStack installation running Havana (using RDO). When I create a volume from an image and boot an instance from it, the Image Name field is empty when listing the instances in OpenStack. Is this normal? Thanks, derek -- Derek T. Yarnell University of Maryland In…

[ceph-users] v0.80 Firefly released

2014-05-06 Thread Sage Weil
We did it! Firefly v0.80 is built and pushed out to the ceph.com repositories. This release will form the basis for our long-term supported release Firefly, v0.80.x. The big new features are support for erasure coding and cache tiering, although a broad range of other features, fixes, and impro…

[ceph-users] Delete pool .rgw.bucket and objects within it

2014-05-06 Thread Thanh Tran
Hi, if I use the command "ceph osd pool delete .rgw.bucket .rgw.bucket --yes-i-really-really-mean-it" to delete the pool .rgw.bucket, will this delete the pool and its objects, and clean the data on the OSDs? Best regards, Thanh Tran
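
For reference, a quick way to confirm the pool and its data are gone afterwards (a sketch; space on the OSDs is reclaimed asynchronously in the background):

    ceph df          # the pool should no longer be listed
    rados lspools    # likewise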

Re: [ceph-users] Replace journals disk

2014-05-06 Thread Indra Pramana
Hi Craig and all, I checked Sébastien Han's blog post, and it seems that the way the journal was mounted is a bit different; is it because the article was based on an older version of Ceph? $ sudo mount /dev/sdc /journal $ ceph-osd -i 2 --mkjournal 2012-08-16 13:29:58.735095 7ff0c4b58780 -1 cr…

Re: [ceph-users] About ceph.conf

2014-05-06 Thread Cao, Buddy
Thanks Sage, that clears up my confusion, especially about the osd journal/data/keyring. And good to know that ceph-disk is the right tool to use. Wei Cao (Buddy) -----Original Message----- From: Sage Weil [mailto:s...@inktank.com] Sent: Tuesday, May 6, 2014 10:18 PM To: Cao, Buddy Cc: ceph-us...@ceph.com Subj…