Re: [ceph-users] [solved] Changing CRUSH rule on a running cluster

2013-03-08 Thread Marco Aroldi
Hi Olivier, can you post the steps you took here on the mailing list? From the IRC logs you said "if I use "choose osd", it works -- but "chooseleaf ... host" doesn't work". So, to have data balanced between 2 rooms, is the rule "step chooseleaf firstn 0 type room" correct? Thanks -- Marco
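For reference, a minimal sketch of the rule Marco is asking about, as it would appear in a decompiled CRUSH map of this era (the rule name and ruleset number are hypothetical):

    rule replicated_rooms {
        ruleset 1
        type replicated
        min_size 1
        max_size 10
        step take default
        step chooseleaf firstn 0 type room
        step emit
    }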

[ceph-users] CfP & CfW IEEE International Conference on Big Data

2013-03-08 Thread Christopher Kunz
Hi all, since this list has many members working with what we now call "Big Data" infrastructures, and there's also a healthy number of scientific members, I figured the following CfP might be of interest. I am on the PC and the program chair is my former PhD advisor, so it's not completely unrelated.

Re: [ceph-users] [solved] Changing CRUSH rule on a running cluster

2013-03-08 Thread Olivier Bonvalet
Hi, yes, «step chooseleaf firstn 0 type room» is correct. My problem was that «crushtool --test» was reporting «bad mapping» PGs, and some tests on an *empty* pool gave me PGs stuck in «active+remapped». After some testing and reading, I saw that: «For large clusters, some small percentages o
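For anyone reproducing Olivier's check, a hedged sketch of the crushtool workflow that surfaces those «bad mapping» reports (file names and the rule number are assumptions):

    # compile the edited map, then simulate placement for 2 replicas
    crushtool -c crushmap.txt -o crushmap.bin
    crushtool -i crushmap.bin --test --rule 1 --num-rep 2 --show-bad-mappings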

[ceph-users] Raw disks under OSDs or HW-RAID6 is better?

2013-03-08 Thread Mihály Árva-Tóth
Hello, We're planning 3 hosts with 12 HDDs in each host. Which is better: setting up a 1 OSD per 1 HDD structure, or creating a hardware RAID-6 across all 12 HDDs so that a single OSD uses the whole disk space in each host? Thank you, Mike
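For context, the 1 OSD per HDD layout in a mkcephfs-era ceph.conf would look roughly like this; the host name and device paths are hypothetical, and only the first two of twelve sections are shown:

    [osd.0]
        host = node1
        devs = /dev/sdb

    [osd.1]
        host = node1
        devs = /dev/sdc
    ; ...and so on, one [osd.N] section per remaining disk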

[ceph-users] Before you put journals on SSDs

2013-03-08 Thread Dimitri Maziuk
read https://www.usenix.org/conference/fast13/understanding-robustness-ssds-under-power-fault Dima

[ceph-users] Configuration file

2013-03-08 Thread waed Albataineh
I'm using v0.56. If I don't want to specify the osd mkfs, osd mount, and devs settings in the configuration file, will there be default values?

Re: [ceph-users] Configuration file

2013-03-08 Thread John Wilkins
Waed, These are optional settings. If you specify them, Ceph will create the file system for you. If you are just performing a local install for testing purposes, you can omit the values. You'd need to have a 'devs' setting for each OSD in order for mkcephfs to build a file system for you. The def
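A hedged example of what those optional settings look like when they are set explicitly (the values are illustrative, not the defaults):

    [osd]
        osd mkfs type = xfs
        osd mkfs options xfs = -f
        osd mount options xfs = rw,noatime

    [osd.0]
        host = node1
        devs = /dev/sdb1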

[ceph-users] promising prospects 3/8/2013

2013-03-08 Thread Anthony Elias
Friend, My name is Anthony Elias; I invite you to partner with me a rewarding business venture, Details on your indication of interest. My email is: telias...@yahoo.com . Regards, Anthony

[ceph-users] Planning for many small files

2013-03-08 Thread Rustam Aliyev
Hi, We need to store ~500M small files (<1MB) and we were looking at the RadosGW solution. We expect about 20 ops/sec (read+write). I'm trying to understand how monitor nodes store CRUSH maps and what the limitations are. For instance, is there any recommended max number of objects per Mo
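On the CRUSH map half of the question: the map the monitors hold can be pulled and inspected directly. A minimal sketch, assuming the output paths (the per-object limits question is separate):

    # fetch the current CRUSH map from the monitors and decompile it
    ceph osd getcrushmap -o /tmp/crushmap.bin
    crushtool -d /tmp/crushmap.bin -o /tmp/crushmap.txt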

Re: [ceph-users] I/O Speed Comparisons

2013-03-08 Thread Andrew Thrift
Mark, I would just like to add that we too are seeing the same behavior with QEMU/KVM/RBD. Maybe it is a common symptom of high IO with this setup. Regards, Andrew On 3/8/2013 12:46 AM, Mark Nelson wrote: On 03/07/2013 05:10 AM, Wolfgang Hennerbichler wrote: On 03/06/2013 02:31 PM, M
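Not stated anywhere in this thread, but one variable commonly tested in QEMU/KVM/RBD setups of this era was RBD client caching; a hedged sketch of a drive specification (the pool and image names are assumptions):

    # attach an RBD image with client-side caching enabled (illustrative only)
    qemu-system-x86_64 ... \
      -drive format=rbd,file=rbd:rbd/vm-disk1:rbd_cache=true,cache=writeback,if=virtio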