[ceph-users] How does a client choose among replicas?

2014-02-11 Thread JinHwan Hwang
I thought that Ceph evenly distributes client requests among replicas, but I've seen an example rule set that assigns the primary replica to SSD and the other two to HDD. Can I assign something like read affinity for a replica? Or does Ceph always choose the primary replica first, unless the primary crashes?
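For reference, a rule set of the kind described here (primary replica on SSD, remaining replicas on HDD) looks roughly like the ssd-primary example in the CRUSH documentation; a minimal sketch, assuming CRUSH roots named ssd and platter already exist in the map:

    rule ssd-primary {
        ruleset 4
        type replicated
        min_size 1
        max_size 10
        # first replica (the primary) comes from the ssd root
        step take ssd
        step chooseleaf firstn 1 type host
        step emit
        # remaining replicas come from the hdd root
        step take platter
        step chooseleaf firstn -1 type host
        step emit
    }

Note that such a rule only controls placement: by default clients still send reads to the primary OSD of each PG, which is exactly why putting the primary on SSD helps read latency.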

Re: [ceph-users] How does a client choose among replicas?

2014-02-11 Thread Wido den Hollander
On 02/11/2014 10:06 AM, JinHwan Hwang wrote: I thought that Ceph evenly distributes client requests among replicas, but I've seen an example rule set that assigns the primary replica to SSD and the other two to HDD. Can I assign something like read affinity for a replica? Or does Ceph always choose the pri...

[ceph-users] ceph writes data to disk at a slow speed

2014-02-11 Thread Tim Zhang
Hello, I am testing my Ceph cluster and monitoring disk utilization while writing into the cluster. First I measured that my disk can write at about 60 MB/s using fio (bs=64k, threads=128) when formatted as an XFS file system. After that I integrated it into the Ceph cluster. While writing data into an RBD image...
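For context, a fio invocation matching the parameters quoted above (bs=64k, 128 jobs) would look roughly like this; the target directory, size, and I/O engine are assumptions, since the exact command is not shown:

    # hedged reconstruction of the baseline test on the raw XFS mount
    fio --name=seqwrite --directory=/mnt/xfs-test \
        --rw=write --bs=64k --numjobs=128 --size=1G \
        --ioengine=libaio --direct=1 --group_reporting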

Re: [ceph-users] keyring generation

2014-02-11 Thread Kei.masumoto
(2014/02/10 23:33), Alfredo Deza wrote: On Sat, Feb 8, 2014 at 7:56 AM, Kei.masumoto wrote: (2014/02/05 23:49), Alfredo Deza wrote: On Mon, Feb 3, 2014 at 11:28 AM, Kei.masumoto wrote: Hi Alfredo, Thanks for your reply! I think I pasted all logs from ceph.log, but anyway, I re-executed "c

Re: [ceph-users] ceph writes data to disk at a slow speed

2014-02-11 Thread Mark Nelson
On 02/11/2014 06:10 AM, Tim Zhang wrote: Hello, I am testing my Ceph cluster and monitoring disk utilization while writing into the cluster. First I measured that my disk can write at about 60 MB/s using fio (bs=64k, threads=128) when formatted as an XFS file system. After that I integrated it into the Ceph...
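The reply itself is truncated above. Independent of it, a common way to separate RADOS-level write throughput from RBD and filesystem effects is rados bench; a sketch, assuming a scratch pool named testpool exists:

    # 60-second write test: 64 KB ops, 128 concurrent writers
    rados bench -p testpool 60 write -b 65536 -t 128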

Re: [ceph-users] keyring generation

2014-02-11 Thread Alfredo Deza
On Tue, Feb 11, 2014 at 7:57 AM, Kei.masumoto wrote: > (2014/02/10 23:33), Alfredo Deza wrote: >> On Sat, Feb 8, 2014 at 7:56 AM, Kei.masumoto wrote: >>> (2014/02/05 23:49), Alfredo Deza wrote: >>>> On Mon, Feb 3, 2014 at 11:28 AM, Kei.masumoto wrote: >>>>> Hi Alfredo, ...

[ceph-users] pg is stuck unclean since forever

2014-02-11 Thread Philipp von Strobl-Albeg
Hi Dietmar, have you changed/checked the crush map? I had the same problem with a new cluster until I edited and re-edited the crush map. Best, Philipp
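For anyone following along, the usual round trip for inspecting and hand-editing a CRUSH map is sketched below; the file names are arbitrary:

    ceph osd getcrushmap -o crush.bin      # dump the compiled map
    crushtool -d crush.bin -o crush.txt    # decompile to editable text
    $EDITOR crush.txt                      # review/adjust buckets and rules
    crushtool -c crush.txt -o crush.new    # recompile
    ceph osd setcrushmap -i crush.new      # inject the new map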

Re: [ceph-users] poor data distribution

2014-02-11 Thread Dominik Mostowiec
Hi, does this problem (with stuck active+remapped PGs after reweight-by-utilization) affect all Ceph configurations or only specific ones? If specific, what is the reason in my case? Is it caused by the CRUSH configuration (cluster architecture, CRUSH tunables, ...), cluster size, architecture desi...
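For reference, the command under discussion is shown below; the threshold argument is optional (OSDs above that percentage of the mean utilization are reweighted down). This is only a sketch of the mechanism, not a fix for the stuck PGs reported here:

    ceph osd tree                          # check current weights and layout
    ceph osd reweight-by-utilization 120   # reweight OSDs above 120% of mean utilization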

Re: [ceph-users] ceph interactive mode tab completion

2014-02-11 Thread Dan Mick
Neither bash completion nor internal completion is functioning yet, no; there was some work heading in that direction, but it was never finished. On 02/03/2014 02:28 PM, Ben Sherman wrote: Hello all, I noticed ceph has an interactive mode. I did a quick search and I don't see that tab completio...
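For anyone who has not seen it, the interactive mode in question is what you get by running the ceph tool with no arguments; commands are typed at the prompt, but as noted there is no tab completion. A short illustration, output omitted:

    $ ceph
    ceph> health
    ceph> status
    ceph> quit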

Re: [ceph-users] How does a client choose among replicas?

2014-02-11 Thread Kyle Bader
> Why would it help? Since it's not that ONE OSD will be primary for all objects. There will be 1 primary OSD per PG, and you'll probably have a couple of thousand PGs. The primary may be across an oversubscribed/expensive link, in which case a local replica with a common ancestor to the client may...
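To make the one-primary-per-PG point concrete, the acting set of any PG (the first OSD listed is the primary) can be inspected directly; the PG id below is just an example:

    ceph pg map 2.1f          # show the up/acting set for a single PG
    ceph pg dump pgs_brief    # list up/acting sets for every PG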

Re: [ceph-users] How does a client choose among replicas?

2014-02-11 Thread Gregory Farnum
There are a few more options that are in current dev releases for directing traffic to replicas, but it remains pretty specialized and probably won't be supported past the direct librados client layer for Firefly (unless somebody's prioritized it for RGW or RBD that I haven't heard about). -Greg So

[ceph-users] Planning a home ceph cluster

2014-02-11 Thread Ethan Levine
Hey all, I've been planning to build myself a server cluster as a sort of hobby project, and I've decided to use Ceph for its storage system. I have a few questions, though. My plan is to build 3 relatively dense servers (20 drive bays each) and fill each one with relatively consumer equipme...

Re: [ceph-users] poor data distribution

2014-02-11 Thread Sage Weil
On Wed, 12 Feb 2014, Dominik Mostowiec wrote: > Hi, does this problem (with stuck active+remapped PGs after > reweight-by-utilization) affect all Ceph configurations or only > specific ones? > If specific, what is the reason in my case? Is it caused by the CRUSH > configuration (cluster architectu...
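Since CRUSH tunables come up in the (truncated) question, the current profile can be inspected and changed as sketched below on reasonably recent releases; this is general reference rather than the specific advice in this reply, and switching profiles triggers data movement:

    ceph osd crush show-tunables     # inspect the current tunables
    ceph osd crush tunables optimal  # switch profiles (expect rebalancing)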

Re: [ceph-users] Planning a home ceph cluster

2014-02-11 Thread Mark Nelson
On 02/11/2014 08:03 PM, Ethan Levine wrote: Hey all, Hi! I've been planning to build myself a server cluster as a sort of hobby project, and I've decided to use Ceph for its storage system. I have a few questions, though. My plan is to build 3 relatively dense servers (20 drive bays each) a...

Re: [ceph-users] pg is stuck unclean since forever

2014-02-11 Thread Dietmar Maurer
> Have you changed/checked the crush map? The crush map is OK (not changed).

[ceph-users] Use of daily-created/-deleted pools

2014-02-11 Thread Hyangtack Lee
I'm new to Ceph and looking for new storage to replace a legacy system. My system has a lot of files that are only accessed for 2 or 3 days. Those files are uploaded from many clients every day, and a batch job deletes unused files every day. In this case, can I use Ceph pools to store the daily uploade...
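For the pattern described (one pool per day, dropped wholesale when it expires), the relevant commands are sketched below; pool names and PG counts are placeholders, and deleting a pool destroys all of its objects at once:

    # create a pool for today's uploads (pg_num/pgp_num sized for the cluster)
    ceph osd pool create uploads-20140212 128 128
    # later, drop an expired day's pool in one step
    ceph osd pool delete uploads-20140209 uploads-20140209 --yes-i-really-really-mean-it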

Re: [ceph-users] Use of daily-created/-deleted pools

2014-02-11 Thread Jean-Charles Lopez
Hi Lee, you could use a Ceph RBD device on a server and export a directory that you would have created on this RBD through NFS. Three days after the files are uploaded, you could snapshot the RBD device, delete the directory containing the files, and a week later, when you are sure you do not need the snapshot...
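The snapshot workflow described here maps onto the standard rbd commands; a sketch, with the pool, image, and snapshot names as placeholders:

    # snapshot the image backing the NFS-exported filesystem
    rbd snap create rbd/uploads@before-cleanup-20140212
    # ... delete the expired directory on the mounted filesystem ...
    # a week later, once the snapshot is no longer needed
    rbd snap rm rbd/uploads@before-cleanup-20140212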

[ceph-users] unable to start OSD

2014-02-11 Thread Dietmar Maurer
I am unable to start my OSDs on one node: > osd/PGLog.cc: 672: FAILED assert(last_e.version.version < e.version.version) Does that mean there is something wrong with my journal disk? Or why can such a thing happen? Here is the OSD log: ... 2014-02-12 07:04:39.376993 7f8236afe780 0 cls/hello/cls...

Re: [ceph-users] Use of daily-created/-deleted pools

2014-02-11 Thread Hyangtack Lee
Hi Lopez, thank you for the recommendation! I will try it. :) Regards, Hyangtack On Wed, Feb 12, 2014 at 2:28 PM, Jean-Charles Lopez wrote: > Hi Lee > > You could use a Ceph RBD device on a server and export a directory that > you would have created on this RBD through NFS. > > 3 days after t...