[ceph-users] Problems related to CephFS

2014-01-20 Thread Sherry Shahbazi
Hi guys, I have three pools: ID0-pool1, ID1-pool2, and ID2-pool3. On CephFS, I would like to map each pool to a directory, i.e. ID0-pool1 to /srv/pools/pool1, ID1-pool2 to /srv/pools/pool2, and ID2-pool3 to /srv/pools/pool3, with this command: cephfs /srv/pools/pool(pool_ID)/ set_layout -p
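
A minimal sketch of the steps implied above, assuming the legacy cephfs layout tool of that era and placeholder pool names/IDs (pool1 with ID 0); the pool must be registered as a CephFS data pool before a directory layout can point at it, and older versions of the tool may also require the -u/-c/-s stripe parameters:

    ceph osd pool create pool1 64             # create the pool (64 PGs as an example)
    ceph mds add_data_pool pool1              # register it as a CephFS data pool
    mkdir -p /srv/pools/pool1                 # directory inside the mounted CephFS
    cephfs /srv/pools/pool1 set_layout -p 0   # new files under pool1 go to pool ID 0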

[ceph-users] how does ceph handle object writes?

2014-01-20 Thread Tim Zhang
Hi guys, I wonder how Ceph stores objects. Consider the object-write process: as I understand it, the primary OSD first gets the object data from the client, stores the data in its journal (with a sync), and after that sprays the object to the other replica OSDs simultaneously, and each replica OSD will wri
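
A sketch of how to observe this from the client side, with placeholder pool/object names; the rados put returns only after the write has been acknowledged by the whole acting set, and ceph osd map shows which OSDs that set contains:

    rados -p rbd put demo-obj ./payload.bin   # client write goes to the primary OSD first
    ceph osd map rbd demo-obj                 # prints the PG and the up/acting OSDs; the first listed is the primary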

Re: [ceph-users] Question about CRUSH object placement

2014-01-20 Thread Sage Weil
On Mon, 20 Jan 2014, Arnulf Heimsbakk wrote: > Hi, > > I'm trying to understand the CRUSH algorithm and how it distributes data. > Let's say I simplify a small datacenter setup and map it up > hierarchically in the crush map as shown below. > > [truncated ASCII tree: root -> datacenter -> ...]

[ceph-users] Question about CRUSH object placement

2014-01-20 Thread Arnulf Heimsbakk
Hi, I'm trying to understand the CRUSH algorithm and how it distributes data. Let's say I simplify a small datacenter setup and map it up hierarchically in the crush map as shown below. [truncated ASCII tree: root -> datacenter -> ... -> a, b]
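
A sketch of how such a hierarchy can be checked without a live cluster, assuming the compiled CRUSH map is in ./mymap; crushtool replays the placement algorithm and prints where each input would land:

    crushtool -d mymap -o mymap.txt                         # decompile the map to review the tree
    crushtool -i mymap --test --num-rep 2 --show-mappings   # simulate placements for 2 replicas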

Re: [ceph-users] Ceph cluster is unreachable because of authentication failure

2014-01-20 Thread Sage Weil
On Sun, 19 Jan 2014, Guang wrote: > Thanks Sage. > > I just captured part of the log (it was growing fast); the process did > not hang, but I saw the same pattern repeatedly. Should I increase the > log level and send it over email (it reproduces consistently)? Sure! A representative fragment of the
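
A sketch of one way to raise the relevant log levels for an authentication problem, assuming the monitors are the affected daemons; these are standard debug settings in ceph.conf, picked up on daemon restart:

    [mon]
        debug mon = 20
        debug auth = 20
        debug ms = 1

On a running daemon the same levels can be injected through the local admin socket (ceph --admin-daemon /var/run/ceph/ceph-mon.<id>.asok config set debug_mon 20), which works even when cluster authentication is broken.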

Re: [ceph-users] query regarding replicas of data

2014-01-20 Thread kiriti krishna
Hi, I have created an RBD image, mounted it, and started using it as a raw device. Now, in Ceph all data is replicated. Does Ceph replicate each and every file I create on the RBD, or does it replicate the RBD image only? Also, is there any command to find out where these replicas are stored? Thanks and
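
A sketch of how to answer the second question, with placeholder names; an RBD image is striped across many RADOS objects, and it is each of those objects that gets replicated, so replica locations are queried per object:

    rados -p rbd ls                   # list the RADOS objects backing images in the rbd pool
    ceph osd map rbd <object-name>    # prints the PG and the up/acting OSD set holding its replicas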

Re: [ceph-users] CephFS or RadosGW

2014-01-20 Thread Sage Weil
On Mon, 20 Jan 2014, Ara Sadoyan wrote: > Hi list, > > We are in the process of developing a custom storage application on top of Ceph. > We will store all metadata in Solr for fast path searches and fetch files > only by calling direct links. > My question is: which one is the better solution for that

[ceph-users] CephFS or RadosGW

2014-01-20 Thread Ara Sadoyan
Hi list, We are in the process of developing a custom storage application on top of Ceph. We will store all metadata in Solr for fast path searches and will fetch files only by calling direct links. My question is: which one is the better solution for that purpose, CephFS or RadosGW? Which one will
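
A sketch of the "direct link" pattern with RadosGW, assuming an S3-style setup with placeholder bucket and host names; each object is addressable over plain HTTP, so the URL itself can be the value stored in Solr (this works as-is for objects with a public-read ACL; private objects need signed URLs):

    # upload via any S3-compatible client, then fetch the object directly
    curl -o file.bin http://radosgw.example.com/mybucket/path/to/file.bin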

Re: [ceph-users] One OSD always dying

2014-01-20 Thread Rottmann, Jonas (centron GmbH)
Hi, Really, no one has an idea? For the moment I have set the noout flag, and the one OSD stopped dying, so I have a working cluster. At the moment two OSDs are down, because I tried to mark the dying one as lost and let it recover to the other OSDs. But that did no good; I just ran into this problem with
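
The commands implied above, as a sketch with a placeholder OSD id; noout keeps CRUSH from rebalancing while the OSD flaps, and marking an OSD lost tells the cluster to give up on its copy and recover from the surviving replicas:

    ceph osd set noout                        # stop the cluster marking down OSDs out
    ceph osd lost 12 --yes-i-really-mean-it   # declare osd.12 (placeholder id) permanently lost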