Hi guys,
I have three pools: ID0-pool1, ID1-pool2 and ID2-pool3. On CephFS, I would like
to assign each pool to a directory, i.e. ID0-pool1 to /srv/pools/pool1,
ID1-pool2 to /srv/pools/pool2 and ID2-pool3 to /srv/pools/pool3,
with this command: cephfs /srv/pools/pool(pool_ID)/ set_layout -p
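A minimal sketch of the usual approach, assuming a CephFS mount under /srv and that the extra pools still have to be registered as data pools; the pool IDs, names and paths come from the question, everything else is an assumption:

  # Let the filesystem use the pool as a data pool (repeat per pool;
  # newer releases use "ceph fs add_data_pool <fsname> <pool>" instead).
  ceph mds add_data_pool 0

  # Point the directory's layout at the pool; files created below it are
  # then stored in that pool. Some versions of the cephfs tool insist on
  # the full layout (-u/-c/-s stripe settings) being given as well.
  mkdir -p /srv/pools/pool1
  cephfs /srv/pools/pool1 set_layout -p 0

  # On newer clients the same can be done with an extended attribute:
  setfattr -n ceph.dir.layout.pool -v pool1 /srv/pools/pool1

  # Repeat for pool2 (ID 1) -> /srv/pools/pool2 and pool3 (ID 2) -> /srv/pools/pool3.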
Hi guys,
I wonder how Ceph stores objects. Consider the object write process: as I
understand it, the primary OSD first gets the object data from the client,
stores the data in its journal (with a sync), and after that the primary OSD
sends the object to the other replica OSDs simultaneously, and each replica OSD will wri
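That matches Ceph's primary-copy replication in outline: the client writes only to the primary OSD of the placement group, the primary journals the write and forwards it to the replicas in parallel, and it acknowledges the client once the copies are safe. A quick way to see which OSDs an object lands on, assuming a pool named rbd and osd.0 (the object name is just an example):

  # Show the placement group and the OSDs (primary first) for an object.
  ceph osd map rbd my-test-object

  # Inspect the journal settings an OSD is running with.
  ceph --admin-daemon /var/run/ceph/ceph-osd.0.asok config show | grep journal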
On Mon, 20 Jan 2014, Arnulf Heimsbakk wrote:
> Hi,
>
> I'm trying to understand the CRUSH algorithm and how it distributes data.
> Let's say I simplify a small datacenter setup and map it up
> hierarchically in the crush map as shown below.
>
>      root datacenter
>          /        \
Hi,
I'm trying to understand the CRUSH algorithm and how it distributes data.
Let's say I simplify a small datacenter setup and map it up
hierarchically in the crush map as shown below.

      root datacenter
           /\
          /  \
         /    \
        a      b
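For experimenting with how CRUSH spreads data over a hierarchy like this, one option is to pull the compiled map out of the cluster and replay placements offline; the rule number and replica count below are assumptions:

  # Extract and decompile the cluster's current CRUSH map.
  ceph osd getcrushmap -o crushmap.bin
  crushtool -d crushmap.bin -o crushmap.txt

  # Simulate placements: map a range of inputs through rule 0 with
  # 2 replicas and print how evenly the devices are used.
  crushtool -i crushmap.bin --test --rule 0 --num-rep 2 --show-utilization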
On Sun, 19 Jan 2014, Guang wrote:
> Thanks Sage.
>
> I just captured part of the log (it was growing fast); the process did
> not hang, but I saw the same pattern repeatedly. Should I increase the
> log level and send it over email (it reproduces constantly)?
Sure! A representative fragment of the
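(For reference, debug logging on a running daemon can be raised on the fly; osd.0 and the levels shown here are only examples:)

  # Bump debug levels on one OSD without restarting it.
  ceph tell osd.0 injectargs '--debug-osd 20 --debug-ms 1'

  # Or, on the node itself, via the daemon's admin socket.
  ceph --admin-daemon /var/run/ceph/ceph-osd.0.asok config set debug_osd 20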
Hi,
I have created an RBD image, mapped it, and started using it as a raw
device. In Ceph all data is replicated. Does Ceph replicate each and
every file I create on the RBD, or does it replicate the RBD image
only? Also, is there any command to find out where these replicas are
stored?
Thanks and
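RBD images are striped over many RADOS objects, and it is those objects (not the files inside whatever filesystem sits on the image) that are replicated according to the pool's size setting. A rough way to locate the copies of one backing object; the image and object names below are only placeholders:

  # The image's data objects share the prefix shown as block_name_prefix.
  rbd info myimage

  # List the backing objects and ask where one of them is placed;
  # the acting set lists the OSDs holding the replicas, primary first.
  rados -p rbd ls | grep rb.0.
  ceph osd map rbd rb.0.1234.000000000000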
On Mon, 20 Jan 2014, Ara Sadoyan wrote:
> Hi list,
>
> We are in the process of developing a custom storage application on top of Ceph.
> We will store all metadata in Solr for fast path searches and fetch files
> only by calling direct links.
> My question is: which is the better solution for that
Hi list,
We are in the process of developing a custom storage application on top of Ceph.
We will store all metadata in Solr for fast path searches and fetch
files only by calling direct links.
My question is: which is the better solution for that purpose, CephFS
or RadosGW?
Which one will
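For what it's worth, with RadosGW a "direct link" is simply an HTTP URL served by the gateway's S3/Swift API, whereas with CephFS you would have to put your own HTTP layer in front of the mounted filesystem. A rough sketch using s3cmd; the bucket, file and hostname are made up:

  # Upload an object through the S3 API exposed by radosgw.
  s3cmd mb s3://mybucket
  s3cmd put report.pdf s3://mybucket/report.pdf

  # Made public-readable, it can then be fetched with a plain link such as
  #   http://radosgw.example.com/mybucket/report.pdf
  s3cmd setacl --acl-public s3://mybucket/report.pdf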
Hi,
Really no one has an idea? At the moment I have set the noout flag and the one OSD
stopped dying, so I have a working cluster. Right now two OSDs are down
because I tried to mark the dying one as lost and let its data recover to other OSDs.
But that does no good; I just run into this problem with
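(For anyone following along, these are the commands being referred to; osd.12 is just a placeholder ID:)

  # Prevent the cluster from marking down OSDs out (stops re-replication).
  ceph osd set noout

  # When giving up on a broken OSD: take it out and declare it lost so
  # recovery can proceed from the remaining copies.
  ceph osd out 12
  ceph osd lost 12 --yes-i-really-mean-it

  # Remember to clear the flag again afterwards.
  ceph osd unset noout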