Yes, it's the total across all nodes, e.g. node1: 4[TB], node2: 4[TB], node3: 4[TB] :)
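
For what it's worth, a rough sketch of the arithmetic discussed below (my own
illustration; the 10 TB of data, the replica count and the 0.85 nearfull ratio
are just the values from this thread):

    # Raw capacity needed so that `usable_tb` of data stays below the
    # nearfull threshold. The raw capacity is summed across the whole
    # cluster, so it can be spread over any number of nodes.
    def raw_capacity_needed_tb(usable_tb, replicas=1, nearfull_ratio=0.85):
        return usable_tb * replicas / nearfull_ratio

    print(raw_capacity_needed_tb(10))              # ~11.8 TB, e.g. 3 nodes x 4 TB each
    print(raw_capacity_needed_tb(10, replicas=2))  # ~23.5 TB (~24 TB) raw with 2 replicas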
On Aug 22, 2014, at 12:53, "idzzy" <idez...@gmail.com> wrote:

> Hi Irek,
>
> Understood.
>
> Let me ask about only this.
>
> > No, it's for the entire cluster.
>
> Does this mean that the total disk size of all nodes combined should be over
> 11.8 TB?
> e.g. node1: 4[TB], node2: 4[TB], node3: 4[TB]
>
> and not 11.8 TB on each node?
> e.g. node1: 11.8[TB], node2: 11.8[TB], node3: 11.8[TB]
>
> Thank you.
>
>
> On August 22, 2014 at 5:06:02 PM, Irek Fasikhov (malm...@gmail.com) wrote:
>
> I recommend you use replication, because radosgw uses asynchronous
> replication.
>
> Yes, divided by the nearfull ratio.
> No, it's for the entire cluster.
>
>
> 2014-08-22 11:51 GMT+04:00 idzzy <idez...@gmail.com>:
>
>>  Hi,
>>
>>  If replication is not used, is it enough to just divide by nearfull_ratio?
>>  (does only radosgw support replication?)
>>
>> 10 TB / 0.85 = 11.8 TB for each node?
>>
>>  # ceph pg dump | egrep "full_ratio|nearfull_ratio"
>>  full_ratio 0.95
>> nearfull_ratio 0.85
>>
>>  Sorry, I'm not familiar with the Ceph architecture.
>>  Thanks for the reply.
>>
>>  —
>>  idzzy
>>
>> On August 22, 2014 at 3:53:21 PM, Irek Fasikhov (malm...@gmail.com)
>> wrote:
>>
>>  Hi.
>>
>> 10 TB * 2 / 0.85 ≈ 24 TB with two replicas, i.e. the total volume needed for the raw data.
>>
>>
>>
>>
>
>
> --
> Best regards, Фасихов Ирек Нургаязович
> Mobile: +79229045757
>
>
_______________________________________________
ceph-users mailing list
ceph-users@lists.ceph.com
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com
