Hi,

Writes will be distributed across the cluster in 4MB chunks (the default
object size of a format 1 RBD image).
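So yes, a single large write naturally fans out over multiple hosts. A
rough worked example (the 128MB size is only for illustration):

    128MB sequential write / 4MB per object = 32 distinct RADOS objects
    each object maps to a placement group, and CRUSH spreads those PGs
    across your OSDs, so the write is serviced by several hosts in parallel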
Format 2 images are not fully supported by krbd yet, but format 2 lets you
customize the object size and the striping.
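For example, something like this creates a format 2 image with a custom
object size and striping (the pool/image name and the values are only
placeholders; check rbd(8) on your release for the exact options):

    rbd create mypool/myimage --size 102400 --image-format 2 \
        --order 23 --stripe-unit 65536 --stripe-count 8

That gives 8MB objects (order 23) striped in 64KB units across 8 objects
at a time. As far as I know krbd will refuse to map an image with
non-default striping, so this mostly helps librbd clients (QEMU, etc.).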

You should go with one of the following (rough throughput numbers below):
- SATA SSDs on 6 Gbit/s controllers
- or SAS SSDs on 12 Gbit/s controllers (more expensive)
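Back-of-the-envelope, assuming ~20% 8b/10b line-encoding overhead on the
link:

    SATA 3 Gbit/s  -> ~300 MB/s usable (a single ~300 MB/s SSD saturates it)
    SATA 6 Gbit/s  -> ~600 MB/s usable
    SAS 12 Gbit/s  -> ~1200 MB/s usable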



Florent Monthel

> On 2 Feb 2015, at 18:29, mad Engineer <themadengin...@gmail.com> wrote:
> 
> Thanks Florent,
> can Ceph distribute writes to multiple hosts?
> 
> On Mon, Feb 2, 2015 at 10:17 PM, Florent MONTHEL <fmont...@flox-arts.net> 
> wrote:
>> Hi Mad
>> 
>> 3 Gbps, so you will have SATA SSDs?
>> I think you should use 6 Gbps controllers to make sure you don't hit SATA
>> limitations.
>> Thanks
>> 
>> Sent from my iPhone
>> 
>>> On 2 Feb 2015, at 09:27, mad Engineer <themadengin...@gmail.com> wrote:
>>> 
>>> I am trying to create a 5-node cluster using 1 TB SSD disks, with 2 OSDs
>>> on each server. Each server will have a 10G NIC.
>>> The SSD disks are of good quality and, per the label, can support ~300 MB/s.
>>> 
>>> What are the limiting factors that prevent utilizing the full speed
>>> of the SSD disks?
>>> 
>>> The disk controllers are 3 Gbps, so if I am not wrong that is the maximum
>>> I can achieve per host. Can Ceph distribute writes in parallel and
>>> overcome this 3 Gbps controller limit, and thus fully utilize the
>>> capability of the SSD disks?
>>> 
>>> I have a working 3-node Ceph setup deployed with ceph-deploy, running the
>>> latest Firefly and a 3.16 kernel, but this is on low-quality SATA disks
>>> and I am planning to upgrade to SSDs.
>>> 
>>> Can someone please help me understand this better?
>>> 
>>> Thanks

_______________________________________________
ceph-users mailing list
ceph-users@lists.ceph.com
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com
