> On 20 December 2016 at 3:24, Gerald Spencer <ger.spenc...@gmail.com> wrote:
> 
> 
> Hello all,
> 
> We're currently waiting on a delivery of equipment for a small 50TB proof
> of concept cluster, and I've been lurking/learning a ton from you. Thanks
> for how active everyone is.
> 
> Question(s):
> How does the RADOS gateway work exactly?

The RGW doesn't do any RAID. It stripes larger objects into smaller RADOS 
objects: the first chunk is 512KB (IIRC), and the remainder is striped into 
4MB RADOS objects.
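To illustrate the striping described above, here is a minimal sketch. The 512KB/4MB sizes are the ones mentioned here from memory; the real values are configurable in RGW, so treat these constants as assumptions:

```python
# Hypothetical sketch of RGW striping an S3 object into RADOS objects.
# Sizes below match the ones mentioned above (IIRC); actual values are
# tunable via RGW configuration.
HEAD_SIZE = 512 * 1024          # first (head) chunk: 512 KiB
STRIPE_SIZE = 4 * 1024 * 1024   # subsequent chunks: 4 MiB

def rados_chunks(object_size):
    """Return the list of RADOS chunk sizes for a given object size."""
    chunks = []
    remaining = object_size
    # Head object first
    chunks.append(min(remaining, HEAD_SIZE))
    remaining -= chunks[-1]
    # Then 4 MiB tail stripes until the object is consumed
    while remaining > 0:
        chunks.append(min(remaining, STRIPE_SIZE))
        remaining -= chunks[-1]
    return chunks

# A 10 MiB object -> [512 KiB, 4 MiB, 4 MiB, 1.5 MiB]
```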

> Does it introduce a single point of failure?

It does if you deploy only one RGW. Always deploy multiple RGWs with load 
balancing in front.

> Does all of the traffic go through the host running the rgw server?

Yes it does.

> 
> I just don't fully understand that side of things. As for architecture our
> poc will have:
> - 1 monitor
> - 4 OSDs with 12 x 6TB drives, 1 x 800 PCIe journal
> 

Those machines are underscaled; go for fewer disks per machine but more 
machines. Many smaller machines work a lot better with Ceph than a few big 
ones.

> If all goes as planned, this will scale up to:
> - 3 monitors

Always run with 3 MONs. Otherwise the monitor is a serious SPOF.

> - 48 osds
> 
> This should give us enough storage (~1.2PB) with enough throughput to handle
> the data requirements of our machines to saturate our 100Gb link...
>

That won't happen with just 4 machines, especially when you take the 3x 
replication into account as well. You will need a lot more machines to get 
the 100Gb link fully utilized.
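As a back-of-the-envelope check of the ~1.2PB figure: it only works out if "48 OSDs" actually means 48 OSD hosts with the same 12 x 6TB layout (an assumption here, not something the mail states):

```python
# Hypothetical capacity check, assuming "48 OSDs" means 48 OSD hosts,
# each with the 12 x 6TB layout described above.
hosts = 48
drives_per_host = 12
drive_tb = 6
replicas = 3   # default 3x replication

raw_tb = hosts * drives_per_host * drive_tb   # 3456 TB raw
usable_pb = raw_tb / replicas / 1000          # ~1.15 PB usable
print(round(usable_pb, 2))                    # -> 1.15
```

With literally 48 OSD daemons (4 hosts x 12 drives) the raw capacity would only be 288TB, i.e. ~96TB usable at 3x replication, so the host interpretation is the one that matches ~1.2PB.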

Wido
 
> Cheers,
> G
> _______________________________________________
> ceph-users mailing list
> ceph-users@lists.ceph.com
> http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com
