Re: [ceph-users] Large storage nodes - best practices

2013-08-06 Thread Gilles Mocellin
On 06/08/2013 02:57, James Harper wrote: In the previous email, you are forgetting RAID1 has a write penalty of 2, since it is mirroring, and now we are talking about different types of RAID and nothing really to do with Ceph. One of the main advantages of Ceph is to have data replicated so you...

Re: [ceph-users] Large storage nodes - best practices

2013-08-05 Thread James Harper
> In the previous email, you are forgetting RAID1 has a write penalty of 2, since it is mirroring, and now we are talking about different types of RAID and nothing really to do with Ceph. One of the main advantages of Ceph is to have data replicated so you don't have to do RAID to that degree...

Re: [ceph-users] Large storage nodes - best practices

2013-08-05 Thread Scottix
In the previous email, you are forgetting RAID1 has a write penalty of 2, since it is mirroring, and now we are talking about different types of RAID and nothing really to do with Ceph. One of the main advantages of Ceph is to have data replicated so you don't have to do RAID to that degree. I am su...
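
A rough way to see the stacking effect described above is to multiply the write penalties together. The Python sketch below is illustrative arithmetic only; the replica counts and RAID factors are assumptions, not figures from this thread.

# Rough write-amplification arithmetic for the RAID-vs-replication point above.
# Assumed values for illustration; real behaviour also depends on journals,
# controllers and workload.

def write_amplification(ceph_replicas: int, raid_penalty: int) -> int:
    """Physical writes per client write: Ceph copies times RAID copies."""
    return ceph_replicas * raid_penalty

# One OSD per raw disk, 2x Ceph replication: 2 physical writes per client write.
print(write_amplification(ceph_replicas=2, raid_penalty=1))  # -> 2

# RAID1 (write penalty 2) under each OSD, plus 2x Ceph replication: 4 writes.
print(write_amplification(ceph_replicas=2, raid_penalty=2))  # -> 4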

Re: [ceph-users] Large storage nodes - best practices

2013-08-05 Thread James Harper
> I am looking at evaluating Ceph for use with large storage nodes (24-36 SATA disks per node, 3 or 4 TB per disk, HBAs, 10G Ethernet). What would be the best practice for deploying this? I can see two main options. (1) Run 24-36 OSDs per node. Configure Ceph to replicate data to one o...
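
For a back-of-the-envelope feel of how much usable space a layout like option (1) yields, the sketch below uses assumed figures for illustration (36 disks of 4 TB and the replica counts shown); the RAID1 row is only there to echo the RAID discussion elsewhere in the thread, not a recommendation.

# Back-of-the-envelope usable capacity per node for a chassis like the one
# described above. All figures are assumptions for illustration.

DISKS_PER_NODE = 36
DISK_TB = 4.0

def usable_tb(ceph_replicas: int, raid_copies: int = 1) -> float:
    """Raw capacity divided by every layer that stores an extra copy."""
    raw = DISKS_PER_NODE * DISK_TB
    return raw / (ceph_replicas * raid_copies)

print(usable_tb(ceph_replicas=2))                 # one OSD per disk, 2x replication -> 72.0
print(usable_tb(ceph_replicas=3))                 # one OSD per disk, 3x replication -> 48.0
print(usable_tb(ceph_replicas=2, raid_copies=2))  # RAID1 pairs under the OSDs too -> 36.0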

Re: [ceph-users] Large storage nodes - best practices

2013-08-05 Thread Mike Dawson
On 8/5/2013 12:51 PM, Brian Candler wrote: On 05/08/2013 17:15, Mike Dawson wrote: Short answer: Ceph generally is used with multiple OSDs per node. One OSD per storage drive with no RAID is the most common setup. At 24- or 36-drives per chassis, there are several potential bottlenecks to consider...

Re: [ceph-users] Large storage nodes - best practices

2013-08-05 Thread Brian Candler
On 05/08/2013 17:15, Mike Dawson wrote: Short answer: Ceph generally is used with multiple OSDs per node. One OSD per storage drive with no RAID is the most common setup. At 24- or 36-drives per chassis, there are several potential bottlenecks to consider. Mark Nelson, the Ceph performance...

Re: [ceph-users] Large storage nodes - best practices

2013-08-05 Thread Mike Dawson
Brian, Short answer: Ceph generally is used with multiple OSDs per node. One OSD per storage drive with no RAID is the most common setup. At 24- or 36-drives per chassis, there are several potential bottlenecks to consider. Mark Nelson, the Ceph performance guy at Inktank, has published several...
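
One of the bottlenecks hinted at above is easy to estimate: the aggregate streaming bandwidth of 24-36 spindles against a single 10GbE link. The per-disk figure in the Python sketch below is an assumed ballpark, not a number from Mark Nelson's published results.

# Quick estimate of where a dense chassis may bottleneck: spindles vs. network.
# ~100 MB/s sequential per SATA disk is an assumed ballpark, not a measurement.

DISK_MBPS = 100            # assumed per-disk sequential throughput, MB/s
TEN_GBE_MBPS = 10000 / 8   # one 10 Gbit/s link in MB/s, ignoring overhead

for disks in (24, 36):
    aggregate = disks * DISK_MBPS
    print(f"{disks} disks: ~{aggregate} MB/s aggregate vs "
          f"~{TEN_GBE_MBPS:.0f} MB/s on one 10GbE link "
          f"({aggregate / TEN_GBE_MBPS:.1f}x the link)")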