On 06/08/2013 02:57, James Harper wrote:
In the previous email, you are forgetting that RAID 1 has a write penalty
of 2, since it is mirroring; and now we are talking about different types
of RAID, which really has nothing to do with Ceph. One of the main
advantages of Ceph is that it replicates data itself, so you don't have to
do RAID to that degree. I am su…
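To ground that point, here is a minimal sketch of where the replication
level lives in Ceph itself rather than in a RAID layer (the pool name
"rbd" and the count of 3 are illustrative assumptions, not from this
thread):

    # ceph.conf: default replica count for newly created pools (assumed value)
    [global]
        osd pool default size = 3

    # or adjust an existing pool at runtime; "rbd" is just an example pool
    ceph osd pool set rbd size 3

By the same accounting as RAID 1's penalty of 2, size = 3 means each
client write becomes three OSD writes, but the copies land on different
hosts rather than on a second disk in the same box.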
On 8/5/2013 12:51 PM, Brian Candler wrote:
> I am looking at evaluating ceph for use with large storage nodes (24-36 SATA
> disks per node, 3 or 4TB per disk, HBAs, 10G ethernet).
>
> What would be the best practice for deploying this? I can see two main
> options.
>
> (1) Run 24-36 osds per node. Configure ceph to replicate data to one o…
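For option (1), which host each replica lands on is decided by the CRUSH
map rather than by the node itself. A minimal sketch of a crushmap rule
that pins each replica to a different host (the rule name is an
illustrative assumption; "chooseleaf ... type host" is the standard idiom):

    rule replicated_across_hosts {
        ruleset 1
        type replicated
        min_size 1
        max_size 10
        step take default
        step chooseleaf firstn 0 type host
        step emit
    }

With replicas separated by host, losing an entire 24-36 drive chassis
costs at most one copy of any given object.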
On 05/08/2013 17:15, Mike Dawson wrote:
Brian,
Short answer: Ceph generally is used with multiple OSDs per node. One
OSD per storage drive with no RAID is the most common setup. At 24 or
36 drives per chassis, there are several potential bottlenecks to consider.

Mark Nelson, the Ceph performance guy at Inktank, has published several…
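For the one-OSD-per-drive layout, a minimal sketch using the ceph-deploy
tool of that era (the hostname "node1" and device names are assumptions):

    # one OSD per data disk, journal co-located on the same disk
    ceph-deploy osd create node1:sdb
    ceph-deploy osd create node1:sdc
    # ...repeat for each of the 24-36 data drives

As for the bottlenecks, a rough back-of-the-envelope (throughput figures
are assumed, not from this thread):

    36 drives x ~100 MB/s ~= 3.6 GB/s aggregate disk bandwidth
    10G Ethernet          ~= 1.25 GB/s line rate

so a single 10G link saturates well before 36 spindles do, even before
counting replication traffic between nodes.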