Hi Jerker,

Thanks for the reply.

The link you posted describes only object storage. I need information on how
RAID levels are implemented for block devices.


Thanks
Kumar

-----Original Message-----
From: Jerker Nyberg [mailto:jer...@update.uu.se] 
Sent: Friday, May 16, 2014 2:43 PM
To: Gnan Kumar, Yalla
Cc: ceph-users@lists.ceph.com
Subject: Re: [ceph-users] raid levels (Information needed)


I would say the levels of redundancy could roughly be translated like this.

  RAID0          one replica (size=1)
  RAID1          two replicas (size=2)
  RAID10         two replicas (size=2)
  RAID5          erasure coding (erasure-code-m=1)
  RAID6          erasure coding (erasure-code-m=2)
  RAIDZ3         erasure coding (erasure-code-m=3)
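For reference, pools matching the replicated and erasure-coded rows above could
be created roughly as follows (pool names, PG counts, and the profile name are
placeholders; check the docs linked below for the exact syntax in your Ceph
release):

```shell
# Replicated pool with two copies (RAID1/RAID10-like):
ceph osd pool create rbd-repl 128
ceph osd pool set rbd-repl size 2

# Erasure-code profile with k=4 data chunks and m=2 coding chunks
# (RAID6-like), then an erasure-coded pool using that profile:
ceph osd erasure-code-profile set raid6like k=4 m=2
ceph osd pool create ecpool 128 128 erasure raid6like
```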

Read more here:

http://ceph.com/docs/master/rados/operations/pools/

A seven-disk RAID6 (4 data, 2 parity, and 1 hot spare) would then be similar to 
a Ceph erasure-coded pool on seven OSDs with erasure-code-k=4 and 
erasure-code-m=2. (Ceph has no dedicated hot spares; the extra OSD simply 
provides spare capacity that recovery can rebalance onto.)
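As a back-of-the-envelope check (just the arithmetic behind the comparison, not
Ceph code): a replicated pool keeps 1/size of raw capacity usable, while an
erasure-coded pool keeps k/(k+m).

```python
# Sketch: usable-capacity fractions for replication vs. erasure coding.

def replicated_usable_fraction(size: int) -> float:
    """Usable fraction of raw capacity with `size` replicas."""
    return 1.0 / size

def ec_usable_fraction(k: int, m: int) -> float:
    """Usable fraction with k data chunks and m coding chunks."""
    return k / (k + m)

# size=2 replication (RAID1-like): half the raw capacity is usable.
print(replicated_usable_fraction(2))   # 0.5

# k=4, m=2 erasure coding (RAID6-like): two thirds is usable.
print(ec_usable_fraction(4, 2))
```

So for the same redundancy against two failures, k=4/m=2 erasure coding is
noticeably more space-efficient than size=3 replication would be.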

Kind regards,
Jerker Nyberg.


On Fri, 16 May 2014, yalla.gnan.ku...@accenture.com wrote:

> Hi All,
>
> What kinds of RAID levels does Ceph provide for block device storage?
>
> Thanks
> Kumar
>
>


_______________________________________________
ceph-users mailing list
ceph-users@lists.ceph.com
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com