s/aren't/are/  :)
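
To spell out the recovery steps David describes: every OSD whose block.db
or block.wal sat on the failed SSD has to be removed and redeployed once
the SSD is replaced. A rough sketch (assuming ceph-volume lvm deployments;
osd id 10 and the device names below are placeholders, repeat for each
affected OSD):

  # take the OSD out and remove it from the cluster
  ceph osd out 10
  systemctl stop ceph-osd@10
  ceph osd purge 10 --yes-i-really-mean-it

  # wipe the data disk, then redeploy against the new SSD partitions
  ceph-volume lvm zap /dev/sdb
  ceph-volume lvm create --bluestore --data /dev/sdb \
      --block.db /dev/nvme0n1p1 --block.wal /dev/nvme1n1p1

The objects are then rebuilt from the replicas on the other nodes through
normal recovery/backfill.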


Kind regards,

Caspar Smit
System Engineer
SuperNAS
Dorsvlegelstraat 13
1445 PA Purmerend

t: (+31) 299 410 414
e: caspars...@supernas.eu
w: www.supernas.eu

2018-03-01 16:31 GMT+01:00 David Turner <drakonst...@gmail.com>:

> This aspect of osds has not changed from filestore with SSD journals to
> bluestore with DB and WAL on SSDs. If the SSD fails, all osds using it
> aren't lost and need to be removed from the cluster and recreated with a
> new drive.
>
> You can never guarantee data integrity on bluestore or filestore if any
> media of the osd fails completely.
>
>
> On Thu, Mar 1, 2018, 10:24 AM Hervé Ballans <herve.ball...@ias.u-psud.fr>
> wrote:
>
>> Hello,
>>
>> With Bluestore, I have a couple of questions regarding the case of
>> separate partitions for block.wal and block.db.
>>
>> Let's take the case of an OSD node that contains several OSDs (HDDs) and
>> also contains one SSD drive for storing WAL partitions and another
>> one for storing DB partitions. In this configuration, from my
>> understanding (but I may be wrong), each SSD drive appears as a SPOF for
>> the entire node.
>>
>> For example, what happens if one of the 2 SSD drives crashes (I know,
>> it's very rare but...)?
>>
>> In this case, is the bluestore data on all the OSDs of the same node
>> also lost?
>>
>> I guess so, but as a result, what is the recovery scenario? Will it be
>> necessary to entirely recreate the node (OSDs + block.wal + block.db) to
>> rebuild all the replicas from the other nodes on it?
>>
>> Thanks in advance,
>> Hervé
>>
>
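
PS: if you want to check up front which OSDs on a node share a given DB or
WAL device (and would therefore all be lost with it), ceph-volume prints
that mapping per OSD; for example (the NVMe device name is just an
illustration):

  ceph-volume lvm list /dev/nvme0n1
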
_______________________________________________
ceph-users mailing list
ceph-users@lists.ceph.com
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com
