You have to explain what you mean by "JBOD". All in one large vdisk?
Separate drives?

At the end of the day, if a device fails in a way that leaves the data housed
on that device (or array) no longer available, that HDFS storage is marked
down. HDFS then needs to create a 3rd replica. Various timers control how long
HDFS waits to see whether the device comes back online, but assume it reacts
immediately, for simplicity. Remember that a write goes to a (random) copy of
the data, and that datanode then replicates to the next node, and so forth.
The in-process-of-being-created 3rd copy will also receive those delete
"updates". Have you read up on how "deleting" a record works?

<======>
Be the reason someone smiles today.
Or the reason they need a drink.
Whichever works.

*Daemeon C.M. Reiydelle*

*email: daeme...@gmail.com*
*San Francisco 1.415.501.0198/London 44 020 8144 9872/Skype
daemeon.c.m.reiydelle*



On Tue, Aug 14, 2018 at 6:10 AM Christian Lorenz <christian.lor...@webtrekk.com>
wrote:

> Hi,
>
>
>
> given a cluster with RF=3 and CL=LOCAL_ONE where the application is deleting
> data, what happens if the nodes are set up with JBOD and one disk fails? Do
> I get consistent results while the broken drive is replaced and a nodetool
> repair is running on the node with the replaced drive?
>
>
>
> Kind regards,
>
> Christian
>