If you can handle the slower IO of S3 this can work, but you will have a
window of out-of-date images, and you don't get persistent snapshots with
this approach.
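
[A minimal sketch of what that staleness window looks like, assuming a
periodic sync loop; the data path and bucket name are hypothetical, while
aws s3 sync is the real AWS CLI command:]

    # Hypothetical sketch: periodically push sstables to S3.
    # The S3 copy is only as fresh as the last completed sync, so the
    # out-of-date window is roughly SYNC_INTERVAL_SECONDS plus the
    # time the sync itself takes.
    import subprocess
    import time

    DATA_DIR = "/var/lib/cassandra/data"   # assumed Cassandra data path
    BUCKET = "s3://my-cassandra-sync"      # hypothetical bucket name
    SYNC_INTERVAL_SECONDS = 300

    while True:
        subprocess.run(
            ["aws", "s3", "sync", DATA_DIR, BUCKET, "--delete"],
            check=True,
        )
        time.sleep(SYNC_INTERVAL_SECONDS)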

<======>
Life lived is not about the size of the dog in the fight:
It is about the size of the fight in the dog.

*Daemeon Reiydelle*

*email: daeme...@gmail.com <daeme...@gmail.com>*
*San Francisco 1.415.501.0198/Skype daemeon.c.m.reiydelle*



On Thu, Dec 5, 2019 at 2:06 PM Jon Haddad <j...@jonhaddad.com> wrote:

> You can easily do this with bcache or LVM
> http://rustyrazorblade.com/post/2018/2018-04-24-intro-to-lvm/.
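
[A rough sketch of the LVM route, assuming the ephemeral NVMe shows up as
/dev/nvme0n1 and the EBS volume as /dev/xvdf (both device names are
assumptions); the commands are standard LVM2, with the EBS volume as the
durable origin and the ephemeral drive as a writethrough cache:]

    # Sketch: EBS volume as origin LV, ephemeral NVMe as cache pool.
    # Device names are assumptions; run as root.
    import subprocess

    CMDS = [
        "pvcreate /dev/xvdf /dev/nvme0n1",
        "vgcreate cass_vg /dev/xvdf /dev/nvme0n1",
        # durable origin LV on the EBS volume
        "lvcreate -n data -l 100%PVS cass_vg /dev/xvdf",
        # cache pool on the fast ephemeral drive
        "lvcreate --type cache-pool -n cachepool -l 100%PVS "
        "cass_vg /dev/nvme0n1",
        # attach the cache; writethrough keeps the EBS copy consistent
        "lvconvert -y --type cache --cachepool cass_vg/cachepool "
        "--cachemode writethrough cass_vg/data",
    ]

    for cmd in CMDS:
        subprocess.run(cmd.split(), check=True)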
>
> Medusa might be a good route to go down if you want to do backups instead:
> https://thelastpickle.com/blog/2019/11/05/cassandra-medusa-backup-tool-is-open-source.html
>
>
>
> On Thu, Dec 5, 2019 at 12:21 PM Carl Mueller
> <carl.muel...@smartthings.com.invalid> wrote:
>
>> Does anyone have experience with tooling written to support this strategy:
>>
>> Use case: run Cassandra on i3 instances on ephemerals, but synchronize the
>> sstables and commitlog files to the cheapest EBS volume type (those have
>> bad IOPS but decent enough throughput)
>>
>> On node replace, the startup script for the node back-copies the
>> sstables and commitlog state from the EBS to the ephemeral.
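
[A minimal sketch of that startup back-copy, assuming the EBS copy is
mounted at /mnt/ebs/cassandra and the ephemeral data directory is
/var/lib/cassandra (both paths are assumptions):]

    # Sketch: on node replacement, restore state from the EBS copy to
    # the ephemeral disk before starting Cassandra.
    import subprocess

    EBS_COPY = "/mnt/ebs/cassandra"     # assumed EBS mount point
    EPHEMERAL = "/var/lib/cassandra"    # assumed ephemeral data dir

    for subdir in ("data", "commitlog"):
        subprocess.run(
            ["rsync", "-a", f"{EBS_COPY}/{subdir}/", f"{EPHEMERAL}/{subdir}/"],
            check=True,
        )

    subprocess.run(["systemctl", "start", "cassandra"], check=True)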
>>
>> As can be seen:
>> https://docs.aws.amazon.com/AWSEC2/latest/UserGuide/EBSVolumeTypes.html
>>
>> the (presumably) spinning rust tops out at 2,375 MB/sec (presumably
>> using multiple EBS volumes). That would incur about a ten-minute delay
>> for node replacement on a 1TB node, but I imagine this would only be
>> used on higher-IOPS r/w nodes with smaller densities, so 100GB would be
>> only about a minute of delay, already within the timeframe of an AWS
>> node replacement/instance restart.
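
[A quick back-of-the-envelope check of those delay estimates at the quoted
2,375 MB/sec:]

    # Rough restore-time arithmetic at the quoted peak throughput.
    THROUGHPUT_MB_S = 2375            # aggregate figure from the thread

    for size_gb in (1000, 100):       # 1TB node vs. 100GB node
        seconds = size_gb * 1000 / THROUGHPUT_MB_S
        print(f"{size_gb}GB: ~{seconds:.0f}s (~{seconds / 60:.1f} min)")

    # 1TB   -> ~421s (~7 min), consistent with "about a ten-minute delay"
    # 100GB -> ~42s, under a minute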
>>
>>
>>
