hi Robert -

This is quite interesting. The CassandraFS project on Google Code seems
inactive now, though - I don't see any releases from it.

Do you know if Brisk is considered stable, or is it still very experimental?

thanks

Ruby


On Thu, Aug 25, 2011 at 12:44 PM, Robert Jackson
<robe...@promedicalinc.com> wrote:
> I believe this is conceptually similar to what Brisk is doing with
> CassandraFS (an HDFS-compliant file system on top of Cassandra).
>
> Robert Jackson
>
> [1] - https://github.com/riptano/brisk
> ________________________________
> From: "Sasha Dolgy" <sdo...@gmail.com>
> To: user@cassandra.apache.org
> Sent: Thursday, August 25, 2011 12:36:21 PM
> Subject: Re: Is Cassandra suitable for this use case?
>
> You can chunk the files into pieces and store the pieces in Cassandra...
> then stitch the pieces back together when delivering the file back to the
> client - roughly as in the sketch below.
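>
> A minimal sketch of that chunking approach (the keyspace and column family
> names and the pycassa client are only assumptions for illustration, not
> something Brisk or Cassandra ships with):
>
>     import pycassa
>
>     CHUNK_SIZE = 1024 * 1024  # 1 MB of file data per column
>
>     pool = pycassa.ConnectionPool('Files', ['localhost:9160'])
>     chunks = pycassa.ColumnFamily(pool, 'file_chunks')
>
>     def store_file(file_id, path):
>         # One row per file, one column per chunk; zero-padded column
>         # names keep the chunks ordered under an ASCII/UTF8 comparator.
>         with open(path, 'rb') as f:
>             index = 0
>             while True:
>                 data = f.read(CHUNK_SIZE)
>                 if not data:
>                     break
>                 chunks.insert(file_id, {'%010d' % index: data})
>                 index += 1
>
>     def read_file(file_id, out):
>         # xget streams the columns back in comparator order, so the
>         # pieces come out in the order they were written.
>         for _name, data in chunks.xget(file_id):
>             out.write(data)
>
> Keeping chunks around 1 MB or smaller avoids any single column or row
> becoming unwieldy.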
>
> On Aug 25, 2011 6:33 PM, "Ruby Stevenson" <ruby...@gmail.com> wrote:
>> hi Evgeny
>>
>> I appreciate the input. The concern with HDFS is that it has its own
>> share of problems - its name node, which is essentially a metadata
>> server, loads all file metadata into memory (roughly 300 MB per million
>> files), and its failure handling is far less attractive ... on top of
>> configuring and maintaining two separate components and two APIs for
>> handling data. I am still holding out hope that there might be some
>> better way to go about it.
>>
>> Best Regards,
>>
>> Ruby
>>
>> On Thu, Aug 25, 2011 at 11:10 AM, Evgeniy Ryabitskiy
>> <evgeniy.ryabits...@wikimart.ru> wrote:
>>> Hi,
>>>
>>> If you want to store files with partitioning/replication, you could use
>>> a Distributed File System (DFS),
>>> like http://hadoop.apache.org/hdfs/
>>> or any other:
>>> http://en.wikipedia.org/wiki/Distributed_file_system
>>>
>>> You could still use Cassandra to store the metadata and the file path
>>> within the DFS.
>>>
>>> So: Cassandra + HDFS would be my solution.
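>>>
>>> For example, a rough sketch of that split (Cassandra keeps the metadata,
>>> HDFS keeps the bytes). The keyspace and column family names and the
>>> pycassa client here are just assumptions for illustration; writing the
>>> actual file into HDFS is not shown:
>>>
>>>     import pycassa
>>>
>>>     pool = pycassa.ConnectionPool('Files', ['localhost:9160'])
>>>     meta = pycassa.ColumnFamily(pool, 'file_meta')
>>>
>>>     # One metadata row per file; the columns point at the HDFS location.
>>>     meta.insert('report-2011-08-25.pdf', {
>>>         'hdfs_path': '/data/reports/2011/08/report-2011-08-25.pdf',
>>>         'size_bytes': '1048576',
>>>         'owner': 'ruby',
>>>     })
>>>
>>>     # Lookup is a single row read keyed by file name.
>>>     info = meta.get('report-2011-08-25.pdf')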
>>>
>>> Evgeny.
>>>
>>>
>
>
