Hi Evgeny,

I appreciate the input. The concern with HDFS is that it comes with
its own share of problems: its NameNode, which is essentially a
metadata server, loads all file metadata into memory (roughly 300 MB
per million files), and its failure handling is far less attractive,
on top of having to configure and maintain two separate components and
two APIs for handling data. I am still holding out hope that there
might be some better way to go about this?
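For a rough sense of scale, the ~300 MB per million files figure above can be turned into a back-of-envelope estimate. This is only a sketch; the constant is the number quoted in this thread, not a measured value, and real NameNode heap use also depends on block counts and directory structure:

```python
# Rough NameNode heap estimate, using the ~300 MB per million files
# figure from this thread (an assumption, not a benchmark).
MB_PER_MILLION_FILES = 300

def namenode_heap_mb(file_count):
    """Estimate NameNode heap (MB) needed for file_count files."""
    return file_count / 1_000_000 * MB_PER_MILLION_FILES

# e.g. 50 million small files would need on the order of 15 GB of heap,
# which is why HDFS is a poor fit for many-small-files workloads.
print(namenode_heap_mb(50_000_000))
```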

Best Regards,

Ruby

On Thu, Aug 25, 2011 at 11:10 AM, Evgeniy Ryabitskiy
<evgeniy.ryabits...@wikimart.ru> wrote:
> Hi,
>
> If you want to store files with partition/replication, you could use
> Distributed File System (DFS).
> Like http://hadoop.apache.org/hdfs/
> or any other:
> http://en.wikipedia.org/wiki/Distributed_file_system
>
> Still you could use Cassandra to store any metadata and filepath in DFS.
>
> So: Cassandra + HDFS would be my solution.
>
> Evgeny.
>
>
