A memory-based caching layer will be added to the frontends, so the hot
files won't really be a problem. I only mentioned it because the memory
layer might fail and result in increased load on the data source.
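
To make it concrete, this is roughly the read path I have in mind; a
minimal sketch only, assuming memcached on the frontends and the
python-memcached and (older) riak Python clients. The bucket name
"files", the key handling and CACHE_TTL are placeholders, not anything
from this thread. One caveat I'm aware of: memcached's default item
size limit is 1 MB, so 2-20 MB objects would need a raised limit or
chunking.

import memcache
import riak

mc = memcache.Client(['127.0.0.1:11211'])
client = riak.RiakClient(host='127.0.0.1', port=8098)
bucket = client.bucket('files')  # placeholder bucket name

CACHE_TTL = 300  # seconds a hot file may be served stale from memory

def get_file(key):
    # Hot objects come out of memory; memcached evicts cold entries via LRU.
    data = mc.get(key)
    if data is not None:
        return data
    # A miss (or a failed memcached node) falls through to Riak; this is the
    # "increased use of the data source" case mentioned above.
    obj = bucket.get(key)
    data = obj.get_data()  # newer clients expose this as obj.data
    if data is not None:
        mc.set(key, data, time=CACHE_TTL)
    return data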

However, I am still not sure whether it is possible to use MapReduce to
get an ordered list of the objects in a bucket by their last access
time. Also, the vclock consistency check shouldn't be performed for
certain requests: I don't update those items, and even if I did, it
wouldn't be a problem if an outdated version were served for a short
period of time. It simply consumes too many resources when you are
dealing with larger objects.
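
To illustrate what I mean, something like the following is what I'd
hope MapReduce could do. This is just a sketch: it assumes every
object's JSON value carries an application-maintained "last_accessed"
timestamp (Riak itself doesn't record access times), it uses the older
Python client, and a full-bucket MapReduce has to list every key, which
is probably too expensive with millions of objects anyway.

import riak

client = riak.RiakClient(host='127.0.0.1', port=8098)

query = riak.RiakMapReduce(client)
query.add('files')  # placeholder bucket name

# Map phase: emit [key, last_accessed] for every object in the bucket.
query.map("""
function(v) {
  var data = JSON.parse(v.values[0].data);
  return [[v.key, data.last_accessed]];
}
""")

# Reduce phase: sort ascending by last_accessed so the coldest keys come
# first and can be deleted until there is enough free space again.
query.reduce("""
function(values) {
  return values.sort(function(a, b) { return a[1] - b[1]; });
}
""")

coldest_first = query.run()

For the reads where a stale copy is acceptable, I guess lowering the
read quorum to r=1 (e.g. bucket.get(key, r=1) in the Python client)
would at least avoid waiting for all replicas, though I'm not sure that
skips the vclock handling entirely.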

On 4 March 2012 17:08, Paul Armstrong <p...@otoh.org> wrote:
> On 04/03/2012, at 2:40, Philip <flip...@googlemail.com> wrote:
>
>> I am looking for software to host millions of files in the order of
>> 2-20MB for web serving. A few (~5%) files are "hot" and accessed
>> heavily, but most of the files are cold. The system is going to work
>> like a big cache, and therefore I need to make sure that the cluster
>> won't run out of space while new files are being added. I'd like to
>> store the last time a file was accessed and delete the least recently
>> accessed files to make sure there is always 20-30% free space in the
>> cluster. This system is currently spread across a lot of different
>> standalone servers and serves a few Gbit of bandwidth.
>
> Couchbase is probably a better solution for your needs (let it evict things
> based on LRU). Another option is to put memcache in front of Riak so your
> hot objects are fast and evicted by LRU, with a large, replicated store
> behind it.
