On Tue, Oct 23, 2012 at 3:26 PM, Joshua Muzaaya <joshm...@gmail.com> wrote:
> http://couchbase.com Couchbase server has this as a configuration and
> works exactly how Riak would. However, using a different storage for
> incremental ids will present challenges. Have you carefully considered
> Couchbase, CouchDB or BigCouch?

Yes, I had gone through the Couchbase guide earlier. I started with this
link: http://kkovacs.eu/cassandra-vs-mongodb-vs-couchdb-vs-redis. My main
intention is to have a fault-tolerant system across multiple datacenters.
After going through the documentation of Riak's LevelDB backend, it seems
appealing to me. I have not tested it yet. Also, the case studies of Riak
(in general) seem promising. I haven't reviewed Couchbase properly; I will
check it out again.

Thanks,
Shashwat

> On Tue, Oct 23, 2012 at 7:32 AM, Rapsey <rap...@gmail.com> wrote:
>
>> There is also another trick you can use. Pick a number N. Assign every
>> app server you have a number between 1 and N. The number assigned to the
>> server is your starting ID, then increment by N every time you generate
>> an ID from that server. The only limitation is that you have to know in
>> advance how big N can get (it has to be larger than the number of your
>> app servers).
>>
>> Sergej
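A minimal Python sketch of the interleaved-id scheme Sergej describes
above; N, the per-server slot and the class name are illustrative
assumptions, and a real deployment would persist the counter so it
survives restarts:

    # Each app server owns one slot in 1..N and hands out ids slot,
    # slot + N, slot + 2N, ... so ids never collide across servers
    # as long as the real server count stays <= N.
    N = 16            # assumed upper bound on the number of app servers
    SERVER_SLOT = 3   # this server's assigned number, 1..N (assumption)

    class InterleavedIdAllocator(object):
        def __init__(self, n, slot):
            self.n = n
            self.next_id = slot

        def allocate(self):
            """Return the next cluster-unique id for this server."""
            current = self.next_id
            self.next_id += self.n
            return current

    alloc = InterleavedIdAllocator(N, SERVER_SLOT)
    print(alloc.allocate())  # 3
    print(alloc.allocate())  # 19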
>> On Tue, Oct 23, 2012 at 6:00 AM, Shashwat Srivastava <dark...@gmail.com> wrote:
>>
>>> Thank you Guido. Yes, a secondary index based on date would be immensely
>>> helpful for me to navigate by date. I will do this. An incremental
>>> message id would be helpful for me to get the last 50 messages and so
>>> forth. I will use another db for this. Thanks for all your help.
>>>
>>> Shashwat
>>>
>>> On Mon, Oct 22, 2012 at 2:46 PM, Guido Medina <guido.med...@temetra.com> wrote:
>>>
>>>> Don't overkill it with technology. You could use Riak with a simple 2i
>>>> index (an integer index YYYYMMDD for the message date, so you can
>>>> search day by day backward), and for the message sequence or identifier
>>>> you could use either any SQL database sequence or a UUID generator.
>>>>
>>>> HTH,
>>>>
>>>> Guido.
>>>>
>>>> On 22/10/12 10:04, Rapsey wrote:
>>>>
>>>> On Mon, Oct 22, 2012 at 10:29 AM, Shashwat Srivastava <dark...@gmail.com> wrote:
>>>>
>>>>> Now, each bucket would have the conversation between two users or of a
>>>>> room of a site. The conversation rate for (some) rooms is very high,
>>>>> some 20,000 - 30,000 messages per hour. We have observed that users
>>>>> usually don't access conversations older than one week. So, even if a
>>>>> bucket holds three years of conversation, users would mostly access
>>>>> the recent conversation, up to a week or a month back. Can Riak handle
>>>>> this easily? Also, would Riak use RAM wisely in this scenario? Would
>>>>> it only keep the keys and indexes corresponding to recent messages per
>>>>> bucket in RAM?
>>>>
>>>> Leveldb backend should.
>>>>
>>>>> Finally, what is the best approach for creating keys in a bucket?
>>>>> Earlier, I was planning to use the timestamp (in milliseconds). But in
>>>>> a room there can be multiple messages at the same time. As I
>>>>> understand it, I cannot have a unique incremental message id per
>>>>> bucket (since Riak accepts writes on all nodes in a cluster,
>>>>> consistency is not guaranteed). Please correct me if I am wrong.
>>>>> Another way could be to let Riak generate the key and use the
>>>>> timestamp as a secondary index, but this seems to be a bad design.
>>>>> Also, what would be the best way to achieve pagination for this use
>>>>> case?
>>>>
>>>> You could use redis for incremental ids.
>>>>
>>>> Sergej
>
> --
> Muzaaya Joshua
> Systems Engineer
> +256774115170
> "Through it all, I have learned to trust in Jesus. To depend upon His
> Word."
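Putting Guido's YYYYMMDD 2i suggestion together with Sergej's Redis idea, a
rough Python sketch, assuming a local Redis instance for the counter and
Riak's HTTP interface on localhost:8098 with the leveldb backend (secondary
indexes need it); the bucket, index and key names are only illustrative:

    import json
    import time

    import redis     # pip install redis
    import requests  # pip install requests

    RIAK = "http://127.0.0.1:8098"
    BUCKET = "room_42"   # one bucket per conversation, as in the thread
    r = redis.Redis()

    def store_message(author, text):
        # Conversation-scoped incremental id from Redis INCR (atomic).
        msg_id = r.incr("msgseq:" + BUCKET)
        day = int(time.strftime("%Y%m%d"))   # integer YYYYMMDD index value
        key = "%010d" % msg_id               # zero-padded so keys sort naturally
        doc = {"author": author, "text": text, "ts": int(time.time() * 1000)}
        resp = requests.put(
            "%s/buckets/%s/keys/%s" % (RIAK, BUCKET, key),
            data=json.dumps(doc),
            headers={
                "Content-Type": "application/json",
                # integer secondary index on the message date
                "x-riak-index-msgdate_int": str(day),
            },
        )
        resp.raise_for_status()
        return key

    def keys_for_days(start_day, end_day):
        # 2i range query: fetch only the keys for a day (or week) at a time
        # instead of listing the whole bucket, then walk backward by date.
        url = "%s/buckets/%s/index/msgdate_int/%d/%d" % (
            RIAK, BUCKET, start_day, end_day)
        return requests.get(url).json().get("keys", [])

    store_message("shashwat", "hello")
    print(keys_for_days(20121016, 20121023))   # keys from the last week

Because the keys are zero-padded sequence numbers, fetching the "last 50
messages" can be done by sorting one day's keys and slicing the tail,
rather than paginating over the entire bucket.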
_______________________________________________
riak-users mailing list
riak-users@lists.basho.com
http://lists.basho.com/mailman/listinfo/riak-users_lists.basho.com