On Tue, Oct 23, 2012 at 10:02 AM, Rapsey <rap...@gmail.com> wrote:

> There is also another trick you can use. Pick a number N and assign every
> app server you have a number between 1 and N. The number assigned to the
> server is your starting ID; then increment by N every time you generate an
> ID from that server. The only limitation is that you have to know in advance
> how big N can get (it has to be at least as large as the number of your app
> servers).
>
>
> Sergej
>

Yes, this seems like a nice alternative; the only limitation (as you
mentioned) is having to fix the value of N in advance, which may become a
problem later on. I am wary of imposing any design limitation, so I will
think more about it.
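
For anyone following along, here is a rough sketch of that scheme in plain
Python (the names are mine; a real app server would also persist the counter
and make it thread-safe):

import itertools

def interleaved_ids(server_slot, max_servers):
    # server_slot is this app server's fixed number in 1..N (N = max_servers).
    # IDs come out as slot, slot + N, slot + 2N, ... so two servers can never
    # collide, as long as N never drops below the number of servers.
    return itertools.count(server_slot, max_servers)

gen = interleaved_ids(3, 1024)          # server #3, with up to 1024 servers
print(next(gen), next(gen), next(gen))  # 3 1027 2051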

Thanks,
Shashwat



>
>
>> On Tue, Oct 23, 2012 at 6:00 AM, Shashwat Srivastava <dark...@gmail.com> wrote:
>
>> Thank you Guido. Yes, a secondary index on the date would be immensely
>> helpful for navigating the conversation by date; I will do this. An
>> incremental message ID would also let me fetch the last 50 messages and so
>> forth, and I will use another DB to generate it. Thanks for all your help.
>>
>> Shashwat
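
For what it's worth, with such a sequence the "last 50 messages" page is just
a descending run of keys; a tiny sketch (the <room>:<seq> key format is only
an assumption on my part):

def last_n_keys(room, latest_seq, n=50):
    # Keys of the newest n messages, newest first; each one is then a plain
    # Riak GET (or a multi-get if the client supports it).
    low = max(1, latest_seq - n + 1)
    return ['%s:%d' % (room, s) for s in range(latest_seq, low - 1, -1)]

print(last_n_keys('room_1234', 6070)[:3])  # ['room_1234:6070', 'room_1234:6069', 'room_1234:6068']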
>>
>>
>> On Mon, Oct 22, 2012 at 2:46 PM, Guido Medina <guido.med...@temetra.com> wrote:
>>
>>> Don't over-engineer it with technology: you could use Riak with a simple
>>> 2i index (an integer index of the form YYYYMMDD for the message date, so
>>> you can search backward day by day), and for the message sequence or
>>> identifier you could use either any SQL database sequence or a UUID
>>> generator.
>>>
>>> HTH,
>>>
>>> Guido.
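
A minimal sketch of Guido's 2i suggestion, assuming the Python riak client
(exact method names can differ between client versions, and the bucket and
index names below are made up):

import datetime
import riak

client = riak.RiakClient()                 # defaults to localhost
msgs = client.bucket('chat_room_1234')     # made-up bucket name

def store_message(key, payload, when):
    obj = msgs.new(key, data=payload)
    # Integer 2i index of the form YYYYMMDD (e.g. 20121023), so one day's
    # messages can be listed and older days reached by stepping the date back.
    obj.add_index('msgdate_int', int(when.strftime('%Y%m%d')))
    obj.store()

store_message('some-key', {'text': 'hello'}, datetime.datetime.utcnow())

# All keys written on 23 Oct 2012; decrement the date to page backward.
day_keys = msgs.get_index('msgdate_int', 20121023)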
>>>
>>>
>>> On 22/10/12 10:04, Rapsey wrote:
>>>
>>>
>>> On Mon, Oct 22, 2012 at 10:29 AM, Shashwat Srivastava <dark...@gmail.com> wrote:
>>>
>>>>
>>>> Now, each bucket would hold the conversation between two users or of a
>>>> room on a site. The message rate for some rooms is very high, around
>>>> 20,000 - 30,000 messages per hour. We have observed that users usually
>>>> don't access conversations older than one week, so even if a bucket holds
>>>> three years of conversation, users would mostly access only the most
>>>> recent week or month. Can Riak handle this easily? Also, would Riak use
>>>> RAM wisely in this scenario? Would it keep only the keys and indexes
>>>> corresponding to recent messages in each bucket in RAM?
>>>>
>>>>
>>> The LevelDB backend should.
>>>
>>>
>>>> Finally, what is the best approach for creating keys in a bucket?
>>>> Earlier I was planning to use a timestamp (in milliseconds), but in a room
>>>> there can be multiple messages at the same instant. As I understand it, I
>>>> cannot have a unique incremental message ID per bucket (since Riak accepts
>>>> writes on all nodes in a cluster, such a counter is not guaranteed to be
>>>> consistent). Please correct me if I am wrong. Another way could be to let
>>>> Riak generate the key and use the timestamp as a secondary index, but this
>>>> seems like a bad design. Also, what would be the best way to achieve
>>>> pagination for this use case?
>>>>
>>>>
>>> You could use Redis for incremental IDs.
>>>
>>>
>>>
>>>  Sergej
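
To make the Redis suggestion concrete, a small sketch with redis-py (the key
name is made up):

import redis

r = redis.StrictRedis(host='localhost', port=6379)

def next_message_id(room):
    # INCR is atomic on the Redis server, so every app server gets a unique,
    # monotonically increasing sequence number for the room.
    return r.incr('msgseq:%s' % room)

seq = next_message_id('room_1234')   # 1, 2, 3, ... across all app servers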
>>>
>>>
_______________________________________________
riak-users mailing list
riak-users@lists.basho.com
http://lists.basho.com/mailman/listinfo/riak-users_lists.basho.com
