Here's a (bad) mockup of the solution:

https://cloudup.com/cOMhcPry38U

I hope I've made myself a little clearer this time :)

Regards


On 30 January 2014 23:04, Edgar Veiga <edgarmve...@gmail.com> wrote:

> Yes Eric, I understood :)
>
>
> On 30 January 2014 23:00, Eric Redmond <eredm...@basho.com> wrote:
>
>> For clarity, I was responding to Jason's assertion that Riak shouldn't be
>> used as a cache, not to your specific issue, Edgar.
>>
>> Eric
>>
>> On Jan 30, 2014, at 2:54 PM, Edgar Veiga <edgarmve...@gmail.com> wrote:
>>
>> Hi!
>>
>> I think there's some confusion here... I'm not using
>> Riak for caching purposes; it's exactly the opposite! Riak is my final
>> persistence system: I need to store the documents in a strong, secure,
>> available, and consistent place. That's Riak.
>>
>> As I've said before, think of it as an analogy to the Linux file
>> cache: the node.js workers provide that in-memory cache, the PHP
>> applications write to and read from them, and when something is dirty it
>> gets persisted to Riak...
>>
>> Best regards
>>
>>
>>
>>
>> On 30 January 2014 22:26, Eric Redmond <eredm...@basho.com> wrote:
>>
>>> Actually, people use Riak as a distributed cache all the time. In fact,
>>> many customers use it exclusively as a caching system. Not all backends write
>>> to disk: Riak supports a main-memory backend[1], complete with size limits
>>> and TTL.
>>>
>>> Eric
>>>
>>> [1]: http://docs.basho.com/riak/latest/ops/advanced/backends/memory/
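The memory backend mentioned above is enabled through the storage_backend setting. A minimal app.config sketch for a pre-2.0 Riak; the max_memory and ttl values below are purely illustrative:

```erlang
%% app.config excerpt: use the in-memory backend instead of bitcask/leveldb.
{riak_kv, [
    {storage_backend, riak_kv_memory_backend},
    {memory_backend, [
        {max_memory, 4096},  %% per-vnode memory cap, in megabytes (illustrative)
        {ttl, 86400}         %% object time-to-live, in seconds (here, one day)
    ]}
]}
```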
>>>
>>>
>>> On Jan 30, 2014, at 1:48 PM, Jason Campbell <xia...@xiaclo.net> wrote:
>>>
>>> I'm not sure Riak is the best fit for this. Riak is great for
>>> applications where it is the source of data, and it has very strong
>>> consistency when used in this way. You are using it as a cache, where Riak
>>> will be significantly slower than other cache solutions, especially since
>>> you say that each worker has a set of documents it is responsible for.
>>> Something like a local memcached or Redis would likely suit this use case
>>> just as well, but do it much faster and with less overhead.
>>>
>>> Riak will guarantee 3 writes to disk (by default), whereas something like
>>> memcached or Redis will stay in memory and, if local, won't have network
>>> latency either. In the worst case, where a node goes offline, the real data
>>> can be pulled from the backend again, so it isn't a big deal. It would also
>>> simplify your application, because node.js could always request from the
>>> cache without worrying about speed, instead of maintaining its own cache layer.
>>>
>>> I'm as happy as the next person on this list to see Riak being used for
>>> all sorts of purposes, but I believe in the right tool for the right job.
>>> Unless there is something I don't understand, Riak is probably the wrong
>>> tool here. It will work, but other software will work much better.
>>>
>>> I hope this helps,
>>> Jason Campbell
>>>
>>> ----- Original Message -----
>>> From: "Edgar Veiga" <edgarmve...@gmail.com>
>>> To: "Russell Brown" <russell.br...@me.com>
>>> Cc: "riak-users" <riak-users@lists.basho.com>
>>> Sent: Friday, 31 January, 2014 3:20:42 AM
>>> Subject: Re: last_write_wins
>>>
>>>
>>>
>>> I'll try to explain this as best I can; although it's a simple
>>> architecture, I'm not describing it in my native language :)
>>>
>>>
>>> I have a set of node.js workers (64 for now) that serve as a
>>> cache/middleware layer for a dozen PHP applications. Each worker deals
>>> with its own set of documents (it's not a distributed cache system). Each
>>> worker updates the documents in memory and tags them as dirty (just like
>>> the OS file cache), and from time to time (for now, on a 5-second interval)
>>> a persister module handles the persistence of those dirty documents
>>> to Riak.
>>> If a document isn't in memory, it is fetched from Riak.
>>>
>>>
>>> If you want document X, you need to ask the worker responsible
>>> for it. Two different workers never deal with the same document.
>>> That way we can guarantee that there will be no concurrent writes to
>>> Riak.
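The worker-side flow described above can be sketched roughly as follows. All names are hypothetical, and the actual Riak write is injected as a function so the cache logic stays independent of any particular client library:

```javascript
// Sketch of the per-worker cache: documents live in memory, writes mark
// them dirty, and a persister flushes the dirty set on a fixed interval.
class DocCache {
  constructor(persistFn, flushIntervalMs = 5000) {
    this.docs = new Map();       // key -> document (the in-memory cache)
    this.dirty = new Set();      // keys changed since the last flush
    this.persistFn = persistFn;  // e.g. a Riak client PUT, injected
    this.timer = setInterval(() => this.flush(), flushIntervalMs);
  }

  update(key, doc) {
    this.docs.set(key, doc);
    this.dirty.add(key);         // tag as dirty, like the OS file cache
  }

  async flush() {
    const keys = [...this.dirty];
    this.dirty.clear();
    // Only one worker ever owns a given key, so these PUTs cannot race
    // with writes from another worker.
    await Promise.all(keys.map((k) => this.persistFn(k, this.docs.get(k))));
  }

  stop() {
    clearInterval(this.timer);
  }
}
```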
>>>
>>>
>>> Best Regards,
>>>
>>> On 30 January 2014 10:46, Russell Brown < russell.br...@me.com > wrote:
>>>
>>>
>>>
>>>
>>>
>>>
>>>
>>> On 30 Jan 2014, at 10:37, Edgar Veiga < edgarmve...@gmail.com > wrote:
>>>
>>>
>>>
>>> Also,
>>>
>>>
>>> Using last_write_wins = true, do I still need to send the vclock with
>>> every PUT request? The official documentation says that Riak will look
>>> only at the timestamps of the requests.
>>>
>>>
>>> OK, from what you've said it sounds like you always want to
>>> replace what is at a key with the new information you are putting. If that
>>> is the case, then you have the perfect use case for LWW=true, and indeed
>>> you do not need to pass a vclock with your put request. It also sounds like
>>> there is no need for you to fetch-before-put, since that is only done to
>>> get context / resolve siblings. Curious about your use case, if you can
>>> share more.
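A blind put under LWW=true can be sketched against Riak's HTTP interface. buildPutRequest is a hypothetical helper that only assembles the request options; wiring it to an actual HTTP client (and your cluster's host/port) is left out:

```javascript
// With last_write_wins=true, a blind PUT needs no X-Riak-Vclock header;
// the header is only sent when updating from a previously fetched object.
function buildPutRequest(bucket, key, body, vclock) {
  const headers = { 'Content-Type': 'application/json' };
  if (vclock) {
    headers['X-Riak-Vclock'] = vclock; // only needed to carry read context
  }
  return {
    method: 'PUT',
    path: `/buckets/${encodeURIComponent(bucket)}/keys/${encodeURIComponent(key)}`,
    headers,
    body: JSON.stringify(body),
  };
}

// With LWW=true the caller can skip the fetch-before-put entirely:
const blindPut = buildPutRequest('docs', 'user:42', { name: 'Edgar' });
// blindPut.headers carries no X-Riak-Vclock; Riak resolves by timestamp.
```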
>>>
>>>
>>> Cheers
>>>
>>>
>>> Russell
>>>
>>> Best regards,
>>>
>>>
>>>
>>> On 29 January 2014 10:29, Edgar Veiga < edgarmve...@gmail.com > wrote:
>>>
>>>
>>>
>>> Hi Russel,
>>>
>>>
>>> No, it doesn't depend. It's always a new value.
>>>
>>>
>>> Best regards
>>>
>>>
>>>
>>>
>>>
>>> On 29 January 2014 10:10, Russell Brown < russell.br...@me.com > wrote:
>>>
>>>
>>>
>>>
>>>
>>>
>>>
>>> On 29 Jan 2014, at 09:57, Edgar Veiga < edgarmve...@gmail.com > wrote:
>>>
>>>
>>>
>>> tl;dr
>>>
>>>
>>> If I guarantee that the same key is only written once per 5-second
>>> interval, is last_write_wins=true worthwhile?
>>>
>>> It depends. Does the value you write depend in any way on the value you
>>> read, or are you always putting a totally new value that replaces what is
>>> in Riak (regardless of what is in Riak)?
>>>
>>> On 27 January 2014 23:25, Edgar Veiga < edgarmve...@gmail.com > wrote:
>>>
>>>
>>>
>>> Hi there everyone!
>>>
>>>
>>> I would like to know if my current application is a good use case for
>>> setting last_write_wins to true.
>>>
>>>
>>> Basically, I have a cluster of node.js workers reading from and writing
>>> to Riak. Each node.js worker is responsible for a set of keys, so I can
>>> guarantee some kind of non-distributed cache...
>>> The key detail here is that the write operation does not run every time
>>> an object changes, but every 5 seconds, in a "batch insert/update" style.
>>> This guarantees that the same object cannot be written to Riak
>>> concurrently, not even within the same second; there is always a 5-second
>>> window between successive inserts/updates.
>>>
>>>
>>> That said, would it pay off for me to set last_write_wins to true? I've
>>> been facing some massive write delays under high load, and it would be
>>> nice to have some way to tune Riak.
>>>
>>>
>>> Thanks a lot and keep up the good work!
>>>
>>>
>>> _______________________________________________
>>> riak-users mailing list
>>> riak-users@lists.basho.com
>>> http://lists.basho.com/mailman/listinfo/riak-users_lists.basho.com
>>>
>>>
>>
>>
>>
>>
>
