Map/reduce aside, in the general case, I do time series in Riak with
deterministic materialized keys at specific time granularities, e.g.
/devices/deviceID_MMDDHHMM[SS]
So my device or app stack will drop data into a one-second-resolution key
(if second resolution is needed) in Riak.
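To make the key scheme concrete, here is a minimal sketch of building those deterministic keys from a timestamp. The device ID and helper name are illustrative, not from the original post; only the /devices/deviceID_MMDDHHMM[SS] layout comes from the scheme described above.

```python
from datetime import datetime, timezone

def series_key(device_id: str, ts: datetime, second_resolution: bool = False) -> str:
    """Build a deterministic time-series key of the form
    /devices/deviceID_MMDDHHMM[SS]. The SS suffix is appended only
    when one-second resolution is requested."""
    fmt = "%m%d%H%M%S" if second_resolution else "%m%d%H%M"
    return f"/devices/{device_id}_{ts.strftime(fmt)}"

# Example: keys for Feb 19, 2015 20:01:30 UTC (device ID is hypothetical)
ts = datetime(2015, 2, 19, 20, 1, 30, tzinfo=timezone.utc)
print(series_key("sensor42", ts))        # minute resolution: /devices/sensor42_02192001
print(series_key("sensor42", ts, True))  # second resolution: /devices/sensor42_0219200130
```

Because the key is a pure function of device ID and timestamp, readers can reconstruct it without a lookup, which is what makes the materialized-key approach work.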
The writes to the bucket with the post-commit hook would be very
infrequent, maybe once every 10 to 20 minutes, though the global rate of
writes to other buckets would be pretty high. The infrequent nature of the
writes to that bucket is what led me to think this would not be an issue.
> On Feb 19, 2015, at 8:01 PM, Fred Grim wrote:
>
> Given a specific data blob I want to move a time series into a search
> bucket. So first I have to build out the time series and then move it
> over. Maybe I should use the rabbitmq post commit hook to send the data
> somewhere else for the query to be run or something like that?
Given a specific data blob I want to move a time series into a search
bucket. So first I have to build out the time series and then move it
over. Maybe I should use the rabbitmq post commit hook to send the data
somewhere else for the query to be run or something like that?
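One way to "build out the time series" before moving it into the search bucket is to enumerate the deterministic keys covering the window and fetch each one. A stdlib-only sketch of the enumeration step, assuming the /devices/deviceID_MMDDHHMMSS second-resolution layout from earlier in the thread (device ID and function name are illustrative):

```python
from datetime import datetime, timedelta, timezone

def keys_for_window(device_id: str, start: datetime, end: datetime) -> list:
    """Enumerate the second-resolution keys covering [start, end).
    Each key can then be fetched from Riak and the values assembled
    into the time series to be moved over."""
    keys = []
    t = start.replace(microsecond=0)
    while t < end:
        keys.append(f"/devices/{device_id}_{t.strftime('%m%d%H%M%S')}")
        t += timedelta(seconds=1)
    return keys

start = datetime(2015, 2, 19, 20, 1, 0, tzinfo=timezone.utc)
print(keys_for_window("sensor42", start, start + timedelta(seconds=3)))
```

Since the key set for any window is computable, the heavy assembly work can happen in a separate consumer (e.g. one fed by a message queue) rather than inside the hook itself.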
Fred
On 2/19/15, 4:55 PM,
> On Feb 19, 2015, at 5:48 PM, Fred Grim wrote:
>
> Does anyone have example code doing a map reduce inside a post commit hook?
> Can I use the local_client for this?
While this is most likely possible, we'd advise against it: once write load
increases even slightly, you could easily overload the cluster.
Does anyone have example code doing a map reduce inside a post commit hook?
Can I use the local_client for this?
Fred
___
riak-users mailing list
riak-users@lists.basho.com
http://lists.basho.com/mailman/listinfo/riak-users_lists.basho.com