The writes to the bucket with the post-commit hook would be super
infrequent...maybe once every 10 ~ 20 minutes. The global rate of writes
to other buckets, though, would be pretty high. The infrequent nature
of the writes to that bucket is what led me to think this would not be an
issue. But each of those infrequent writes would kick off a query that
would have data.
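One way to sketch the pre-bucketing idea (rather than running a query on
every hook firing) is to derive the key from a time window at write time,
so a periodic job can fetch a whole window directly. This is just an
illustration, not anything from the Riak API: `window_key`, the bucket
name "timeseries", and the 20-minute window size are all assumptions
picked to match the write cadence described above.

```python
import time

# Assumed window size: 20 minutes, matching the rough write cadence
# mentioned above. Purely illustrative.
WINDOW_SECS = 20 * 60

def window_key(series_id, ts=None):
    """Return a hypothetical (bucket, key) pair for a sample,
    grouped by time window."""
    ts = int(ts if ts is not None else time.time())
    # Floor the timestamp to the start of its window, so every
    # sample in the same 20-minute window shares one key.
    window_start = ts - (ts % WINDOW_SECS)
    return ("timeseries", "%s:%d" % (series_id, window_start))

# A consumer that moves a finished window into the search bucket
# then only needs a single GET per series per window, instead of a
# MapReduce job triggered on each write.
```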

F 

On 2/19/15, 5:08 PM, "Christopher Meiklejohn" <cmeiklej...@basho.com>
wrote:

>
>> On Feb 19, 2015, at 8:01 PM, Fred Grim <fg...@vmware.com> wrote:
>> 
>> Given a specific data blob I want to move a time series into a search
>> bucket.  So first I have to build out the time series and then move it
>> over.  Maybe I should use the rabbitmq post commit hook to send the data
>> somewhere else for the query to be run or something like that?
>
>Given your scenario, it seems that a portion of these writes would have
>MapReduce jobs that resulted in nothing happening -- I assume you only
>bucket the series every so many writes or time period, correct?
>
>I'd highly recommend doing this externally, or identifying a method for
>pre-bucketing the data given the rate of ingestion.
>
>- Chris
>
>Christopher Meiklejohn
>Senior Software Engineer
>Basho Technologies, Inc.
>cmeiklej...@basho.com


_______________________________________________
riak-users mailing list
riak-users@lists.basho.com
http://lists.basho.com/mailman/listinfo/riak-users_lists.basho.com