Map/reduce aside, in the general case, I do time series in Riak with
deterministic materialized keys at specific time granularities, i.e.
/devices/deviceID_MMDDHHMM[SS]
So my device or app stack will drop data into a one-second-resolution key (if
second resolution is needed) in Riak memor
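The key scheme above can be sketched against Riak's default HTTP API; the bucket name, device ID, and sample payload here are hypothetical:

```shell
# Build a one-second-resolution key: deviceID_MMDDHHMMSS (UTC), matching
# the /devices/deviceID_MMDDHHMM[SS] pattern described above.
STAMP=$(date -u +%m%d%H%M%S)
KEY="dev42_${STAMP}"
echo "$KEY"

# Write a sample under that deterministic key (host/port are Riak defaults):
curl -s -X PUT -H 'Content-Type: application/json' \
  -d '{"reading": 17.4}' \
  "http://localhost:8098/buckets/devices/keys/${KEY}" || true
```

Because the key is derived purely from the device ID and timestamp, readers can reconstruct it on their own, with no key listing or secondary index needed.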
The writes to the bucket with the post-commit hook would be super
infrequent... maybe once every 10-20 minutes. The global rate of writes
to other buckets, though, would be pretty high. The infrequent nature
of the writes to that bucket is what led me to think this would not be an
issue. But
> On Feb 19, 2015, at 8:01 PM, Fred Grim wrote:
>
> Given a specific data blob I want to move a time series into a search
> bucket. So first I have to build out the time series and then move it
> over. Maybe I should use the rabbitmq post commit hook to send the data
> somewhere else for the query to be run or something like that?
Given a specific data blob I want to move a time series into a search
bucket. So first I have to build out the time series and then move it
over. Maybe I should use the rabbitmq post commit hook to send the data
somewhere else for the query to be run or something like that?
Fred
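For the forwarding idea above: a post-commit hook is registered as an Erlang {module, function} pair in the bucket's properties. `my_hooks:forward_to_rabbitmq` below is a hypothetical module you would have to deploy on every node; the bucket name is also made up.

```shell
# Register a (hypothetical) post-commit hook on the 'timeseries' bucket.
curl -s -X PUT -H 'Content-Type: application/json' \
  -d '{"props":{"postcommit":[{"mod":"my_hooks","fun":"forward_to_rabbitmq"}]}}' \
  "http://localhost:8098/buckets/timeseries/props" || true

# Confirm the hook is set:
curl -s "http://localhost:8098/buckets/timeseries/props" || true
```

The hook function itself runs inside the Riak VM, so keeping it to a fast, fire-and-forget publish (rather than a mapreduce) avoids blocking the write path.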
On 2/19/15, 4:55 PM
> On Feb 19, 2015, at 5:48 PM, Fred Grim wrote:
>
> Does anyone have example code doing a map reduce inside a post commit hook?
> Can I use the local_client for this?
While this is most likely possible, we’d advise against it: once write load
increases even slightly, you could easily overload the cluster.
Does anyone have example code doing a map reduce inside a post commit hook?
Can I use the local_client for this?
Fred
___
riak-users mailing list
riak-users@lists.basho.com
http://lists.basho.com/mailman/listinfo/riak-users_lists.basho.com
Hi Luke,
Your prediction was spot on.
Today, about 2 weeks after setting the parameters from my earlier email, we
saw riak nodes start to run out of memory and fail.
I have to say, it was a great 2 weeks though.
I've adjusted the parameters again, this time paying attention to the linked
spreadsheet.
Ok. I’m guessing that you’re running a default cluster on one node.
Did you attempt the read-repair I suggested earlier in the thread?
Also, you should try and get each node’s value for the bucket/key and see if
they’re consistent. You can use
https://github.com/basho/riak_kv/blob/d17409fdb934
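As a coarse first check of the suggestion above, you can fetch the same bucket/key from each node's HTTP port and compare the bodies; hostnames, bucket, and key below are placeholders. Note this goes through the normal quorum read path, so it is only a rough consistency check; true per-vnode inspection needs the riak_kv console approach the link describes.

```shell
NODES="riak1 riak2 riak3"   # substitute your cluster's hostnames
for node in $NODES; do
  echo "== ${node} =="
  # Differing bodies (or siblings) across nodes suggest the replicas
  # are inconsistent and read-repair hasn't converged them yet.
  curl -s "http://${node}:8098/buckets/mybucket/keys/mykey" || true
  echo
done
```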
I absolutely agree. That is why we've changed the vm.swappiness setting to 1,
so it swaps only when absolutely necessary. I think we underestimated how
much swap may be needed, but I also don't understand why it is so hungry for
memory.
Is there a particular activity, like 2i queries, AAE or levelDB compaction
Daniel,
Our tuning guides specifically recommend disabling swap. If your machine is
so memory-starved that it needs to use swap during normal Riak operation,
having it on is only going to make things worse.
http://docs.basho.com/riak/latest/ops/tuning/linux/#Storage-and-File-System-Tuning
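Following that guide, disabling swap entirely (rather than swappiness=1) would look roughly like this; a sketch, assuming a sysctl.conf-managed host:

```shell
# Turn off all swap devices now...
sudo swapoff -a
# ...and keep the kernel from preferring swap across reboots.
echo 'vm.swappiness = 0' | sudo tee -a /etc/sysctl.conf
sudo sysctl -p
# Also comment out any swap entries in /etc/fstab so they don't
# come back at boot.
```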
On Thu
We are using levelDB as backend without any tuning.
Also, we are aware that performance may suffer due to potentially storing
some of the copies (n=3) twice on the same server. We are not so much
concerned about the latencies caused by that.
What is worrying, though, is the almost unbounded growth of swap used, wh
I can't recall right now, but it's safe to assume I did delete it.
As for nodes, I have added one, nothing more.
Kind regards,
Cezary
2015-02-19 18:01 GMT+01:00 Zeeshan Lakhani :
> So, AAE is running.
>
> Again, did you delete the single object at some point? Trying to see if
> this is related to you hitting a tombstone on queries.
So, AAE is running.
Again, did you delete the single object at some point? Trying to see if this is
related to you hitting a tombstone on queries. Also, when you added the object,
did you add it and later leave (drop) a node from your cluster?
Thanks.
Zeeshan Lakhani
programmer | software engineer
By './data/yz_anti_entropy' do you mean '/var/lib/riak/yz_anti_entropy' by
default, or './data/yz_anti_entropy' inside each index's directory? If the
former, it's there; if the latter, not. riak-admin search aae-status says
there's been some AAE activity in the past few hours.
Also I called yz_entro
Thanks Cezary.
Have you deleted this object at some point in your runs? Please make sure AAE
is running by checking search’s AAE status, `riak-admin search aae-status`, and
that data exists in the correct directory, `./data/yz_anti_entropy`
(http://docs.basho.com/riak/latest/ops/advanced/confi
I have the exact same issue with regular http search queries, so I guess
I'll just describe that part.
I've got a bucket of maps of sets, two of which are entityId_set and
timestamps_set. Its search index is called 'job', and only this bucket
is indexed.
When I run
curl
"localhost:8098/sear
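The truncated command above presumably targets the Solr search endpoint; a full query of that shape (the field value here is made up) would look like:

```shell
# Query the 'job' index for one entity's documents, JSON output.
curl -s "http://localhost:8098/search/query/job?wt=json&q=entityId_set:abc123" || true
```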
Daniel, you may be aware of this, but a 3-node Riak cluster is not
recommended and may be playing a minor role in your resource problems.
Every request will hit every server (except for some requests that are
being made twice against a single server, making I/O that much worse), and
depending on yo
Hello Cezary,
Firstly, are you able to retrieve your search result consistently when not
doing a mapreduce job?
To better help out, can you send a gist of the mapreduce code you’re running?
Thanks.
> On Feb 18, 2015, at 9:13 PM, Cezary Kosko wrote:
>
> Hi,
>
> I've got a search inde
> On Feb 19, 2015, at 9:05 AM, Daniel Iwan wrote:
>
> Hi
> On a 3-node cluster (Ubuntu 12.04, 8GB RAM per node), all nodes show 6GB
> taken by beam.smp and 2GB by our process.
> beam.smp started swapping and is currently using 23GB of swap space.
> vm.swappiness is set to 1
> We are using ring size 128. /var/lib/ria
> On Feb 19, 2015, at 9:47 AM, Daniel Iwan wrote:
>
> My ideas:
> 1. Rewrite (read-write) the object with new values for all indexes.
> 2. Enable siblings on the bucket, write an empty object with an update for
> your index; that will create a sibling.
> Then whenever you read the object, do a merge of object+indexes
My ideas:
1. Rewrite (read-write) the object with new values for all indexes.
2. Enable siblings on the bucket, write an empty object with an update for
your index; that will create a sibling.
Then whenever you read the object, do a merge of object+indexes. This may be
more appropriate if you have big objects and want t
> On Feb 19, 2015, at 9:06 AM, Jose G. Quenum
> wrote:
>
> Hi all,
> I'm getting into a scenario where I'd like to use a secondary index on a
> particular field in my riak object/data. However the value of the field might
> change albeit very very rarely. So I was thinking I could just reindex
> using the new value.
Hi all,
I'm getting into a scenario where I'd like to use a secondary index on a
particular field in my riak object/data. However, the value of the field
might change, albeit very very rarely. So I was thinking I could just
reindex using the new value. What's the right syntax to delete and add a n
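On the HTTP interface, an object's 2i entries are supplied as headers on each write and replaced wholesale, so "delete old, add new" is just a read-modify-write carrying only the new header. Bucket, key, and field names below are hypothetical:

```shell
# Read the current object body...
BODY=$(curl -s "http://localhost:8098/buckets/users/keys/u1")
# ...and rewrite it with only the new index value. Any index entries
# omitted from this PUT (e.g. the old email_bin value) are dropped.
curl -s -X PUT \
  -H 'Content-Type: application/json' \
  -H 'x-riak-index-email_bin: new@example.com' \
  -d "$BODY" \
  "http://localhost:8098/buckets/users/keys/u1" || true
```

In practice you should also echo back the X-Riak-Vclock header from the read on the subsequent PUT, so the rewrite doesn't create siblings.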
Hi
On a 3-node cluster (Ubuntu 12.04, 8GB RAM per node), all nodes show 6GB
taken by beam.smp and 2GB by our process.
beam.smp started swapping and is currently using 23GB of swap space.
vm.swappiness is set to 1.
We are using ring size 128. /var/lib/riak is 37GB in size, 11GB of which is
used by anti-entropy
Is there a