Hi Kota,
Our production nodes are Riak CS 1.5 and Riak 1.4.x -- they're running
haproxy 1.4.x, and it's all been happy for some time now.
Testing the new nodes, still the same haproxy version, but with Riak CS 2.0.1
and Riak 2.0.5.
Very confused as to why the connections are being dropped when going
throu
Hi Kota,
I'm installing Riak CS upon Ubuntu 14.04 (trusty), and was doing this by
following the instructions to add the packagecloud.io Apt repository.
However that repository contains stanchion 1.5 rather than 2.0. (It does,
however, contain Riak CS 2.0.1)
Cheers
Toby
On Fri, 5 Jun 2015 at 13:0
Multi-backend bitcask with auto-expire for sure sounds like the most
future-proof solution.
For our part, we tend to delete keys in map-reduce jobs, since we have more
complex logic for determining when it is time to delete objects. In our
current setup, it takes about 3 minutes to go through 1.5
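The "more complex logic" approach above boils down to a predicate evaluated per object in the map phase, with surviving keys fed to a delete step. A minimal Python sketch of that selection logic — the `created_at` and `pinned` fields are hypothetical stand-ins, not anything from the original post:

```python
import time

def is_expired(obj_meta, ttl_seconds, now=None):
    """Decide whether an object should be deleted.

    obj_meta is assumed to carry a 'created_at' Unix timestamp and an
    optional 'pinned' flag -- both hypothetical fields, standing in for
    whatever per-object rules the real map phase applies.
    """
    now = now if now is not None else time.time()
    if obj_meta.get("pinned"):
        return False  # extra rules can veto deletion
    return (now - obj_meta["created_at"]) > ttl_seconds

# Keys whose objects pass the predicate would be handed to a delete phase.
batch = [
    {"key": "a", "created_at": 0, "pinned": False},
    {"key": "b", "created_at": 0, "pinned": True},
]
expired = [o["key"] for o in batch if is_expired(o, 3600, now=7200)]
```

The point of doing this in a map phase rather than with a backend TTL is exactly that the predicate can consult the object itself, not just its age.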
Toby,
As PB connection management hasn't changed between CS 1.5 and
2.0, I think it should work. With which version does the load balancing
work stably? It depends on why the connection was cut,
but I would recommend restarting just the CS node and recreating the
connection pool.
O
Toby, thank you for reporting. However, I could not figure out which
part of the docs is wrong and leads to Stanchion 1.5. Could you
provide more details?
Technically, it is correct to use Stanchion 2.0 with Riak CS 2.0.1.
What's more, the difference between Stanchion 2.0 and 1.5 is very small; it's
j
Well, I’ve been looking to make my theoretical Erlang knowledge less
theoretical and somewhat more practical, so I wouldn’t say no. And this
approach is pretty much what we thought we’d use originally.
Since then it has come to light that our product folks have given us permission
to just delet
We've got an expiry worker rig I can likely pass over offline. It's not
overly clever.
The basic idea: stream a feed of keys into a pool of workers that spin off
delete calls.
We feed this based on continuous searches of an expiry TTL field in all
keys.
It'd likely be better to run this from within the E
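The fan-out pattern described above (a key feed in front of a pool of delete workers) can be sketched in a few lines of Python. This is only an illustration of the pattern, not the poster's actual rig; `delete_key` is a stand-in for a real Riak client delete call:

```python
import queue
import threading

def expiry_pool(keys, delete_key, workers=4):
    """Stream keys into a pool of workers that issue delete calls."""
    q = queue.Queue()

    def worker():
        while True:
            key = q.get()
            if key is None:  # sentinel: shut this worker down
                q.task_done()
                return
            try:
                delete_key(key)
            except Exception:
                pass  # a real rig would log/retry failed deletes
            finally:
                q.task_done()

    threads = [threading.Thread(target=worker) for _ in range(workers)]
    for t in threads:
        t.start()
    for key in keys:       # e.g. fed from a search on a TTL field
        q.put(key)
    for _ in threads:      # one sentinel per worker
        q.put(None)
    q.join()
    for t in threads:
        t.join()

deleted = []
expiry_pool(["k1", "k2", "k3"], deleted.append, workers=2)
```

The queue decouples the key feed from delete throughput, so the search side never blocks on slow deletes.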
Hi Sinh,
Just to double check, by '/solr.war/WEB-INF/lib', do you mean '/yokozuna-*/priv/solr/solr-webapp/webapp/WEB-INF/lib'? Because that's
where the jts file should go.
On Thu, Jun 4, 2015 at 6:49 AM, sinh nguyen wrote:
> Hello,
>
> I am trying to retrieve all locations within a provided pol
But then you need to use bitcask... And you have one TTL per backend, I believe.
If that works for you...
-Alexander
@siculars
http://siculars.posthaven.com
Sent from my iRotaryPhone
> On Jun 4, 2015, at 12:24, Bryce Verdier wrote:
>
> I realize I'm kind of late to this party, but what about
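The one-TTL-per-backend constraint Alexander mentions follows from expiry being a backend-level bitcask setting: to get several TTLs you run several bitcask instances under the multi-backend, one per TTL. A sketch in app.config form — the backend names here are made up, and the option names should be checked against the multi-backend docs for your Riak version:

```erlang
%% app.config sketch (riak_kv section)
{riak_kv, [
    {storage_backend, riak_kv_multi_backend},
    {multi_backend_default, <<"bitcask_no_expiry">>},
    {multi_backend, [
        {<<"bitcask_no_expiry">>, riak_kv_bitcask_backend, []},
        {<<"bitcask_1d">>, riak_kv_bitcask_backend, [
            {expiry_secs, 86400}   %% objects expire after one day
        ]}
    ]}
]}
```

Buckets are then pointed at a backend (and thus a TTL) via their bucket properties.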
Matt - I appreciate the follow up. I haven't seen my locking problem again
since we performed our updates. We are stable in production with rapidly
growing data (perhaps you have read about our new Hound app, which was just
announced and released in private beta on Android this week!)
Unfortuna
DOH! I should have known the answer wouldn't be that simple.
Best of luck.
Warm regards,
Bryce
On Thu, 4 Jun 2015 13:46:28 -0400
Peter Herndon wrote:
> Thanks, Bryce, but auto-expire relies on bitcask being the back-end,
> and we’re on leveldb.
>
> > On Jun 4, 2015, at 1:24 PM, Bryce
Mmm, I think we’re looking at deleting about 50 million keys per day. That’s a
completely back-of-envelope estimate; I haven’t done the actual math yet.
—Peter
> On Jun 4, 2015, at 3:28 AM, Daniel Abrahamsson
> wrote:
>
> Hi Peter,
>
> What is "large-scale" in your case? How many keys do you
Thanks, Bryce, but auto-expire relies on bitcask being the back-end, and we’re
on leveldb.
> On Jun 4, 2015, at 1:24 PM, Bryce Verdier wrote:
>
> I realize I'm kind of late to this party, but what about
> using the auto-expire feature and letting riak do the deletion of data
> for you?
>
> The
I realize I'm kind of late to this party, but what about
using the auto-expire feature and letting riak do the deletion of data
for you?
The link is for an older version, but I know the
functionality still exists in riak2.
http://docs.basho.com/riak/latest/community/faqs/developing/#how-can-i-auto
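For reference, the auto-expire feature Bryce describes is a one-line bitcask setting. A hedged sketch of what this looks like in Riak 2.x's riak.conf (the key name should be verified against the docs for your exact version):

```
## riak.conf sketch (Riak 2.x cuttlefish syntax); the older app.config
## equivalent is {bitcask, [{expiry_secs, 86400}]}.
storage_backend = bitcask
bitcask.expiry = 1d
```

As the following replies note, this only applies when bitcask is the backend, which is what rules it out for the leveldb users in this thread.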
To follow up:
Since we use chef (configuration management), the riak configs are the same
across all our riak nodes (except for stuff like hostnames/IPs, etc.).
I ran riak_kv:repair and it looks like it fixed the problem on node
004, but then a _different_ node (002) started to throw a bunch o
Hello,
I am trying to retrieve all locations within a provided polygon, but I keep
getting this error:
http://localhost:8098/search/query/RiakPointTest?wt=json&indent=true&q=*:*&fq=Point_geo:"IsWithin(POLYGON((9.472992
76.540817, 9.441328 76.523651, 9.433708 76.555065, 9.458092 76.572403,
9.472992 7
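One common cause of errors with a hand-built query like the one above is that the spaces, quotes, and parentheses inside the `fq` value are not URL-encoded. A Python sketch of building the fully encoded URL — the polygon is truncated in the original message, so the coordinates below are partly placeholders (the ring is closed by repeating the first point, as WKT requires):

```python
from urllib.parse import urlencode

# Placeholder reconstruction of the polygon from the truncated query.
polygon = ('POLYGON((9.472992 76.540817, 9.441328 76.523651, '
           '9.433708 76.555065, 9.458092 76.572403, '
           '9.472992 76.540817))')
params = {
    "wt": "json",
    "indent": "true",
    "q": "*:*",
    "fq": f'Point_geo:"IsWithin({polygon})"',
}
url = ("http://localhost:8098/search/query/RiakPointTest?"
       + urlencode(params))
# quotes and parentheses are percent-encoded; spaces become '+'
```

Solr decodes both `+` and `%20` as spaces in GET query strings, so either form of encoding works.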
Hi Peter,
What is "large-scale" in your case? How many keys do you need to delete,
and how often?
//Daniel
On Wed, Jun 3, 2015 at 9:54 PM, Peter Herndon wrote:
> Interesting thought. It might work for us, it might not, I’ll have to
> check with our CTO to see whether the expense makes sense un